Re: [DOM4] Short and Efficient DOM Traversal

2013-07-27 Thread Ojan Vafai
An alternate proposal:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-July/040264.html.
What I like about my proposal is that it can be generalized to anything
that returns a SequenceNode and also is just less awkward than the
TreeWalker/NodeIterator interfaces.


On Sat, Jul 27, 2013 at 6:33 PM, François REMY 
francois.remy@outlook.com wrote:

 *TL/DR*: CSS Selectors are the most commonly used way to search the DOM.
 But, until now, you have had to choose between using CSS
 (querySelectorAll) and doing incremental search (createTreeWalker). I think
 we should try to fix that.

 The proposal here would be to accept CSS selectors as a replacement for the
 existing whatToShow flags (which are difficult to use and not entirely
 satisfying), i.e. overloading the createTreeWalker/createNodeIterator
 functions to take a CSS selector in the form of a string as their second
 parameter.


 var tw = document.createTreeWalker(document.body, "ul.menu > li");
 while(tw.nextNode()) {
if(...) break;
...
 }
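
 For comparison, here is a rough sketch of what the same traversal requires
 today, assuming Element.matches (or a vendor-prefixed equivalent) is
 available; the selector string is the one from the example above:

 var filter = {
   acceptNode: function (node) {
     // Re-enter script for every candidate element just to run a selector match.
     return node.matches("ul.menu > li")
         ? NodeFilter.FILTER_ACCEPT
         : NodeFilter.FILTER_SKIP;
   }
 };
 var tw = document.createTreeWalker(document.body, NodeFilter.SHOW_ELEMENT,
                                    filter, false);
 var node;
 while ((node = tw.nextNode())) {
   // ... use node ...
 }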


 *Advantages:*


- It’s much faster than to use a javascript function as a filter that
would call again the browser stack to find out whether a CSS selector match
or not a specific element

    - We do not lose the ability to provide a more complex piece of
    javascript if we really need it, but we reduce the cases where we need
    one.

- It’s less code to write (CSS is more efficient than any piece of JS)

- It allows getting rid of the long named constants nobody likes to use


 In addition, it would open huge optimization opportunities for the browser
 (like skipping siblings that look similar, not entering into descendants if
 it’s known a certain css class is not present among them, reusing cached
 lists of elements matching a selector, or whatever).

 Thoughts?



Re: [editing] nested contenteditable

2013-05-31 Thread Ojan Vafai
On Thu, May 30, 2013 at 3:52 AM, Aryeh Gregor a...@aryeh.name wrote:

 On Tue, May 28, 2013 at 8:27 PM, Travis Leithead
 travis.leith...@microsoft.com wrote:
  As far as I know, there is no actively maintained editing spec at the
  moment. Aryeh’s document is a great start but by no means should it be
  considered complete, or the standard to which you should target an
  implementation… I think we would [currently] prefer to discuss specific
  issues here on the mailing list until a regular editor can be found—so
  thanks for bringing this up!
 
 
 
  By the way, what you suggest sounds reasonable for the behavior.

 Agreed on all points, FWIW.  I'm not totally sure what the most
 sensible behavior for backspacing into a non-editable element is,
 but selecting is a reasonable idea that the spec already recommends
 for tables (although I don't think anyone implements that point last I
 checked).  It makes it clear that the next backspace will delete the
 whole thing, which would otherwise be very surprising -- e.g., suppose
 it were a simple run of text that wasn't visually distinguishable from
 the surrounding editable content.


The main use case I can think of for mixed editability is an image with a
caption. If anyone has other use-cases, that would be helpful in reasoning
about this. http://jsfiddle.net/UAJKe/

Looking at that, I think we should make it so that a selection can never
cross an editing boundary. So, in the image caption example, put your
cursor right before the uneditable div, then:
1. Right arrow should move your cursor into the caption text.
2. Shift+right arrow should select the whole uneditable div.

And delete/backspace can just be defined as extending the selection one
position and then removing the selected DOM. Relatedly, if you are at the
beginning of the caption text and hit backspace, nothing happens because
the backspace had nothing to select (i.e. selections are contained within
their first contentEditable=true ancestor).
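
For concreteness, a minimal sketch of the kind of structure under discussion
(the markup is illustrative; the actual fiddle may differ):

var editor = document.createElement("div");
editor.contentEditable = "true";
editor.innerHTML =
    "editable text before " +
    "<div contenteditable='false'>" +                       // the uneditable div
    "<img src='photo.jpg' alt='photo'>" +
    "<div contenteditable='true'>editable caption text</div>" +
    "</div>" +
    " editable text after";
document.body.appendChild(editor);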

As to the question of whether delete/backspace should select or remove
non-editable elements, I'm not opposed to giving this a try in Chromium and
seeing if users are confused by it, but I'm skeptical it will make sense to
people.


Re: Re: Event.key complaints?

2012-12-03 Thread Ojan Vafai
On Mon, Dec 3, 2012 at 9:48 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

   When were you thinking of kicking off the DOM4 Events process?


 I'd like to have a draft up this week. We may also ask for a FPWD if we're
 ready by the 10th. 


 I want to have D4E rolling so that stuff we chose to punt from D3E has a
 landing pad.


+1

This will make things a lot smoother for D3E I think and allows us to avoid
stalling all DOM Event spec work while we try to finalize D3E.


 

 *From:* gary...@google.com [mailto:gary...@google.com] *On Behalf Of *Gary
 Kacmarcik (?)
 *Sent:* Friday, November 30, 2012 6:09 PM
 *To:* Travis Leithead
 *Cc:* Hallvord Reiar Michaelsen Steen; public-webapps@w3.org

 *Subject:* Re: Re: Event.key complaints?


 On Fri, Nov 30, 2012 at 4:29 PM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  Awesome stuff Gary.

  

 (And I like that we won't need to change the behavior of key or char in
 your proposal—that part made me really nervous, since IE has shipped this
 stuff since 9, and I know our new Win8 app model is using it.)

  

 I'm planning in the short term to start a new DOM4 Events spec, which will
 be the successor to DOM3 Events. I brought this up at TPAC and there were
 no objections. Gary, I'd love your collaboration on specifying your new
 code property in that spec.


 Sounds good to me.  I still have some comments to make on the DOM3 Events
 spec, but I'll still send them out knowing that some of them will need to
 be punted to DOM4.


 When were you thinking of kicking off the DOM4 Events process?


 -Gary




Re: [Workers] Worker same-origin and usage in JS libraries...

2012-11-29 Thread Ojan Vafai
On Thu, Nov 29, 2012 at 4:31 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 17 Jul 2012, Ian Hickson wrote:
 
  My plan is to make it so that cross-origin URLs start cross-origin
  workers. The main unresolved question is how to do this in an opt-in
  manner. The best idea I've come up with so far is having scripts that
  want to opt-in to being run in such a way start with a line line:
 
 // Cross-Origin Worker for: http://example.net
 
  ...or (for multiple domains):
 
 // Cross-Origin Worker for: http://example.com https://example.org
 
  ...or (for any domain):
 
 // Cross-Origin Worker for all origins
 
  ...but that doesn't seem super neat.

 Just as an update, I still plan to do this, but I'm currently waiting for
 browser vendors to more widely implement the existing Worker,
 SharedWorker, MessagePort, and PortCollection features before adding more
 features to this part of the spec. It would also be helpful to have
 confirmation from browser vendors that y'all actually _want_ cross-origin
 workers, before I spec it.


The only difference with cross-origin workers is that they're in a
different execution environment, right? If so, seems like a good thing to
support. I don't see any downside and it doesn't sound especially difficult
to implement.
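
For what it's worth, a sketch of the intended usage as I read the proposal
(URLs and file names are made up, and I'm assuming the listed origin names
the page origins that are allowed to start the worker):

// On a page served from http://example.com:
var worker = new Worker("http://example.net/stats-worker.js");
worker.onmessage = function (e) { console.log("result:", e.data); };
worker.postMessage({cmd: "start"});

// http://example.net/stats-worker.js would opt in with its first line:
// Cross-Origin Worker for: http://example.com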



 --
 Ian Hickson
 http://ln.hixie.ch/
 Things that are impossible just take longer.




Re: [Clipboard API] The before* events

2012-11-01 Thread Ojan Vafai
On Thu, Nov 1, 2012 at 4:02 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  I'm looking at the beforecut, beforecopy and beforepaste events. I
 don't entirely understand their intent, it seems even more obscure than I
 expected..


 I’m not sure that the use cases these events were originally designed
 for (which have been obscured by time) are at all relevant to site content
 any more. The use case of hiding the cut/copy/paste menu options can be
 fulfilled by replacing the context menu with a custom one if desired.


You don't want to disable the other items in the context menu though. This
also doesn't solve disabling cut/copy/paste in non-context menus, e.g.
Chrome has these in the Chrome menu.


 


 *From:* o...@google.com [mailto:o...@google.com] *On Behalf Of *Ojan Vafai
 *Sent:* Wednesday, October 31, 2012 10:21 PM
 *To:* Hallvord R. M. Steen
 *Cc:* WebApps WG; Ryosuke Niwa; Aryeh Gregor; Daniel Cheng; Bjoern
 Hoehrmann; Sebastian Markbåge
 *Subject:* Re: [Clipboard API] The before* events


 On Tue, Oct 30, 2012 at 9:42 AM, Hallvord R. M. Steen hallv...@opera.com
 wrote:

 I'm looking at the beforecut, beforecopy and beforepaste events. I don't
 entirely understand their intent, it seems even more obscure than I
 expected..

 Nothing in the official MSDN documentation [1] really explains the
 interaction between beforecopy and copy (given that you can control the
 data put on the clipboard from the copy event without handling beforecopy
 at all, the demo labelled "this example uses the onbeforecopy event to
 customize copy behavior" doesn't really make sense to me either.)

 I was under the impression that you could handle the before* events to
 control the state of copy/cut/paste UI like menu entries. However, when
 tweaking a local copy of the MSDN code sample [2], I don't see any
 difference in IE8's UI whether the event.returnValue is set to true or
 false in the beforecopy listener.

 Another problem with using before* event to control the state of
 copy/cut/paste UI is that it only works for UI that is shown/hidden on
 demand (like menus) and not for UI that is always present (like toolbar
 buttons). I'm not aware of web browsers that have UI with copy/cut/paste
 buttons by default, but some browsers are customizable and some might have
 toolbar buttons for this.

 I'm wondering if specifying something like

 navigator.setCommandState('copy', false); // any copy UI is now disabled
 until app calls setCommandState('copy', true) or user navigates away from
 page

 would be more usable? A site/app could call that at will depending on its
 internal state. Or, if we want to handle the data type stuff, we could say

 navigator.setCommandState('paste', true,
 {types:['text/plain','text/html']});

 to enable any "paste plain text" and "paste rich text" UI in the browser?


 I don't have a strong opinion on the specifics of the API, but I agree
 that this is much more usable than the before* events. In the common case,
 web developers would have to listen to selectionchange/focus/blur events
 and call these methods appropriately.


 The downside to an approach like this is that web developers can easily
 screw up and leave the cut/copy/paste items permanently enabled/disabled
 for that tab. I don't have a suggestion that avoids this though. I suppose
 you could have this state automatically get reset on every focus change.
 Then it would be on the developer to make sure to set it correctly. That's
 annoying in a different way though.

  

 -Hallvord

 [1] http://msdn.microsoft.com/en-us/library/ms536901(VS.85).aspx
 [2]
 http://samples.msdn.microsoft.com/workshop/samples/author/dhtml/refs/onbeforecopyEX.htm

 




Re: [Clipboard API] The before* events

2012-11-01 Thread Ojan Vafai
I agree that this use case is not very important and possibly one we
shouldn't bother trying to solve. Hallvord's initial point, I think, is that
there's really no use case for the before* events. We should kill them.
*If* we want to meet the use case those events purported to meet (not
displaying cut/copy/paste in menus), we should design a better API. It
sounds like no one especially cares about that use case though. I don't hear
web developers clamoring for it.


On Thu, Nov 1, 2012 at 11:12 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  You are right that it doesn’t solve the “disabling the option in the
 browser chrome” case—but is that really necessary? Why would a site want to
 do this?


 The only reason I can imagine is the old “we want to prevent the casual
 user from copying this image because it is copyrighted” scenario. In the
 cut/paste interaction, there are other ways to handle this such as making
 the control read-only, or stopping the action at the keyboard event level.


 IE10 (and other UAs) have another solution—allow more fine-grained control
 over the management of selection (css property:
 http://msdn.microsoft.com/en-us/library/ie/hh781492(v=vs.85).aspx,
 and example usage: http://ie.microsoft.com/testdrive/HTML5/msUserSelect/).
 I can imagine a similar model for specific control over cut/copy/paste from
 certain parts of the page if this is a hard requirement. The CSS property
 means that the developer’s request can be honored by the user agent without
 script getting in the way of (and possibly delaying) the action.


 *From:* o...@google.com [mailto:o...@google.com] *On Behalf Of *Ojan Vafai
 *Sent:* Thursday, November 1, 2012 4:38 PM
 *To:* Travis Leithead
 *Cc:* Hallvord R. M. Steen; WebApps WG; Ryosuke Niwa; Aryeh Gregor;
 Daniel Cheng; Bjoern Hoehrmann; Sebastian Markbåge

 *Subject:* Re: [Clipboard API] The before* events


 On Thu, Nov 1, 2012 at 4:02 AM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

   I'm looking at the beforecut, beforecopy and beforepaste events. I
 don't entirely understand their intent, it seems even more obscure than I
 expected..

  

 I’m not sure that the use cases these events were originally designed
 for (which have been obscured by time) are at all relevant to site content
 any more. The use case of hiding the cut/copy/paste menu options can be
 fulfilled by replacing the context menu with a custom one if desired.


 You don't want to disable the other items in the context menu though. This
 also doesn't solve disabling cut/copy/paste in non-context menus, e.g.
 Chrome has these in the Chrome menu.

  

   

 *From:* o...@google.com [mailto:o...@google.com] *On Behalf Of *Ojan Vafai
 *Sent:* Wednesday, October 31, 2012 10:21 PM
 *To:* Hallvord R. M. Steen
 *Cc:* WebApps WG; Ryosuke Niwa; Aryeh Gregor; Daniel Cheng; Bjoern
 Hoehrmann; Sebastian Markbåge
 *Subject:* Re: [Clipboard API] The before* events

  

 On Tue, Oct 30, 2012 at 9:42 AM, Hallvord R. M. Steen hallv...@opera.com
 wrote:

 I'm looking at the beforecut, beforecopy and beforepaste events. I don't
 entirely understand their intent, it seems even more obscure than I
 expected..

 Nothing in the official MSDN documentation [1] really explains the
 interaction between beforecopy and copy (given that you can control the
 data put on the clipboard from the copy event without handling beforecopy
 at all, the demo labelled "this example uses the onbeforecopy event to
 customize copy behavior" doesn't really make sense to me either.)

 I was under the impression that you could handle the before* events to
 control the state of copy/cut/paste UI like menu entries. However, when
 tweaking a local copy of the MSDN code sample [2], I don't see any
 difference in IE8's UI whether the event.returnValue is set to true or
 false in the beforecopy listener.

 Another problem with using before* event to control the state of
 copy/cut/paste UI is that it only works for UI that is shown/hidden on
 demand (like menus) and not for UI that is always present (like toolbar
 buttons). I'm not aware of web browsers that have UI with copy/cut/paste
 buttons by default, but some browsers are customizable and some might have
 toolbar buttons for this.

 I'm wondering if specifying something like

 navigator.setCommandState('copy', false); // any copy UI is now disabled
 until app calls setCommandState('copy', true) or user navigates away from
 page

 would be more usable? A site/app could call that at will depending on its
 internal state. Or, if we want to handle the data type stuff, we could say

 navigator.setCommandState('paste', true,
 {types:['text/plain','text/html']});

 to enable any "paste plain text" and "paste rich text" UI in the browser?

   

 I don't have a strong opinion on the specifics of the API, but I agree
 that this is much more usable than the before* events.

Re: Event.key complaints?

2012-11-01 Thread Ojan Vafai
WebKit does not implement key/char, but does support keyIdentifier from an
older version of the DOM 3 Events spec. It doesn't match the current key
property in a number of ways (e.g. it has Unicode code point values like U+0059),
but I do think it suffers from some of the same issues Hallvord mentioned.
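
A small sketch of the mismatch; the values in the comments reflect my
recollection of the older keyIdentifier behavior rather than anything
normative:

document.addEventListener("keydown", function (e) {
  // Older WebKit keyIdentifier: code point strings such as "U+0059" for the
  // Y key, regardless of Shift state.
  // The current D3E key property instead reports the generated character
  // ("y" or "Y") or a named value for non-printing keys.
  console.log("keyIdentifier:", e.keyIdentifier, "key:", e.key);
});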

On Thu, Nov 1, 2012 at 7:22 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

 This is great feedback, which will need to be addressed one-way or another
 before we finish DOM 3 Events.

 Are there any other implementations of key/char other than IE9 and 10? (And
 Opera's Alpha-channel implementation). I did a quick check in the latest
 Firefox/Chrome stable branches and couldn't detect it, but wanted to be
 sure.

  -Original Message-
  From: Hallvord R. M. Steen [mailto:hallv...@opera.com]
  Sent: Thursday, November 1, 2012 1:37 PM
  To: Ojan Vafai
  Cc: Travis Leithead; public-weba...@w3c.org
  Subject: Re: Event.key complaints?
 
  Travis wrote:
 
Hallvord, sorry I missed your IRC comment in today's meeting,
   related to
   DOM3 Events:
   hallvord_ event.key is still a problem child, authors trying
   to use it have been complaining both to me and on the mailing
   list
   Could you point me to the relevant discussions?
 
  To which Ojan Vafai replied:
 
   I'm not sure what specific issues Hallvord has run into, but WebKit
   implementing this property is blocked on us having a bit more
   confidence that the key/char properties won't be changing.
 
  Probably wise of you to hold off a little bit ;-), and thanks for
 pointing to
  relevant discussion threads (I pasted your links at the end).
 
  Opera has done the canary implementation of the key and char
 properties,
  according to the current spec. As such, we've received feedback from JS
  authors trying to code for the new implementation, both from internal
  employees and externals. According to this feedback, although the new
 spec
  attempts to be more i18n-friendly it is actually a step backwards
 compared to
  the event.keyCode model:
 
  If, for example, you would like to do something when the user presses
 [Ctrl]-
  [1], under the old keyCode model you could write this in a keydown
 handler:
 
  if (event.ctrlKey && event.keyCode == 49)
 
  while if you want to use the new implementation you will have to do
  something like
 
  if (event.ctrlKey && (event.key == '1' || event.key == '&' || event.key == '１'))
 
  and possibly even more variations, depending on what locales you want to
  support. (That's three checks for English ASCII, French AZERTY and
 Japanese
  hiragana wide character form layouts respectively - I don't know of
 other
  locales that assign other character values to this key but they might
 exist).
  Obviously, this makes it orders of magnitude harder to write cross-locale
  applications and places a large burden of complexity on JS authors.
 
  In the current spec, event.key and event.char are actually aliases of
 each
  other for most keys on the keyboard! If the key you press doesn't have a
  key name string, event.key and event.char are spec'ed as being the same
  value [1].
 
  This aliasing doesn't really add up to a clear concept. If two
 properties have
  the same value almost always, why do we add *two* new properties in the
  first place?
 
  This is also the underlying cause for other reported problems with the
 new
  model, like the inability to match [Shift]-[A] keydown/up events because
  event.key might be 'a' in keydown but 'A' in keyup or vice versa.
 
  I would like the story of event.char and event.key to be that
 event.char
  describes the generated character (if any) in its
  shifted/unshifted/modified/localized glory while event.key describes the
  key (perhaps on a best-effort basis, but in a way that is at least as
 stable and
  usable as event.keyCode).
 
  Hence, what I think would be most usable in the real world would be
 making
  event.key a mapping back to un-shifted character values of a normal
  QWERTY
  (en-US) layout. Authors are asking for stable reference values for
 identifying
  keys, and that's the most stable and widely known reference keyboard
  layout.
 
  Alternatively, we can spec that event.key describes the un-shifted, un-
  modified key state from the current keyboard layout AND standardise
  event.keyCode they way it's already implemented rather than deprecating
 it,
  because it covers some use cases better than what our new stuff does. But
  my preference would be going the event.key = QWERTY (en-US) route, and I
  plan to report a bug or two on making that happen.
  -Hallvord
 
  [1] Spec describes event.key as follows: "If the value has a printed
  representation, it must match the value of the KeyboardEvent.char
  attribute"
  http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-
  Events.html#events-KeyboardEvent-key
 
  http://lists.w3.org/Archives/Public/www-dom/2012OctDec/0010.html
  http://lists.w3.org/Archives

Re: Event.key complaints?

2012-10-31 Thread Ojan Vafai
I'm not sure what specific issues Hallvord has run into, but WebKit
implementing this property is blocked on us having a bit more confidence
that the key/char properties won't be changing. Specifically, I'd like to
see some rough resolution to the following threads:
http://lists.w3.org/Archives/Public/www-dom/2012OctDec/0010.html
http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0713.html
http://lists.w3.org/Archives/Public/www-dom/2012JulSep/0103.html
http://lists.w3.org/Archives/Public/www-dom/2012OctDec/0030.html

I'd be fine pushing the USB codes off to level 4, except it's not clear to
me that we'd want the key/char properties to stay as is if we added the USB
codes.


On Mon, Oct 29, 2012 at 2:26 PM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  Hallvord, sorry I missed your IRC comment in today’s meeting, related to
 DOM3 Events:


 hallvord_ event.key is still a problem child, authors trying

 to use it have been complaining both to me and on the mailing

 list


 Could you point me to the relevant discussions? The only issues with key
 that I’ve tracked relating to the spec are regarding control key usage in
 International keyboard layouts:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18341. 


 Thanks,

 -Travis



Re: [Clipboard API] The before* events

2012-10-31 Thread Ojan Vafai
On Tue, Oct 30, 2012 at 9:42 AM, Hallvord R. M. Steen hallv...@opera.com wrote:

 I'm looking at the beforecut, beforecopy and beforepaste events. I don't
 entirely understand their intent, it seems even more obscure than I
 expected..

 Nothing in the official MSDN documentation [1] really explains the
 interaction between beforecopy and copy (given that you can control the
 data put on the clipboard from the copy event without handling beforecopy
 at all, the demo labelled "this example uses the onbeforecopy event to
 customize copy behavior" doesn't really make sense to me either.)

 I was under the impression that you could handle the before* events to
 control the state of copy/cut/paste UI like menu entries. However, when
 tweaking a local copy of the MSDN code sample [2], I don't see any
 difference in IE8's UI whether the event.returnValue is set to true or
 false in the beforecopy listener.

 Another problem with using before* event to control the state of
 copy/cut/paste UI is that it only works for UI that is shown/hidden on
 demand (like menus) and not for UI that is always present (like toolbar
 buttons). I'm not aware of web browsers that have UI with copy/cut/paste
 buttons by default, but some browsers are customizable and some might have
 toolbar buttons for this.

 I'm wondering if specifying something like

 navigator.setCommandState('copy', false); // any copy UI is now
 disabled until app calls setCommandState('copy', true) or user navigates
 away from page

 would be more usable? A site/app could call that at will depending on its
 internal state. Or, if we want to handle the data type stuff, we could say

 navigator.setCommandState('paste', true, {types:['text/plain','text/html']});

 to enable any "paste plain text" and "paste rich text" UI in the browser?


I don't have a strong opinion on the specifics of the API, but I agree that
this is much more usable than the before* events. In the common case, web
developers would have to listen to selectionchange/focus/blur events and
call these methods appropriately.
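
For example, a rough sketch of the wiring this pushes onto pages --
setCommandState here is the proposed (unimplemented) API from Hallvord's
message, and isInEditableRegion is just a helper invented for the sketch:

document.addEventListener("selectionchange", function () {
  var sel = window.getSelection();
  var hasSelection = sel && !sel.isCollapsed;
  // Proposed API, not something that exists today:
  navigator.setCommandState("copy", hasSelection);
  navigator.setCommandState("cut", hasSelection && isInEditableRegion(sel));
});

function isInEditableRegion(sel) {
  var node = sel.anchorNode;
  var el = node && (node.nodeType === Node.ELEMENT_NODE ? node : node.parentElement);
  return !!(el && el.isContentEditable);
}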

The downside to an approach like this is that web developers can easily
screw up and leave the cut/copy/paste items permanently enabled/disabled
for that tab. I don't have a suggestion that avoids this though. I suppose
you could have this state automatically get reset on every focus change.
Then it would be on the developer to make sure to set it correctly. That's
annoying in a different way though.


 -Hallvord

 [1] http://msdn.microsoft.com/en-us/library/ms536901(VS.85).aspx
 [2] http://samples.msdn.microsoft.com/workshop/samples/author/dhtml/refs/onbeforecopyEX.htm





Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-22 Thread Ojan Vafai
On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com wrote:

 Ryosuke also raised the possibility of multiple text fields having
 separate UndoManagers. On Mac, most apps wipe their undo queue when you
 change text field focus. WebKit preserves a single undo queue across text
 fields, so that tabbing out does not kill your ability to undo. I don't
 know of any app where you get separate switchable persistent undo queues.
 Things are similar on iOS.


Think of the use-case of a threaded email client where you can reply to any
message in the thread. If it shows your composing mails inline (e.g. as
gmail does), the most common user expectation IMO is that each email gets
its own undo stack. If you undo the whole stack in one email, you wouldn't
expect the next undo to start undoing things in another mail you're composing. In
either case, since there's a simple workaround (seamless iframes), I don't
think we need the added complexity of the attribute.


 Firefox in Windows has a separate undo list for each input.  I would find
 a single undo list strange.


 Internet Explorer and WebKit don't.

 While we're probably all biased to think that what we're used to is the
 best behavior, it's important to design our API so that implementors need
 not violate platform conventions. In this case, it might mean that
 whether a text field has its own undo manager by default depends on the
 platform convention.


Also, another option is that we could allow shadow DOMs to have their own
undo stack. So, you can make a control that has its own undo stack if you
want.
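
As a purely hypothetical sketch of that idea (no such per-scope undoManager
exists; createShadowRoot may be vendor-prefixed in current builds):

// Each compose box hosts its editable area in its own shadow tree.
var host = document.querySelector("#reply-3");            // hypothetical compose widget
var shadow = host.createShadowRoot();                     // possibly webkitCreateShadowRoot
shadow.innerHTML = "<div contenteditable='true'></div>";

// Hypothetically, the shadow root would then expose its own manager,
// separate from document.undoManager:
// shadow.undoManager.undo();  // only walks edits made inside this compose box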


Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Ojan Vafai
On Tue, Aug 21, 2012 at 11:17 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 I recently participated in an internal thread at Google where it was
 proposed to move a (webkit-specific) feature from an attribute to a
 CSS property, because applying it via a property is *much* more
 convenient.

 Similarly, some of the a11y folks have recently been talking about
 applying aria-* attributes via CSS, again, because it's just so much
 more convenient.

 I think there are probably a lot of examples of this, where something
 definitely belongs in an attribute, but it would just be so *nice* to
 set it with something like CSS, where you declare it once and
 everything just works.  For example, inline event handlers!

 Given all this, I have a proposal for a method of doing that.  It's
 very similar to CSS and gives most of the benefits, but should avoid
 some pitfalls that have sunk similar proposals in the past.

 Cascading Attribute Sheets
 ===

 CAS is a language using the same basic syntax of CSS, meant for easily
 applying attributes to an HTML page.

 To use it, include a CAS file using <script type="text/cas">.  (I
 chose <script> over <style>, even though it resembles CSS, because it
 acts more like a script - it's run once, no dynamic mutations to the
 file, etc.)  CAS scripts are automatically async.

 The basic grammar of CAS is identical to CSS.  Your attribute sheet
 contains rules, composed of selectors, curly braces, and declarations,
 like so:

 video {
   preload: metadata;
 }
 #content video {
   preload: auto;
 }

 In the place where CSS normally has a property name, CAS allows any
 attribute name.  In the value, CAS allows any single token, such as a
 number, identifier, or string.  If the value is a string, it's used as
 the attribute's value.  Otherwise, the value is serialized using CSS
 serialization rules, and that is used as the attribute's value.

 There are three special values you may use instead to set the value:
 !on sets an attribute to its name (shorthand for boolean attributes),
 !off removes an attribute (boolean or not), and !initial does nothing
 (it's used to cancel any changes that other, less specific, rules may
 be attempting to make, and is the initial value of all properties).

 CAS files are *not* as dynamic as CSS.  They do not respond to
 arbitrary document changes.  (They *can't*, otherwise you have
 dependency cycles with an attribute selector rule removing the
 attribute, etc.)  My thought right now is that your CAS is only
 applied to elements when they are inserted into the DOM (this also
 applies to any parser-created elements in the page).  This allows us
 to keep information tracking to a bare minimum - we don't need to
 track what attributes came from CAS vs the markup or setAttribute()
 calls, we don't need to define precedence between CAS and other
 sources, and we don't need to do any of the fancy coalescing and
 whatnot that CSS changes require.  Semantically, a CAS file should be
 exactly equivalent to a script that does
 "document.querySelectorAll(selector).forEach(<do attribute
 mutations>)", plus a mutation observer that reruns the mutations on
 any nodes added to the document.  It's just a much more convenient way
 to express these.
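
 As a rough sketch of that equivalence for the first rule above
 (Element.matches may be vendor-prefixed in today's browsers):

 function applyVideoRule(root) {
   // Equivalent of:  video { preload: metadata; }
   var nodes = root.querySelectorAll("video");
   for (var i = 0; i < nodes.length; i++) {
     nodes[i].setAttribute("preload", "metadata");
   }
 }

 applyVideoRule(document);

 // Re-run the rule over anything inserted later.
 new MutationObserver(function (records) {
   records.forEach(function (r) {
     Array.prototype.forEach.call(r.addedNodes, function (n) {
       if (n.nodeType !== 1) return;                 // elements only
       if (n.matches && n.matches("video")) {
         n.setAttribute("preload", "metadata");
       }
       applyVideoRule(n);                            // and its descendants
     });
   });
 }).observe(document, {childList: true, subtree: true});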


Meh. I think this loses most of the "CSS is so much more convenient"
benefits. It's mainly the fact that you don't have to worry about whether
the nodes exist yet that makes CSS more convenient.

That said, I share your worry that having this be dynamic would slow down
DOM modification too much.

What if we only allowed a restricted set of selectors and made these sheets
dynamic instead? Simple, non-pseudo selectors have information that is all
local to the node itself (e.g. can be applied before the node is in the
DOM). Maybe even just restrict it to IDs and classes. I think that would
meet the majority use-case much better.

Alternately, what if these applied the attributes asynchronously (e.g.
right before style resolution)?


 (Slight weirdness here - a CAS file can reset its own @src attribute
 to load *another* CAS file.  A script can do the same, though.
 Acceptable or not?)

 I think we should allow the CSS Conditional rules as well
 http://dev.w3.org/csswg/css3-conditional/ - at least Media Queries,
 but @document seems useful as well, and @supports may even by
 justifiable (it would need some definition work to make it usable for
 CAS, though).  Again, these aren't responsive to live changes, like MQ
 are in CSS, but they let you respond to the initial document condition
 and apply attributes accordingly.



 Thoughts?  I tried to make this as simple as possible while still
 being useful, so that it's easy to implement and to understand.
 Hopefully I succeeded!

 ~TJ




Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Ojan Vafai
On Tue, Aug 21, 2012 at 1:01 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
  Meh. I think this loses most of the CSS is so much more convenient
  benefits. It's mainly the fact that you don't have to worry about whether
  the nodes exist yet that makes CSS more convenient.

 Note that this benefit is preserved.  Moving or inserting an element
 in the DOM should apply CAS to it.

 The only thing we're really losing in the dynamic-ness is that other
 types of mutations to the DOM don't change what CAS does, and some of
 the dynamic selectors like :hover don't do anything.


Ah, I missed the "plus a mutation observer that reruns the mutations on any
nodes added to the document" bit. Ok, so this timing is very specific then.
It would get applied at the microtask time, not at the time the DOM was
modified. Would it get applied before or after mutation observers get
called? Seems like you'd want it to execute first. Calling it after
mutation observers would require an extra delivery of mutations after the
attributes are applied, which seems silly.

 That said, I share your worry that having this be dynamic would slow down
  DOM modification too much.
 
  What if we only allowed a restricted set of selectors and made these
 sheets
  dynamic instead? Simple, non-pseudo selectors have information that is
 all
  local to the node itself (e.g. can be applied before the node is in the
  DOM). Maybe even just restrict it to IDs and classes. I think that would
  meet the majority use-case much better.

 I think that being able to use complex selectors is a sufficiently
 large use-case that we should keep it.

  Alternately, what if these applied the attributes asynchronously (e.g.
 right
  before style resolution)?

 Can you elaborate?

 ~TJ



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Ojan Vafai
On Tue, Aug 21, 2012 at 1:58 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 1:42 PM, Ojan Vafai o...@chromium.org wrote:
  On Tue, Aug 21, 2012 at 1:01 PM, Tab Atkins Jr. jackalm...@gmail.com
  wrote:
  On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
   Meh. I think this loses most of the CSS is so much more convenient
   benefits. It's mainly the fact that you don't have to worry about
   whether
   the nodes exist yet that makes CSS more convenient.
 
  Note that this benefit is preserved.  Moving or inserting an element
  in the DOM should apply CAS to it.
 
  The only thing we're really losing in the dynamic-ness is that other
  types of mutations to the DOM don't change what CAS does, and some of
  the dynamic selectors like :hover don't do anything.
 
 
  Ah, I missed the plus a mutation observer that reruns the mutations on
 any
  nodes added to the document bit. Ok, so this timing is very specific
 then.
  It would get applied at the microtask time, not at the time the DOM was
  modified. Would it get applied before or after mutation observers get
  called? Seems like you'd want it to execute first. Calling it after
 mutation
  observers would require an extra delivery of mutations after the
 attributes
  are applied, which seems silly.

 I presume there's an ordering of mutation observers, such that ones
 defined earlier in document history get the notifications first, or
 somesuch?


Correct.


  If so, CAS should indeed run before any author-defined
 observers.


On a somewhat unrelated note, could we somehow also incorporate jQuery-style
live event handlers here? See previous www-dom discussion about this:
. I suppose we'd still just want listen/unlisten(selector, handler)
methods, but they'd get applied at the same time as cascaded attributes.
Although, we might want to apply those on attribute changes as well.
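
For reference, a sketch of how that jQuery-style pattern is usually emulated
with plain delegation today (listen is just the illustrative name from
above, not an existing API, and Element.matches may be prefixed):

function listen(type, selector, handler) {
  var wrapper = function (event) {
    var el = event.target;
    while (el && el.nodeType === 1) {
      if (el.matches && el.matches(selector)) {
        handler.call(el, event);
        return;
      }
      el = el.parentElement;
    }
  };
  document.addEventListener(type, wrapper, false);
  return wrapper;  // keep this around to pass to removeEventListener for "unlisten"
}

listen("click", "ul.menu > li", function (event) {
  console.log("clicked", this);
});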


Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-11 Thread Ojan Vafai
Another thing to consider if we add DOMTransaction back in is that you now
need to specify what happens in more cases, e.g.:
-call transact on the same DOMTransaction twice
-call transact on a DOMTransaction then modify undo/redo listeners

These are solvable problems, but are just more complicated than using a
dictionary. I really see no point in adding DOMTransaction back. If you
want, you could make transact just take the arguments you want
DOMTransaction to take. Then of course, you end up in the case of needing a
bunch of optional arguments due to automatic vs. manual transactions, which
leads us back to using a dictionary.

An alternative interface I'd be happy with would be transact(String label,
Dictionary optionalArguments) since label is always required.
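
Concretely, the calls under that shape might look something like this (a
sketch of the suggestion, not of the current draft; scope and foo are just
placeholder names):

var scope = document.body;                  // some element hosting an undo scope
var foo = document.createElement("span");

// Manual transaction: the author supplies undo/redo.
scope.undoManager.transact("Draw a line", {
  execute: function () { /* draw a line on the canvas */ },
  undo: function () { /* erase the line */ },
  redo: function () { /* draw it again */ }
});

// Automatic transaction: DOM changes made inside executeAutomatic are
// recorded and reverted/replayed by the UA, so no undo/redo is needed.
scope.undoManager.transact("Append foo", {
  executeAutomatic: function () { scope.appendChild(foo); }
});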


On Mon, Jul 9, 2012 at 10:19 AM, Ryosuke Niwa rn...@webkit.org wrote:

 On Mon, Jul 9, 2012 at 9:52 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Jul 9, 2012 at 9:41 AM, Ryosuke Niwa rn...@webkit.org wrote:
  On Mon, Jul 9, 2012 at 7:30 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Fri, Jul 6, 2012 at 12:09 PM, Ryosuke Niwa rn...@webkit.org
 wrote:
   Okay, it appears to be a miscommunication of terminology here. What I
   meant
   is DOM APIs in your definition. As Adam said, API fashions come and
 go.
   We
   should be making new DOM APIs consistent with existing DOM APIs.
 
  While API fashions do change, this doesn't appear to be a fashion.  A
  ton of popular libraries have been doing this for years, and it
  doesn't seem to be flagging.  Language-supported variations of this
  pattern exist in other popular languages as well, such as Python.
 
  Yeah, and Python does all other things I don't like e.g. using
 indentation
  instead of { } or begin/end.

 Irrelevant to the discussion.  Nobody's suggesting to adopt other
 Pythonisms.  I was just pointing to Python to support my assertion
 that this is now a very popular pattern, not a transient fashion.


 And I was saying that just because Python does it, it doesn't mean it's
 desirable nor does it support your assertion.

  Also, we're not forced to use named arguments
  in Python. If we have a function like:
 
  def transact(execute, undo, redo):
      ...
 
  you can call it as either:
  transact(execute=a, undo=b, redo=c)
  or
  transact(a, b, c)

 Yes, and it would be wonderful if JS had similar support so that we
 could easily accommodate both patterns.  It doesn't, and so we have to
 choose one or the other.


 Yes, but that should be addressed in TC39.

   On the other hand, other popular programming languages like Java and C++
  don't support named arguments.

 Of course.  They, like the DOM APIs, are old (hell, C++ is older than
 you or me).  Evolution happens.


  I personally find named arguments in various library functions
  annoying
  especially when there are only a few arguments, e.g.
  parent.appendChild({child: node}); would be silly.
 
  Come on, Ryosuke, don't strawman.  Obviously when there's only a
  single argument, there's no concerns about remembering the order.  Two
  args are often fine, but PHP shows us quite clearly that's it's easy
  to screw that up and make it hard to remember (see the persistent
  confusion in PHP search functions about whether the needle or haystack
  comes first).  Three and up, you should definitely use named args
  unless there's a strong logical order.  (For example, an API taking a
  source and dest rectangle can reasonably support 8 ordered args,
  because x,y,w,h and source,dest are strong logical orders.)
 
  In our case, there's a clear logical ordering: execute, undo, redo.

 I don't get it.  That doesn't seem like a logical ordering to me.


 It is because that's the order in which these functions can be called. You
 can't call redo before calling undo, and you can't call undo before execute.

 By that I mean, there's no clear reason for me to assume, having not used
 this API before, that they should appear in that order, nor is there
 an existing well-established pattern across many other APIs that they
 appear in that particular order.


 Undo appearing before Redo is a very well established pattern:
 http://www.cs.mcgill.ca/~hv/classes/CS400/01.hchen/doc/command/command.html

 https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSUndoManager_Class/Reference/Reference.html

 Having execute is somewhat odd because there are no other frameworks in
 which such a function appears in an object that represents undo/redo.
 That's another reason I like Yuval's proposal over the current design.

 - Ryosuke




Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-11 Thread Ojan Vafai
On Wed, Jul 11, 2012 at 11:19 AM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 11, 2012 at 10:36 AM, Ojan Vafai o...@chromium.org wrote:

 Another thing to consider if we add DOMTransaction back in is that you
  now need to specify what happens in more cases, e.g.:
 -call transact on the same DOMTransaction twice
 -call transact on a DOMTransaction then modify undo/redo listeners


 You mean execute? Assuming that, the first one is a good point. We have
 the second problem already with the dictionary interface because scripts
  can add and/or remove execute/executeAutomatic/undo/redo from the
 dictionary within those functions or between transact/undo/redo.


We don't have this problem with the dictionary interface because we don't
store the actual dictionary. We take the dictionary and construct a C++
(DOMTransaction?) entry into the undomanager. So, if you reuse the same
dictionary, you get a new C++ object stored for each transact call. If you
modify the dictionary after the transact call, it does not affect the
stored C++ object.

These are solvable problems, but are just more complicated than using a
 dictionary. I really see no point in adding DOMTransaction back.


 The point of adding them back is so that undo/redo are implemented as
 events like any other DOM API we have.


I don't see the benefit here. I don't think this is a more author-friendly
API.


 If you want, you could make transact just take the arguments you want
 DOMTransaction to take. Then of course, you end up in the case of needing a
 bunch of optional arguments due to automatic vs. manual transactions, which
 leads us back to using a dictionary.

 An alternative interface I'd be happy with would be transact(String
 label, Dictionary optionalArguments) since label is always required.


 Well, label isn't always required; e.g. when a transaction is merged with
 previous transaction.


Doesn't that make DOMTransaction(label, execute) a problem? I suppose it
could be DOMTransaction(execute, optional label).

Speaking of merge, why is it on transact and not on the transaction
dictionary? The latter makes more sense to me.

Ojan


Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-11 Thread Ojan Vafai
On Wed, Jul 11, 2012 at 3:23 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 11, 2012 at 3:12 PM, Ojan Vafai o...@chromium.org wrote:

 On Wed, Jul 11, 2012 at 11:19 AM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 11, 2012 at 10:36 AM, Ojan Vafai o...@chromium.org wrote:

 Another thing to consider if we add DOMTransaction back in is that you
 now need to specifiy what happens in more cases, e.g.:
 -call transact on the same DOMTransaction twice
 -call transact on a DOMTransaction then modify undo/redo listeners


 You mean execute? Assuming that, the first one is a good point. We have
 the second problem already with the dictionary interface because scripts
 can add and or remove execute/executeAutomatic/undo/redo from the
 dictionary within those functions or between transact/undo/redo.


 We don't have this problem with the dictionary interface because we don't
 store the actual dictionary. We take the dictionary and construct a C++
 (DOMTransaction?) entry into the undomanager. So, if you reuse the same
 dictionary, you get a new C++ object stored for each transact call. If you
 modify the dictionary after the transact call, it does not affect the
 stored C++ object.


 I don't follow. Your second point is that it's ambiguous as to what should
 happen when undo/redo event listeners are modified. I'm saying that the
 same ambiguity rises when the script modifies undo/redo properties of the
 pure JS object that implements DOMTransaction because we don't store
 undo/redo properties in a separate C++ (or whatever language you're
 implementing UA with) object inside transact().

 By the way, let us refrain from using the word dictionary here because
 there is a specific object named Dictionary in WebIDL and multiple folks on
 IRC have told me that our use of the term dictionary is confusing. For
 further clarity, the "dictionary" in the current specification is a user
 object implementing the DOMTransaction callback interface as defined in:
 http://www.w3.org/TR/WebIDL/#es-user-objects It can't be a Dictionary
 because execute, executeAutomatic, undo, and redo need to be called on the
 object; e.g. if we had t = {execute: function () { this.undo = function ()
 { ... }; }}, then execute will change the undo IDL attribute of t.


I was specifically talking about WebIDL Dictionaries. In which case,
this.undo would set the undo property on the window. Why would you want to
be able to set the undo function from the execute function? My whole point
has been that we should not keep the actual object.


  These are solvable problems, but are just more complicated than using a
 dictionary. I really see no point in adding DOMTransaction back.


 The point of adding them back is so that undo/redo are implemented as
 events like any other DOM API we have.


 I don't see the benefit here. I don't think this is a more
 author-friendly API.


 Consistency.


It's more consistent in some ways and less in others. There aren't many
instances of constructing an object and passing it to a method. There are
increasingly many methods that take Dictionaries in. There are many methods
that take pure callbacks. You have to squint to call this more consistent
IMO.

 Speaking of merge, why is it on transact and not on the transaction
 dictionary? The latter makes more sense to me.


 Because of the fact DOMTransaction had been a user object. It would be
 awfully confusing if the script could override the value of merge. If we
 had re-introduced the DOMTransaction interface, then we can make merge a
 readonly attribute on the object.


To clarify my position here, I think transact should take in a Dictionary.
Then the useragent converts that into a DOMTransaction and stores it in the
UndoManager's history. So, item would return a DOMTransaction. In that
case, merge could be a value in the Dictionary and readonly on the
DOMTransaction.

The only part we disagree about I think is whether transact should take a
Dictionary or a DOMTransaction. Storing a DOMTransaction and returning it
from item, seems reasonable to me.
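
Spelled out, the shape I have in mind is roughly this (a sketch of the
position above, not spec text):

// The author passes a plain Dictionary; the UA copies it and keeps nothing else.
document.undoManager.transact({
  label: "Draw a line",
  merge: false,
  execute: function () { /* draw */ },
  undo: function () { /* undraw */ },
  redo: function () { /* redraw */ }
});

// What comes back out of the history is a DOMTransaction built by the UA:
var t = document.undoManager.item(0);
// t.label === "Draw a line"; t.merge is readonly on the returned object.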

Ojan


Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-11 Thread Ojan Vafai
On Wed, Jul 11, 2012 at 3:47 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 11, 2012 at 3:35 PM, Ojan Vafai o...@chromium.org wrote:

 On Wed, Jul 11, 2012 at 3:23 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 11, 2012 at 3:12 PM, Ojan Vafai o...@chromium.org wrote:

 We don't have this problem with the dictionary interface because we
 don't store the actual dictionary. We take the dictionary and construct a
 C++ (DOMTransaction?) entry into the undomanager. So, if you reuse the same
 dictionary, you get a new C++ object stored for each transact call. If you
 modify the dictionary after the transact call, it does not affect the
 stored C++ object.


 I don't follow. Your second point is that it's ambiguous as to what
 should happen when undo/redo event listeners are modified. I'm saying that
 the same ambiguity rises when the script modifies undo/redo properties of
 the pure JS object that implements DOMTransaction because we don't store
 undo/redo properties in a separate C++ (or whatever language you're
 implementing UA with) object inside transact().

  By the way, let us refrain from using the word dictionary here because
  there is a specific object named Dictionary in WebIDL and multiple folks on
  IRC have told me that our use of the term dictionary is confusing. For
  further clarity, the "dictionary" in the current specification is a user
  object implementing the DOMTransaction callback interface as defined in:
  http://www.w3.org/TR/WebIDL/#es-user-objects It can't be a Dictionary
  because execute, executeAutomatic, undo, and redo need to be called on the
  object; e.g. if we had t = {execute: function () { this.undo = function ()
  { ... }; }}, then execute will change the undo IDL attribute of t.


 I was specifically talking about WebIDL Dictionaries. In which case,
 this.undo would set the undo property on the window. Why would you want to
  be able to set the undo function from the execute function? My whole point
 has been that we should not keep the actual object.


 Oh, then you're proposing something different here. There is a use case
 for being able to add custom properties (for extra information used by
 library, etc...) and store that in the undo manager. If we had used
 Dictionary for, say, the constructor of DOMTransaction, then you'd have to
 add those properties after creating DOMTransaction. If you're proposing to
 use Dictionary for transact() so that we don't keep any objects, then that
 just doesn't work.


What are the cases where you need this? Closures over your
execute/undo/redo methods seem to me like they would address this use-case
fine.
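
For instance, a sketch of the closure approach, where the library keeps its
extra state in the closure rather than as expando properties on a retained
transaction object (the undoManager usage follows the draft; the helper and
element are made up):

function makeInsertTransaction(editor, text) {
  var insertedNode = null;  // library-private state captured by the closures
  return {
    label: "Insert text",
    execute: function () {
      insertedNode = document.createTextNode(text);
      editor.appendChild(insertedNode);
    },
    undo: function () { editor.removeChild(insertedNode); },
    redo: function () { editor.appendChild(insertedNode); }
  };
}

var editor = document.querySelector("#editor");
document.undoManager.transact(makeInsertTransaction(editor, "hello"));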

  Speaking of merge, why is it on transact and not on the transaction
 dictionary. The latter makes more sense to me.


 Because of the fact DOMTransaction had been a user object. It would be
 awfully confusing if the script could override the value of merge. If we
 had re-introduced the DOMTransaction interface, then we can make merge a
 readonly attribute on the object.


 To clarify my position here, I think transact should take in a
 Dictionary. Then the useragent converts that into a DOMTransaction and
 stores it in the UndoManager's history. So, item would return a
 DOMTransaction. In that case, merge could be a value in the Dictionary and
 readonly on the DOMTransaction.


 So one big use pattern we're predicting is something along the lines of:

 undoManager.transact(transactionFactory(...));

 where transactionFactory can create a custom DOM transaction object. Now
 all properties set on the object returned by transactionFactory will be
 lost when execute/executeAutomatic/undo/redo are called in your proposal,
 and that's very counterintuitive and weird in my opinion.


We disagree on what's counterintuitive.



 - Ryosuke




Re: [UndoManager] What should a native automatic transaction expose?

2012-07-06 Thread Ojan Vafai
On Wed, Jul 4, 2012 at 3:43 PM, Ryosuke Niwa rn...@webkit.org wrote:

 Hi,

 In section 3.3 [1], we mention that the user editing actions and drag and
 drop need to be implemented as automatic DOM transactions. But it seems odd
 to expose an executeAutomatic function in this case, especially for drag & drop.

 I propose to set executeAutomatic to null for those native automatic
 transactions but still expose label (set to whatever the Web browsers use
 in the edit menu).


Can we do the same for execute? We don't need to hold on to the execute
function once we've done the initial transaction, right?



 [1]
 http://dvcs.w3.org/hg/undomanager/raw-file/tip/undomanager.html#automatic-dom-transactions

 Best,
 Ryosuke Niwa
 Software Engineer
 Google Inc.





Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Ojan Vafai
On Thu, Jul 5, 2012 at 7:15 AM, Adam Barth w...@adambarth.com wrote:

 On Thu, Jul 5, 2012 at 1:37 AM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
  On 07/05/2012 08:00 AM, Adam Barth wrote:
  On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi
  wrote:
  On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:
  So, it is very much implementation detail.
 
  (And I still don't understand how a callback can be so hard in this
 case.
  There are plenty of different kinds of callback objects.
new MutationObserver(some_callback_function_object) )
 
  I haven't tested, but my reading of the MutationObserver implementation
  in WebKit is that it leaks.  Specifically:
 
  MutationObserver --retains-- MutationCallback --retains--
  some_callback_function_object --retains-- MutationObserver
 
  I don't see any code that breaks this cycle.
 
  Ok. In Gecko cycle collector breaks the cycle. But very much an
  implementation detail.
 
  DOM events
 
  Probably EventListeners, not Events.
 
  have a bunch of delicate code to break these
  reference cycles and avoid leaks.  We can re-invent that wheel here,
 
  Or use some generic approach to fix such leaks.
 
  but it's going to be buggy and leaky.
 
  In certain kinds of implementations.
 
  I appreciate that these jQuery-style APIs are fashionable at the
  moment, but API fashions come and go.  If we use this approach, we'll
  need to maintain this buggy, leaky code forever.
 
  Implementation detail. Very much so :)

 Right, my point is that this style of API is difficult to implement
 correctly, which means authors will end up suffering low-quality
 implementations for a long time.

 On Thu, Jul 5, 2012 at 2:22 AM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
  But anyhow, event based API is ok to me.
  In general I prefer events/event listeners over other callbacks.

 Great.  I'd recommend going with that approach because it will let us
 provide authors with high-quality implementations of the spec much
 sooner.


The downside of events is that they have a higher overhead than we
originally thought was acceptable for mutation events (e.g. just computing
the ancestor chain is too expensive). Now that we fire less frequently, the
overhead might be OK, but it's still not great IMO.

Only having a high-overhead option for any new APIs we add is problematic.
I appreciate the implementation complexity concern, but I think we just
need to make callbacks work.

We could fire the event on the MutationObserver itself. That would be
lightweight. That doesn't help though, right?

new MutationObserver().addEventListener('onMutation', function()
{}) vs. new MutationObserver().observe(function() {})


Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Ojan Vafai
On Thu, Jul 5, 2012 at 1:02 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Thu, Jul 5, 2012 at 12:45 PM, James Graham jgra...@opera.com wrote:

 On Thu, 5 Jul 2012, Ryosuke Niwa wrote:

 On Thu, Jul 5, 2012 at 8:08 AM, James Graham jgra...@opera.com wrote:

   On 07/05/2012 12:38 AM, Ryosuke Niwa wrote:
   After this change, authors can write:
   scope.undoManager.transact(new AutomaticDOMTransaction(function () {
     scope.appendChild(foo);
   }, 'append foo'));

  [...]


   document.undoManager.transact(new DOMTransaction(function () {
     // Draw a line on canvas
   }, function () {
     // Undraw a line
   }, function () { this.execute(); },
   'Draw a line'));


 I think this proposed API looks particularly verbose and ugly. I
 thought we wanted to make new APIs more author
 friendly and less like refugees from Java-land.



 What makes you say so? If anything, you don't have to have labels like
 execute, undo, and redo. So it's less verbose. If you
 don't like the long name like AutomaticDOMTransaction, we can come up
 with a shorter name.


 I think having to call a particular constructor for an object that is
 just passed straight into a DOM function is verbose, and difficult to
 internalise (because you have to remember the constructor name and case and
 so on). I think the design with three positional arguments is much harder
 to read than the design with three explicitly named arguments implemented
 as object properties.


 But the alternative is having to remember labels like execute, undo, and
 redo.


In your version, you need to remember the order of the arguments, which
requires looking it up each time. If we do decide to add the
DOMTransaction constructor back, we should keep passing it a dictionary as
its argument. Or maybe take the label and a dictionary as arguments.


 Passing in objects containing one or more non-callback properties is also
 an increasingly common pattern, and we are trying to replace legacy APIs
 that took lots of positional arguments with options-object based
 replacements (e.g. init*Event). From the point of view of a javascript
 author there is no difference between something like {foo:true} and
 {foo:function(){}}. Insisting that there should be a difference in DOM APIs
 because of low-level implementation concerns is doing a disservice to web
 authors by increasing the impedence mismatch between the DOM and javascript.


 Having an explicit constructor has other advantages like making expando
 work. And we don't have to worry about authors modifying execute, undo, 
 redo after the fact because we can make them readonly (making them not
 readonly has some odd implications like the timing at which properties are
 changed matters).

 Also, DOM transaction objects currently have two mutually exclusive
 functions executeAutomatic and execute, which define whether the
 transaction is automatic or not, and this has very weird consequences such
 as specifying both executeAutomatic and execute will result in an automatic
 transaction. Having an explicit constructor makes this interface much more
 clear and clean.

  On Thu, Jul 5, 2012 at 11:07 AM, Olli Pettay olli.pet...@helsinki.fi
 wrote:

   We shouldn't change the UndoManager API because of implementation
 issues, but if event based API ends up being
   better.


 I don't think it's reasonable to agree on an unimplementable design. In
 theory, mutation events can be implemented correctly
 but we couldn't, so we're moving on and getting rid of it.


 The current design is not unimplementable, it's just slightly more work
 in WebKit than you would like. I don't think it's reasonable to reject good
 designs in favour of worse designs simply because the better design isn't a
 perfect fit for a single implementation


 I don't think the current interface-less object is a good design
 regardless of whether it could be implemented in WebKit or not.

 - Ryosuke




Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-04 Thread Ojan Vafai
On Wed, Jul 4, 2012 at 5:39 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Jul 4, 2012 5:26 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 
  On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:
 
  On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 
  On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:
 
  Hi all,
 
  Sukolsak has been implementing the Undo Manager API in WebKit
 but the fact undoManager.transact() takes a pure JS object with callback
  functions is
  making it very challenging.  The problem is that this object
 needs to be kept alive by either JS reference or DOM but doesn't have a
 backing C++
  object.  Also, as far as we've looked, there are no other
 specifications that use the same mechanism.
 
 
  I don't understand what is difficult.
  How is that any different to
  target.addEventListener("foo", { handleEvent: function() {}})
 
 
  It will be very similar to that except this object is going to have 3
 callbacks instead of one.
 
  The problem is that the event listener is a very special object in
 WebKit for which we have a lot of custom binding code. We don't want to
 implement a
  similar behavior for the DOM transaction because it's very error prone.
 
 
  So, it is very much implementation detail.
  (And I still don't understand how a callback can be so hard in this
 case. There are plenty of different kinds of callback objects.
   new MutationObserver(some_callback_function_object) )

 Yes. It's an implementation feedback. The mutation observer callback is
 implemented as a special event handler in WebKit.

How does this make implementation easier? I'm pretty sure Adam's concern
is about the increased likelihood of memory leaks due to function objects
that are held indefinitely by the UndoManager. That concern is not
implementation specific and is not addressed by this change. The only thing
that would address Adam's concern that I can think of would be something
that didn't involve registering functions at all. I have trouble thinking
of a useful UndoManager API that addresses his memory leak concern though.

With the dictionary API, we just need to spec it so that the input
dictionary is just a convenience instead of needing to pass arguments to
the transact method. We don't need the undomanager to hold on to the actual
object you pass in. If we do that, we don't gain anything by adding back in
DOMTransaction, do we?


  Since I want to make the API consistent with the rest of the
 platform and the implementation maintainable in WebKit, I propose the
 following
  changes:
 
 * Re-introduce DOMTransaction interface so that scripts can
 instantiate new DOMTransaction().
 * Introduce AutomaticDOMTransaction that inherits from
 DOMTransaction and has a constructor that takes two arguments: a function
 and an
  optional label
 
 
  After this change, authors can write:
  scope.undoManager.transact(new AutomaticDOMTransaction(function () {
      scope.appendChild(foo);
  }, 'append foo'));
 
 
  Looks somewhat odd. DOMTransaction would be just a container for a
 callback?
 
 
  Right. If we wanted, we can make DOMTransaction an event target and
 implement execute, undo,  redo as event listeners to further simplify the
 matter.
 
 
  That could make the code more consistent with rest of the platform, but
 the API would become harder to use.

 Why? What's harder in the new syntax?

It's more bloated and harder to read IMO.

  - Ryosuke



Re: [DOM4] Mutation algorithm imposed order on document children

2012-06-12 Thread Ojan Vafai
On Tue, Jun 12, 2012 at 10:48 AM, Elliott Sprehn espr...@gmail.com wrote:



 On Mon, Jun 11, 2012 at 9:17 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  On 6/11/12 7:39 PM, Elliott Sprehn wrote:

 After discussing this with some other contributors there were questions
 on why we're enforcing the order of the document child nodes.


 Because otherwise serialization of the result would be ... very broken?


 Inserting doctype nodes has no effect on the mode of the document though,
 so it's already possible to produce a broken serialization (one in the
 wrong mode). For instance you can remove the doctype node and then
 serialize or swap the doctype node and then serialize.



  Can we leave the behavior when your document is out of order unspecified?


 You mean allow UAs to throw or not as they wish?  That seems like a
 pretty bad idea, honestly.  We should require that the insertion be allowed
 (and then specify what DOM it produces) or require that it throw.


We should specify it to be allowed IMO unless there is actually a valid
use-case.


  In practice I don't think anyone inserts these in the wrong order (or
 insert doctypes at all since they have no effect). If you wanted to
 dynamically create a document you'd do it with document.write('<!DOCTYPE
 html>') and then replaceChild the root element which was created for you.


I think you can make a stronger argument. It's extremely rare to create a
doctype and append it to a document at all since it doesn't affect the
compat mode. What's the use-case?

Boris, does appending a doctype to a document change compatMode in gecko in
some cases? I don't know of any effect it has in WebKit.
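
(A minimal sketch that shows the point about compat mode being fixed at
creation time:)

    var doc = document.implementation.createHTMLDocument('test');
    console.log(doc.compatMode);   // "CSS1Compat": created in standards mode
    doc.removeChild(doc.doctype);  // removing the doctype node...
    console.log(doc.compatMode);   // ...still "CSS1Compat"; the mode does not change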


  Implementing this ordering restriction requires changing the append and
 replace methods substantially in Webkit for a case I'm not sure developers
 realize exists.

 - Elliott



www-dom vs public-webapps WAS: [DOM4] Mutation algorithm imposed order on document children

2012-06-12 Thread Ojan Vafai
This confusion seems to come up a lot since DOM is part of public-webapps
but uses a separate mailing list. Maybe it's time to reconsider that
decision? It's the editors of the specs who have the largest say here IMO.

Travis, Jacob, Ms2ger, Aryeh, Anne: How would you feel about merging DOM
discussions back into public-webapps@?

Ojan

On Tue, Jun 12, 2012 at 5:15 AM, Arthur Barstow art.bars...@nokia.com wrote:

 Elliott, All - please use the www-...@w3.org list for DOM4 discussions
 http://lists.w3.org/Archives/Public/www-dom/.

 (Elliott, since that spec is still in the draft phase, you should probably
 use the latest Editor's Draft
 http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html
 instead of the version in w3.org/TR/)

 -Thanks, AB

  Original Message 
 Subject:[DOM4] Mutation algorithm imposed order on document
 children
 Resent-Date:Tue, 12 Jun 2012 01:01:51 +
 Resent-From:public-webapps@w3.org
 Date:   Mon, 11 Jun 2012 16:39:36 -0700
 From:   ext Elliott Sprehn espr...@gmail.com
 To: public-webapps@w3.org



 I'm working on places where Webkit doesn't follow the DOM4 mutation
 algorithm and one of the bugs is not throwing an exception when a doctype
 node is inserted after an element in a document (or other permutations of
 the same situation).

 https://bugs.webkit.org/show_bug.cgi?id=88682
 http://www.w3.org/TR/domcore/#mutation-algorithms

 After discussing this with some other contributors there were questions on
 why we're enforcing the order of the document child nodes. Specifically
 since inserting a doctype node doesn't actually change the doctype so this
 situation is very unlikely (possibly never happens) in the wild. Not
 implementing this keeps the code simpler for a case that developers likely
 never see.

 Can we leave the behavior when your document is out of order unspecified?

 - Elliott




Re: Shrinking existing libraries as a goal

2012-05-16 Thread Ojan Vafai
In principle, I agree with this as a valid goal. It's one among many
though, so the devil is in the details of each specific proposal to balance
out this goal with others (e.g. keeping the platform consistent). I'd love
to see your list of proposals of what it would take to considerably shrink
jQuery.

On Tue, May 15, 2012 at 9:32 PM, Yehuda Katz wyc...@gmail.com wrote:

 In the past year or so, I've participated in a number of threads that were
 implicitly about adding features to browsers that would shrink the size of
 existing libraries.

 Inevitably, those discussions end up litigating whether making it easier
 for jQuery (or some other library) to do the task is a good idea in the
 first place.

 While those discussions are extremely useful, I feel it would be useful
 for a group to focus on proposals that would shrink the size of existing
 libraries with the implicit assumption that it was a good idea.

 From some basic experimentation I've personally done with the jQuery
 codebase, I feel that such a group could rather quickly identify enough
 areas to make a much smaller version of jQuery that ran on modern browsers
 plausible. I also think that having data to support or refute that
 assertion would be useful, as it's often made casually in meta-discussions.

 If there is a strong reason that people feel that a focused effort to
 identify ways to shrink existing popular libraries in new browsers would be
 a bad idea, I'd be very interested to hear it.

 Thanks so much for your consideration,

 Yehuda Katz
 jQuery Foundation
 (ph) 718.877.1325



Re: [webcomponents] Template element parser changes = Proposal for adding DocumentFragment.innerHTML

2012-05-11 Thread Ojan Vafai
On Thu, May 10, 2012 at 9:28 PM, Rafael Weinstein rafa...@google.com wrote:

 On Thu, May 10, 2012 at 4:19 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 10 May 2012, Rafael Weinstein wrote:
  On Thu, May 10, 2012 at 4:01 PM, Ian Hickson i...@hixie.ch wrote:
   On Fri, 11 May 2012, Tab Atkins Jr. wrote:
  
   But ok, let's assume that the use case is create an element and its
   subtree so that you can insert dynamically generated parts of an
   application during runtime, e.g. inserting images in a dynamically
   generated gallery [...]
 
  [...] but here's one that comes to mind which is valid markup: What's
  the output for this
 
  myDocFrag.innerHTML = "<option>One<option>two<option>three";
 
  My proposal would return a single option element with the value One.
 
  But the example here suggests a different use case. There are presumably
  three elements there, not one. If this is a use case we want to address,
  then let's go back to the use cases again: what is the problem we are
  trying to solve? When would you create a document fragment of some
  options, instead of just creating a select with options?

 BTW, for example

 In Handlebars,

 <select>
  {{# each optionListThatComeInPairs }}
    <option>{{ firstThingInPair }}
    <option>{{ secondThingInPair }}
  {{/ each }}
 </select>

 Or equivalently, in MDV

 <select>
  <template iterate="optionsListThatComeInPairs">
    <option>{{ firstThingInPair }}
    <option>{{ secondThingInPair }}
  </template>
 </select>


To clarify, this doesn't suffer from the string concatenation problem that
Ian was worried about, right? {{ firstThingInPair }} is inserted as a
string, not HTML, right? Similarly, if you had 'data-foo={{ attributeValue
}}', it would be escaped appropriately so as to avoid any possibility of
XSS?
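
(For what it's worth, a rough sketch assuming Handlebars' default escaping
behavior, where double-stash {{ }} output is HTML-escaped and only
triple-stash {{{ }}} is left raw:)

    var template = Handlebars.compile(
        '<option data-foo="{{value}}">{{label}}</option>');
    template({ value: '" onmouseover="alert(1)', label: '<b>bold?</b>' });
    // Both insertions come out escaped: the quote becomes &quot; and the
    // angle brackets become &lt; and &gt;, so no markup or attribute
    // injection is possible through the bound values.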

Ojan


Re: [webcomponents] Template element parser changes = Proposal for adding DocumentFragment.innerHTML

2012-05-10 Thread Ojan Vafai
On Thu, May 10, 2012 at 5:13 PM, Rafael Weinstein rafa...@google.com wrote:

 On Thu, May 10, 2012 at 4:58 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 10 May 2012, Rafael Weinstein wrote:
 
  Also, I'm curious why it's ok to peek at the first few characters of the
  string, and not ok to peek at the token stream until we see the first
  start tag?
 
  Because it's predictable and easy to debug. When you're dealing with a
  weird effect caused by some accidental markup hundreds of lines down a
  string, it's really hard to work out what's going on. When the effect is
  caused by the very first thing in the string, it's much easier to notice
  it. (You see this kind problem sometimes on Web pages where text/plain
  files are sent as text/html, or text files are slightly augmented with
  HTML without properly escaping everything -- they render fine until they
  get to something that accidentally looks like markup, and the parser does
  its stuff, and you wonder why half of the 100-page document is bold.)

 In the abstract, I actually agree with you, but this happens to be a
 case when this is effectively never going to be a problem. Just have a
 look at *any* templating language. Any time you see some kind of
 conditional, or loop construct, look at its contents and imagine that
 that's what'll be passed to innerHTML here.

 99.9% of the time it's going to either be all character tokens, or
 whitespace followed by a start tag.

 You're letting a non-existent problem kill a perfectly useful proposal.

 I'm not a huge fan of everything jQuery does either, but regardless of
 it's objective goodness, it has already done the test by offering
 this functionality. The kind of bug you're describing hasn't been
 observed at all. Someone with more jQuery-cred correct me if I'm
 wrong.


I think our best final solution is something roughly like
http://wiki.ecmascript.org/doku.php?id=harmony:quasis. Look at safehtml as
an example. If browsers provide that method, that strikes the best balance
of security and convenience.

That said, innerHTML is here. That it doesn't work on DocumentFragment just
makes people have to use dummy container elements and write less efficient
code. I don't think implementing this will have any effect on whether
people will use our eventual safehtml solution. In the interim, it's just
confusing and clunky that innerHTML works everywhere except
DocumentFragment. IMO it should work on Document as well.
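
(For concreteness, a sketch of the dummy-container workaround referred to
above:)

    // Today: parse into a throwaway container, then move the nodes.
    var container = document.createElement('div');
    container.innerHTML = '<p>Hello</p><p>World</p>';
    var fragment = document.createDocumentFragment();
    while (container.firstChild) {
      fragment.appendChild(container.firstChild);
    }

    // With the proposal under discussion, the extra element goes away:
    // fragment.innerHTML = '<p>Hello</p><p>World</p>';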

As a web developer who has done a lot of work with both Element.create
style APIs and with jQuery string-based APIs, it's really much more of a
pleasure to use the latter. It makes for much more concise and readable
code. Those benefits are high enough that it justifies the clunkiness and
the need to be XSS-careful.

Template element has none of the issues of horrifying string-based
approaches and is only related to this discussion in that it needs
context-free parsing.

Ojan


Re: [editing] input event should have a data property WAS: [D3E] Where did textInput go?

2012-05-02 Thread Ojan Vafai
On Thu, Apr 5, 2012 at 6:19 AM, Aryeh Gregor a...@aryeh.name wrote:

 On Wed, Apr 4, 2012 at 10:07 PM, Ojan Vafai o...@chromium.org wrote:
  The original proposal to drop textInput included that beforeInput/input
  would have a data property of the plain text being inserted. Aryeh, how
 does
  that sound to you? Maybe the property should be called 'text'? 'data' is
  probably too generic.

 Sounds reasonable.  Per spec, the editing variant of these events has
 .command and .value.  I think .text is a good name for the plaintext
 version.  It should just have the value that the input/textarea would
 have if the beforeinput event isn't canceled.


I'd like this to be available for contentEditable as well. Is there any
benefit to restricting this to input/textarea?

As I've said before, I don't think command/value should be restricted to
contentEditable beforeInput/input events. I don't see any downside to
making command, value and text all available for all three cases. It
simplifies things for authors. The code they use for plaintext inputs can
be the same as for rich-text inputs.
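
(A sketch of how an author might use this, assuming the proposed event and
property names from this thread rather than any shipped API:)

    var editor = document.querySelector('[contenteditable]');
    var maxLength = 140;
    editor.addEventListener('beforeinput', function (event) {
      // event.text would carry the plain text about to be inserted,
      // whether the target is an input, a textarea, or contentEditable.
      if (event.text && event.text.length > maxLength) {
        event.preventDefault();
      }
    });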

Ojan


Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Ojan Vafai
<script type="text/html"> works for string-based templating. Special handling
of </script> is not a big enough pain to justify adding a template element.

For Web Components and template systems that want to do DOM based
templating (e.g. MDV), the template element can meet that need much better
than a string-based approach. If nothing else, it's more efficient (e.g. it
only parses the HTML once instead of for each instantiation of the
template).

String-based templating already works. We don't need new API for it.
DOM-based templating and Web Components do need new API in order to work at
all. There's no need, and little benefit, for the template element to try
to meet both use-cases.

Ojan

On Wed, Apr 25, 2012 at 3:20 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Wed, Apr 25, 2012 at 3:17 PM, Brian Kardell bkard...@gmail.com wrote:
  And when that becomes the case, then using the source text becomes
  problematic, not just less efficient, right?

 Yes, for exactly the reasons you can't nest scripts.

 ~TJ




Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Ojan Vafai
On Wed, Apr 25, 2012 at 4:22 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Wed, Apr 25, 2012 at 3:31 PM, Ojan Vafai o...@chromium.org wrote:
  <script type="text/html"> works for string-based templating. Special
 handling
  of </script> is not a big enough pain to justify adding a template
 element.
 
  For Web Components and template systems that want to do DOM based
 templating
  (e.g. MDV), the template element can meet that need much better than a
  string-based approach. If nothing else, it's more efficient (e.g. it only
  parses the HTML once instead of for each instantiation of the template).
 
  String-based templating already works. We don't need new API for it.
  DOM-based templating and Web Components do need new API in order to work
 at
  all. There's no need, and little benefit, for the template element to
 try to
  meet both use-cases.

 String-based templating *doesn't* work unless you take pains to make
 it work.  This is why jQuery has to employ regex hacks to make
 $('<td>foo</td>') work the way you'd expect.  Fixing that in the
 platform is a win, so authors don't have to ship code down the wire to
 deal with this (imo quite reasonable) use-case.

 When you want to do DOM-based templating, such as for Components or
 MDV, you run into the *exact same* problems as the above, where you
 may want to template something that, in normal HTML parsing, expects
 to be in a particular context.  Solving the problem once is nice,
 especially since we get to kill another problem at the same time.  We
 aren't even compromising - this is pretty much exactly what we want
 for full DOM-based templating.


I agree with everything you're saying, but I think you're mixing up two
different problems here. What you're listing above is not what I'm calling
templating. I would call that programmatic DOM creation. We also have to
solve that problem and, coincidentally, it will also require the same
parsing rules as the ones we're suggesting for template. When I'm talking
about templating, I'm talking about something that involves iteration and
data-binding, e.g. <script type="text/html"><div>{{ myBoundVariable
}}</div></script>.

Ojan


[editing] input event should have a data property WAS: [D3E] Where did textInput go?

2012-04-04 Thread Ojan Vafai
BCC: www-dom
CC: public-webapps

The original proposal to drop textInput included that beforeInput/input
would have a data property of the plain text being inserted. Aryeh, how
does that sound to you? Maybe the property should be called 'text'? 'data'
is probably too generic.

On Wed, Apr 4, 2012 at 11:01 AM, Andrew Oakley and...@ado.is-a-geek.net wrote:

 On Wed, 04 Apr 2012 18:58:49 +0200
 Anne van Kesteren ann...@opera.com wrote:

  On Wed, 04 Apr 2012 18:54:46 +0200, Andrew Oakley
  and...@ado.is-a-geek.net wrote:
   The textInput event seems to have been removed from the latest versions
   of DOM 3 Events but I can't find any real explanation as to why it
   disappeared, or if it has been replaced by anything.
 
  See https://www.w3.org/Bugs/Public/show_bug.cgi?id=12958 Conclusion was
  emailed to this list in
  http://lists.w3.org/Archives/Public/www-dom/2012JanMar/0225.html

 Thanks, I must have missed that.  The reason seems to be incompatible
 implementations and that other events can be used for the same thing.

 How would content now determine what text was input?  The input event
 in HTML5 appears to just be a simple event, and the keyboard events are
 obviously not sufficient (copy+paste being the most obvious thing that
 could be missed but there are also the usual IME issues).

  I realise this might be possible with the editing APIs, but that seems an
  excessively complicated (and not supported by any browsers as far as I
  know) way of getting some fairly basic information.

 --
 Andrew Oakley




Re: Where should UA insert text when the focus is changed in a keypress event handler?

2012-03-20 Thread Ojan Vafai
With my web developer hat on, I would expect the WebKit/IE behavior.
Keypress is fired before the DOM is modified (I tested in Gecko and WebKit
on an input element). As such, I'd expect focus changed during a keypress
event to change where the text is inserted. Notably, Gecko does the
WebKit/IE behavior if you use keydown instead of keypress. I don't see any
reason keypress should be different from keydown.

On Tue, Mar 20, 2012 at 10:54 AM, Ryosuke Niwa rn...@webkit.org wrote:

 Hi,

 We're trying to figure out inside which element the editing operation must
 be done when a keypress event handler changes the focused element /
 selection for https://bugs.webkit.org/show_bug.cgi?id=81661.

 Should it be done at wherever focus is after keypress event is dispatched?
 Or whatever keypress event's target was?

 DOM level 3 events doesn't seem to specify this behavior:

 http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html#event-type-keypress

 According to a fellow WebKit contributor, WebKit and Internet Explorer use
 the current focused element whereas Firefox uses the event target.

 Best,
 Ryosuke Niwa
 Software Engineer
 Google Inc.





Re: [DOM4] Question about using sequence<T> v.s., NodeList for

2012-03-16 Thread Ojan Vafai
The main reason to keep NodeList is because we'd like to add other APIs to
NodeList in the future that operate on the Nodes in the list (e.g. remove).
I don't really see use-cases for wanting a similar thing with the other
cases here, so I think sticking with arrays of Ranges and Regions is fine.

On Fri, Mar 16, 2012 at 6:56 AM, Vincent Hardy vha...@adobe.com wrote:


 On Mar 16, 2012, at 1:40 AM, Anne van Kesteren wrote:

 On Fri, 16 Mar 2012 01:59:48 +0100, Vincent Hardy vha...@adobe.com
 wrote:

 b. Use sequence<T> everywhere except where T=Node, in which case we

 would use NodeList. This is consistent with DOM4 and inconsistent within

 the spec.


 I think this is fine, but you should use Range[] and not sequence<Range>.
 You cannot use sequence for attributes. Do you have a pointer to the
 specification by the way? Kind of curious why you want to expose a list of

 Range objects.


 Hi Anne,

 The proposal is at http://wiki.csswg.org/spec/css3-regions/css-om. The
 proposed modifications are not in the specification yet, they need more
 discussions.

 The proposal is to use sequence<Range> as returned values from functions,
 not as attribute values, which is why I went with sequence<T> and not T[]
 (I had a  brief exchange with Cameron about this).

 what I proposed as option b. in my message would be:

 interface Region {
   readonly attribute DOMString flowConsumed;
   sequence<Range> getRegionFlowRanges(); // Returns a static list, new array returned on each call
 };

 interface NamedFlow {
   readonly attribute DOMString name;
   readonly attribute boolean overflow;

   sequence<Region> getRegions();    // Returns a static list, new array returned on each call
   NodeList getContentNodes();       // Returns a static list, new array returned on each call
   sequence<Region> getRegionsByContentNode(Node node); // idem
 };


 The alternate options on the page are:

 *Option I*


 interface Region {
 readonly attribute DOMString flowConsumed;
 sequence<Range> getRegionFlowRanges();
 };


 interface NamedFlow {
   readonly attribute DOMString name;
   readonly attribute boolean overflow;

   sequence<Region> getRegions();
   sequence<Node> getContentNodes();
   sequence<Region> getRegionsByContentNode(Node node);
 };


 Or: *Option II*


 [ArrayClass]
 interface RangeList {
   getter Range? item(unsigned long index);
   readonly attribute unsigned long length;
 };

 [ArrayClass]
 interface RegionList {
   getter Region? item(unsigned long index);
   readonly attribute unsigned long length;
 };

 interface Region {
 readonly attribute DOMString flowConsumed;
 RangeList getRegionFlowRanges();
 };

 interface NamedFlow {
   readonly attribute DOMString name;
   readonly attribute boolean overflow;

   RegionList getRegions();
   NodeList getContentNodes();
   RegionList getRegionsByContentNode(Node node);
 };

 The reason we are using an array of ranges as opposed to a single Range
 object is that the named flow is made of a list of elements that do not
 share a common parent. So for example, we can have a sequence of elemA and
 then elemB in the named flow, but they do not have the same parent. When
 they are laid out across regions, say region1 and region2, we may get all
 of elemA in region1 and some of elemB. In that case, the ranges for region1
 would be:

 - first range that encompasses all of elemA
 - second range that encompasses some of elemB

 and for region2:

 - range that encompasses the remainder of elemB (i.e, the start container
 and offset on this range are the same as the end container and end offset
 on the last range in region1).

 Does that answer your question?
 Vincent





Re: [DOM4] NodeList should be deprecated

2012-03-16 Thread Ojan Vafai
On Wed, Mar 14, 2012 at 5:32 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 14 Mar 2012 09:03:23 +0100, Cameron McCormack c...@mcc.id.au
 wrote:

 Anne van Kesteren:

 Wasn't there a compatibility constrain with doing that?


 I don't remember -- the only difference it would make is that
 Object.getPrototypeOf(NodeList.prototype) == Array.prototype.


 Okay, annotated NodeList with [ArrayClass]. What about HTMLCollection?
 Should I add it there too? Could you take a look at NodeList and
 HTMLCollection for accuracy?


Should HTMLCollection inherit from NodeList? All the APIs I can think of
that you'd want to add to NodeList you'd want on HTMLCollection as well.

To be clear, array methods that modify the array in-place would silently do
nothing because the properties on NodeLists/HTMLCollections are read-only.
But you'd be able to use things like forEach and filter.


Re: [DOM4] NodeList should be deprecated

2012-03-13 Thread Ojan Vafai
Upon further thought, I take this suggestion back. Static NodeList as it
currently exists is just an underpowered array, but that doesn't mean
that's what it always has to be. In the future, we should add methods to
NodeList that operate on Nodes, e.g. add a remove method to NodeList that
calls remove on all the Nodes in the NodeList. Also, in theory, browsers may
be able to optimize common cases of NodeLists (e.g. cache frequently
accessed NodeLists).

We should make static NodeList inherit from Array though so that you can do
regular array operations on it.
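
(A sketch of what that would buy authors:)

    var items = document.querySelectorAll('.item');  // a static NodeList

    // Today, Array methods have to be borrowed explicitly:
    Array.prototype.forEach.call(items, function (item) {
      item.classList.add('selected');
    });

    // If static NodeList inherited from Array, the generic methods would
    // just work (illustrative; not how NodeList behaves today):
    // items.filter(function (item) { return !item.hidden; });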

On Tue, Mar 13, 2012 at 5:59 AM, Rick Waldron waldron.r...@gmail.com wrote:


 On Mar 13, 2012, at 4:29 AM, Anne van Kesteren ann...@opera.com wrote:

  On Mon, 12 Mar 2012 21:06:00 +0100, Rick Waldron waldron.r...@gmail.com
 wrote:
  On Mar 12, 2012, at 3:06 PM, Anne van Kesteren ann...@opera.com
 wrote:
  On Mon, 12 Mar 2012 19:07:31 +0100, Rick Waldron 
 waldron.r...@gmail.com wrote:
  The NodeList item() method is a blocker.
 
  Blocker in what way?
 
  As I've always understood it - the item() method is what differentiates
 NodeList from Array and blocks it from being just an array.
 
  Is this incorrect?
 
  I think there is more, such as arrays being mutable, but the suggestion
 was to change two accessors from mutation observers to return platform
 array objects rather than NodeLists, which is a change we can still make
 given that mutation observers is not widely deployed yet.

 In that case, very cool. Thanks for the clarification.

 
 
  --
  Anne van Kesteren
  http://annevankesteren.nl/



[DOM4] NodeList should be deprecated

2012-03-08 Thread Ojan Vafai
Dynamic NodeLists have a significant memory and performance cost. Static
NodeLists are basically just under-powered arrays. We should just return
Node arrays from any new APIs that return a list of Nodes. I'd like to see
NodeList get similar treatment to hasFeature, i.e. a note that it not be
used for new APIs and possibly even explicitly list the APIs allowed to
return them.

I don't see the Dynamic/Static distinction in DOM4 or the HTML spec. Is
this specified anywhere?

For reference, this came up in WebKit with some new Regions APIs
https://bugs.webkit.org/show_bug.cgi?id=80638.


Re: April face-to-face meetings for WebApps

2012-02-07 Thread Ojan Vafai
On Tue, Feb 7, 2012 at 9:28 AM, Dimitri Glazkov dglaz...@google.com wrote:

 On Tue, Feb 7, 2012 at 5:34 AM, Anne van Kesteren ann...@opera.com
 wrote:
  On Tue, 07 Feb 2012 13:55:59 +0100, Arthur Barstow 
 art.bars...@nokia.com
  wrote:
 
  I am especially interested in whether Editors and Test
  Facilitators/Contributors will attend.
 
 
  Highly likely I'll attend.

 Me too.


Same.


Re: [File API] Draft for Review

2012-01-26 Thread Ojan Vafai
On Thu, Jan 26, 2012 at 4:42 PM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, Jan 26, 2012 at 6:25 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 As I argued in 
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1520.html,
 we should absolutely *not* be adding more boolean arguments to the
 platform.  They should be exposed as boolean properties in an
 dictionary.


 I don't find this compelling; you could make the same argument for any
 optional argument.  When you only have a couple arguments, the additional
 verbosity is a loss, particularly if the argument is used frequently.


Depends on the argument type. For example, in document.createElement('div')
it's pretty obvious that the string is the tagname. Whereas in
node.cloneNode(true) it's not clear without looking at the signature of
cloneNode whether true means deep clone or shallow clone.
node.cloneNode({deep:true}) or node.cloneNode('deep') are clear without
looking up the signature.

That said, I sympathize that the overhead of creating an object or needing
to do a string compare just for a boolean is kind of sucky.


Re: Obsolescence notices on old specifications, again

2012-01-24 Thread Ojan Vafai
Can we just compromise on the language here? I don't think we'll find
agreement on the proper way to do spec work.

How about: DOM2 is no longer updated. DOM4 is the latest actively
maintained version. link to DOM4

On Tue, Jan 24, 2012 at 11:43 AM, Glenn Adams gl...@skynav.com wrote:

 I'm sorry, but for some, saying DOM2 (a REC) = DOM4 (a WIP), is the same
 as saying DOM2 is a WIP. This is because the former can be read as saying
 that the normative content of DOM2 is now replaced with DOM4.

  I'm not sure what you mean by [DOM2] is a work on which progress has
 stopped. DOM2 is a REC, and is only subject to errata [1] and rescinding
 [2].

 [1] http://www.w3.org/2005/10/Process-20051014/tr.html#rec-modify
 [2] http://www.w3.org/2005/10/Process-20051014/tr.html#rec-rescind

 I'm not sure where the proposed obsolescence message falls in terms of [1]
 or [2]. Perhaps you could clarify, since presumably the process document
 will apply to any proposed change.

 On Tue, Jan 24, 2012 at 12:36 PM, Ms2ger ms2...@gmail.com wrote:

 On 01/24/2012 08:33 PM, Glenn Adams wrote:

 The problem is that the proposal (as I understand it) is to insert
 something like:

 DOM2 (a REC) is obsolete. Use DOM4 (a work in progress).

 This addition is tantamount (by the reading of some) to demoting the
 status
 of DOM2 to a work in progress.


 Not at all; it's a work on which progress has stopped long ago.





Re: Obsolescence notices on old specifications, again

2012-01-24 Thread Ojan Vafai
On Tue, Jan 24, 2012 at 11:50 AM, Glenn Adams gl...@skynav.com wrote:


 On Tue, Jan 24, 2012 at 12:39 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 24 Jan 2012, Glenn Adams wrote:
 
  The problem is that the proposal (as I understand it) is to insert
  something like:
 
  DOM2 (a REC) is obsolete. Use DOM4 (a work in progress).
 
  This addition is tantamount (by the reading of some) to demoting the
  status of DOM2 to a work in progress.

 It should be:

 DOM2 (a stale document) is obsolete. Use DOM4 (a work that is actively
 maintained).


 It would be more accurate perhaps to say that DOM4 is a work that is
 under active development. In the minds of most readers, maintenance is
 an errata process that follows completion (REC status).


I don't think the distinctions you are making here really matter. How
about: DOM2 is no longer updated. DOM4 is under active development. link
to DOM4.

It doesn't demote DOM2 to a work in progress, because a work in
 progress is a step _up_ from where DOM2 is now.


 Many (most?) government, industry, and business activities that formally
 utilize W3C specifications would view a work in progress as less mature
 than a REC. That results in the former being assigned a lower value than the
 latter. So, yes, demote is the correct word.


You keep saying this throughout this thread without pointing to specifics.
It's impossible to argue with broad, sweeping generalizations like this. So
far, you have yet to point to one concrete organization/statute that cares
about REC status.


Re: Obsolescence notices on old specifications, again

2012-01-23 Thread Ojan Vafai
I support adding warnings. As far as I know, all major browser vendors are
using the more modern version of each of these specs for implementation
work. That's certainly true for WebKit at least. It doesn't help anyone to
look at outdated specs considering them to be the latest and greatest when
the vast majority of implementations no longer match them.

Ojan

On Mon, Jan 23, 2012 at 10:06 AM, Glenn Adams gl...@skynav.com wrote:

 I object to adding such notice until all of the proposed replacement specs
 reach REC status.

 G.


 On Mon, Jan 23, 2012 at 2:01 AM, Ms2ger ms2...@gmail.com wrote:

 Hi all,

 The recent message to www-dom about DOM2HTML [1] made me realize that we
 still haven't added warnings to obsolete DOM specifications to hopefully
 avoid that people use them as a reference.

 I propose that we add a pointer to the contemporary specification to the
 following specifications:

 * DOM 2 Core (DOM4)
 * DOM 2 Views (HTML)
 * DOM 2 Events (D3E)
 * DOM 2 Style (CSSOM)
 * DOM 2 Traversal and Range (DOM4)
 * DOM 2 HTML (HTML)
 * DOM 3 Core (DOM4)

 and a recommendation against implementing the following specifications:

 * DOM 3 Load and Save
 * DOM 3 Validation

 Hearing no objections, I'll try to move this forward.

 Ms2ger

 [1] 
 http://lists.w3.org/Archives/Public/www-dom/2012JanMar/0011.html





Re: Selection of a document that doesn't have a window

2012-01-12 Thread Ojan Vafai
Can you do anything useful with a selection on a document that doesn't have
a window? If so, the IE9 behavior makes sense. If not, I prefer the WebKit
behavior.

For phrasing, could you define it in terms of document.defaultView? In
other words, document.getSelection() would just return document.defaultView
? document.defaultView.getSelection() : null.
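
(Expressed as script, the proposed behavior would be roughly:)

    function getSelectionFor(doc) {
      return doc.defaultView ? doc.defaultView.getSelection() : null;
    }

    getSelectionFor(document);
    // => the page's Selection object
    getSelectionFor(document.implementation.createHTMLDocument('x'));
    // => null, since a document created this way has no window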

On Thu, Jan 12, 2012 at 7:58 AM, Aryeh Gregor a...@aryeh.name wrote:

 What does document.implementation.createHTMLDocument().getSelection()
 return?

 * IE9 returns a Selection object unique to that document.
 * Firefox 12.0a1 and Opera Next 12.00 alpha return the same thing as
 document.getSelection().
 * Chrome 17 dev returns null.

 I prefer IE's behavior just for the sake of simplicity.  If we go with
 Gecko/WebKit/Opera, we have to decide how to identify which documents
 get their own selections and which don't.  The definition should
 probably be something like documents that are returned by the
 .document property of some window, but I have no idea if that's a
 sane way to phrase it.

 So should the spec follow IE?  If not, what definition should we use
 to determine which documents get selections?




Re: [editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements

2012-01-11 Thread Ojan Vafai
On Wed, Jan 11, 2012 at 8:15 AM, Aryeh Gregor a...@aryeh.name wrote:

 On Tue, Jan 10, 2012 at 4:48 PM, Charles Pritchard ch...@jumis.com
 wrote:
  Historically, one of my biggest frustrations with contentEditable is that

 you have to take it all or none. The lack of configurability is
 frustrating
  as a developer. Maybe the solution is to come up with a lower level set
 of
  editing primitives in place of contentEditable instead of trying to
 extend
  it though.

 Yes, that's definitely something we need to do.  There are algorithms
 I've defined that would probably be really useful to web authors, like
 wrap a list of nodes or some version of set the value of the
 selection (= inline formatting algorithm).  I've been holding off on
 exposing these to authors because I don't know if these algorithms are
 correct yet, and I don't want implementers jumping the gun and
 exposing them before using them internally so they're well-tested.  I
 expect they'll need to be refactored a bunch once implementers try
 actually reimplementing their editing commands in terms of them, and
 don't want to break them for authors when that happens.


Yup. Makes sense. I agree that with editing we're not at a point where it's
at all clear what a good lower-level API would be.


Re: [editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements

2012-01-10 Thread Ojan Vafai
On Tue, Jan 10, 2012 at 12:30 PM, Aryeh Gregor a...@aryeh.name wrote:

 On Fri, Jan 6, 2012 at 10:12 PM, Ojan Vafai o...@chromium.org wrote:
  We should make this configurable via execCommand:
  document.execCommand("TabBehavior", false, bitmask);

 I'm leery of global flags like that, because they mean that if you
 have two editors on the same page, they can interfere with each other
 unwittingly.  useCss/styleWithCss is bad enough; I've seen author code
 that just sets useCss or styleWithCss before every single command in
 case something changed it in between.

 Could the author just intercept the keystroke and run
 document.execCommand(indent) themselves?  It's not as convenient, I
 admit.  Alternatively, perhaps the flag could be set per editing host
 somehow, and only function when that editing host has focus, although
 I'm not sure what API to use.


I agree the API is not the best. We should put execCommand, et. al. on
Element. That would solve the global flag thing for useCss/styleWithCss as
well. It's also more often what a website actually wants. They have a
toolbar associated with each editing host. They don't want a click on the
toolbar to modify content in a different editing host. This is a change we
should make regardless of what we decide for tabbing behavior IMO.

Calling indent doesn't actually match tabbing behavior (e.g. inserting a
tab/spaces or, in a table cell, going to the next cell), right? I guess
another way we could approach this is to add document.execCommand('Tab')
that does the text-editing tabbing behavior. I'd be OK with that (the
command name could probably be better).
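
(A sketch of what an element-scoped variant might look like; the API is
hypothetical and the commands shown are just examples:)

    var editor = document.querySelector('[contenteditable]');

    // Hypothetical: flags and commands would apply only to this editing
    // host, so two editors on one page could not interfere with each other.
    editor.execCommand('styleWithCSS', false, 'true');
    editor.execCommand('indent', false, null);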


  The bitmask is because you might want a different set of behaviors:
  -Tabbing in lists
  -Tabbing in table cells
  -Tabbing blockquotes
  -Tab in none of the above insert a tab
  -Tab in none of the above insert X spaces (X is controlled by the CSS
  tab-size property?)

 Bitmasks are bad -- many JavaScript authors don't understand binary
 well, if at all.  Also, what are use-cases where you'd want to toggle
 indentation in all these cases separately?  More complexity without
 supporting use-cases is a bad idea -- browsers have enough trouble
 being interoperable as it stands, and more complexity just makes it
 harder.


The bitmask is not a great idea, but there are certainly editors that would
want tabbing in lists to work, but tab outside of lists to do the normal
web tabbing behavior. Maybe you're right that we should just have one
toggle though and if you want something more specific you do it in JS.

Historically, one of my biggest frustrations with contentEditable is that
you have to take it all or none. The lack of configurability is frustrating
as a developer. Maybe the solution is to come up with a lower level set of
editing primitives in place of contentEditable instead of trying to extend
it though.

Ojan


Re: [XHR] responseType json

2012-01-06 Thread Ojan Vafai
On Fri, Jan 6, 2012 at 12:18 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 1/6/12 12:13 PM, Jarred Nicholls wrote:

 WebKit is used in many walled garden environments, so we consider these
 scenarios, but as a secondary goal to our primary goal of being a
 standards compliant browser engine.  The point being, there will always
 be content that's created solely for WebKit, so that's not a good
 argument to make.  So generally speaking, if someone is aiming to create
 content that's x-browser compatible, they'll do just that and use the
 least common denominators.


 People never aim to create content that's cross-browser compatible per se,
 with a tiny minority of exceptions.

 People aim to create content that reaches users.

 What that means is that right now people are busy authoring webkit-only
 websites on the open web because they think that webkit is the only UA that
 will ever matter on mobile.  And if you point out this assumption to these
 people, they will tell you right to your face that it's a perfectly
 justified assumption.  The problem is bad enough that both Trident and
 Gecko have seriously considered implementing support for some subset of
 -webkit CSS properties.  Note that people here includes divisions of
 Google.

 As a result, any time WebKit deviates from standards, that _will_ 100%
 guaranteed cause sites to be created that depend on those deviations; the
 other UAs then have the choice of not working on those sites or duplicating
 the deviations.

 We've seen all this before, circa 2001 or so.

 Maybe in this particular case it doesn't matter, and maybe the spec in
 this case should just change, but if so, please argue for that, as the rest
 of your mail does, not for the principle of shipping random spec violations
 just because you want to.   In general if WebKit wants to do special
 webkitty things in walled gardens that's fine.  Don't pollute the web with
 them if it can be avoided.  Same thing applies to other UAs, obviously.


I'm ambivalent about whether we should restrict to utf8 or not. On the one
hand, having everyone on utf8 would greatly simplify the web. On the other
hand, I can imagine this hurting download size for japanese/chinese
websites (i.e. they'd want utf-16).

I agree with Boris that we don't need to pollute the web if we want to
expose this to WebKit's walled-garden environments. We have mechanisms for
exposing things only to those environments specifically to avoid this
problem. Lets keep this discussion focused on what's best for the web. We
can make WebKit do the appropriate thing.


Re: Pressing Enter in contenteditable: p or br or div?

2012-01-06 Thread Ojan Vafai
BCC: whatwg, CC:public-webapps since discussion of the editing spec has
moved

I'm OK with this conclusion, but I still strongly prefer div to be the
default single-line container name. Also, I'd really like the default
single-line container name to be configurable in some way. Different apps
have different needs and it's crappy for them to have to handle enter
themselves just to get a different block type on enter.

Something like document.execCommand("DefaultBlock", false, tagName). What
values are valid for tagName are open to discussion. At a minimum, I'd want
to see div, p and br. As one proof that this is valuable, the Closure
editor supports these three with custom code and they are all used in
different apps. I'm tempted to say that any block type should be allowed,
but I'd be OK with starting with the tree above. For example, I could see a
use-case for li if you wanted an editable widget that only contained a
single list.

Ojan

On Mon, May 30, 2011 at 1:16 PM, Aryeh Gregor simetrical+...@gmail.com wrote:

 On Thu, May 12, 2011 at 4:28 PM, Aryeh Gregor simetrical+...@gmail.com
 wrote:
  Behavior for Enter in contenteditable in current browsers seems to be
  as follows:
 
  * IE9 wraps all lines in <p> (including if you start typing in an
  empty text box).  If you hit Enter multiple times, it inserts empty
  <p>s.  Shift-Enter inserts <br>.
  * Firefox 4.0 just uses <br _moz_dirty=""> for Enter and Shift-Enter,
  always.  (What's _moz_dirty for?)
  * Chrome 12 dev doesn't wrap a line when you start typing, but when
  you hit Enter it wraps the new line in a <div>.  Hitting Enter
  multiple times outputs <div><br></div>, and Shift-Enter always inserts
  <br>.
  * Opera 11.10 wraps in <p> like IE, but for blank lines it uses
  <p><br></p> instead of just <p></p> (they render the same).
 
  What behavior do we want?

 I ended up going with the general approach of IE/Opera:


 http://aryeh.name/spec/editcommands/editcommands.html#additional-requirements

 It turns out WebKit and Opera make the insertParagraph command behave
 essentially like hitting Enter, so I actually wrote all the
 requirements there (IE's and Firefox's behavior for insertParagraph
 was very different and didn't seem useful):


 http://aryeh.name/spec/editcommands/editcommands.html#the-insertparagraph-command

 The basic idea is that if the cursor isn't wrapped in a single-line
 container (address, dd, div, dt, h*, li, p, pre) then the current line
 gets wrapped in a p.  Then the current single-line container is
 split in two, mostly.  Exceptions are roughly:

 * For pre and address, insert a br instead of splitting the element.
  (This matches Firefox for pre and address, and Opera for pre but not
 address.  IE/Chrome make multiple pres/addresses.)
 * For an empty li/dt/dd, destroy it and break out of its container, so
 hitting Enter twice in a list breaks out of the list.  (Everyone does
 this for li, only Firefox does for dt/dd.)
 * If the cursor is at the end of an h* element, make the new element a
 p instead of a header.  (Everyone does this.)
 * If the cursor is at the end of a dd/dt element, it switches to dt/dd
 respectively.  (Only Firefox does this, but it makes sense.)

 Like the rest of the spec, this is still a rough draft and I haven't
 tried to pin corner cases down yet, so it's probably not advisable to
 try implementing it yet as written.  As always, you can see how the
 spec implementation behaves for various input by looking at
 autoimplementation.html:

 http://aryeh.name/spec/editcommands/autoimplementation.html#insertparagraph



[editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements

2012-01-06 Thread Ojan Vafai
BCC: whatwg, CC:public-webapps since discussion of the editing spec has
moved

On Tue, Jun 14, 2011 at 12:54 PM, Aryeh Gregor simetrical+...@gmail.com wrote:

 You suggest that the tab key in browsers should act like indent, as in

dedicated text editors.  This isn't tenable -- it means that if you're
 using Tab to cycle through focusable elements on the page, as soon as
 it hits a contenteditable area it will get stuck and start doing
 something different.  No current browser does this, for good reason.


There are strong use-cases for both. In an app like Google Docs you
certainly want tab to act like indent. In a mail app, it's more of a
toss-up. In something like the Google+ sharing widget, you certainly want
it to maintain normal web tabbing behavior. Anecdotally, gmail has an
internal lab to enable document-like tabbing behavior and it is crazy
popular. People gush over it.

We should make this configurable via execCommand:
document.execCommand("TabBehavior", false, bitmask);

The bitmask is because you might want a different set of behaviors:
-Tabbing in lists
-Tabbing in table cells
-Tabbing blockquotes
-Tab in none of the above insert a tab
-Tab in none of the above insert X spaces (X is controlled by the CSS
tab-size property?)
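
(Purely illustrative, with made-up flag names, the bitmask idea might look
like:)

    var TAB_IN_LISTS       = 1;
    var TAB_IN_TABLE_CELLS = 2;
    var TAB_IN_BLOCKQUOTES = 4;
    var TAB_INSERTS_TAB    = 8;

    document.execCommand("TabBehavior", false,
        TAB_IN_LISTS | TAB_IN_TABLE_CELLS | TAB_INSERTS_TAB);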

Ojan


Re: before/after editaction

2012-01-04 Thread Ojan Vafai
On Thu, Oct 20, 2011 at 7:02 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Thu, Oct 20, 2011 at 6:57 PM, Ryosuke Niwa rn...@webkit.org wrote:

 I don't think we can make such an assumption. People mutate DOM on input
 event all the time:
 http://codesearch.google.com/#search/q=%20oninput=type=cs

  Including any DOM mutations in the on-going transaction would mean that
  the UA will end up trying to revert those changes since the entire document
  shares the one undo scope by default, and may end up mutating the DOM in
  unexpected ways.


 I'll add that, most significantly, when reverting DOM changes made in the
 input event listener fails, the UA may end up aborting the undo/redo
 process before it even gets to revert the actual DOM changes made by the user
 editing action or execCommand.


I do think sites will do that, but I expect that in practice the DOM
changes they are making are related to whatever the input was and will undo
fine.

Having just beforeInput/input will be better than having
input/beforeeditaction/aftereditaction (and possibly still beforeInput). We
should strive for that end result if we can achieve it.

It makes the input event more useful in ways that meet more use-cases than
just aftereditaction. At the same time it avoids growing the platform
unnecessarily with more events to make sense of.


Re: [FileAPI] Remove readAsBinaryString?

2011-12-18 Thread Ojan Vafai
What's the point of having deprecated features in a spec? If the purpose of
a specification is to get interoperability, then a UA either does or
doesn't need to implement the feature. There's no point in keeping a
feature that we think should be killed and there's no point in removing a
feature that can't be killed because too much web content relies on it.

DOM4 does mark some things as historical, but DOM4's use of historical
is different than deprecating it in a subtle but important way. The
historical bits in DOM4 will still need to be implemented by all UAs, but
the features they correspond to won't (e.g. enums for node types that we're
killing are kept).

On Fri, Dec 16, 2011 at 8:42 AM, Arun Ranganathan
aranganat...@mozilla.comwrote:

 I'm happy to remove this from the specification.  Right now this is marked
 as deprecated, which I suppose isn't strong enough discouragement?  :)

 - Original Message -
  Another topic that came up at TPAC was readAsBinaryString [1]. This
  method
  predates support for typed arrays in the FileAPI and allows binary
  data
  to be read and stored in a string. This is an inefficient way to store
  data now that we have ArrayBuffer and we'd like to not support this
  method.
 
  At TPAC I proposed that we remove readAsBinaryString from the spec and
  there was some support for this idea. I'd like to propose that we
  change
  the spec to remove this.
 
  Thanks,
 
  Adrian.
 
  [1] http://dev.w3.org/2006/webapi/FileAPI/#readAsBinaryString




Re: XPath and find/findAll methods

2011-11-28 Thread Ojan Vafai
On Fri, Nov 25, 2011 at 1:03 AM, Lachlan Hunt lachlan.h...@lachy.id.au wrote:

 On 2011-11-24 14:49, Robin Berjon wrote:

 So, now for the money question: should we charter this?


 Only if someone is volunteering to be the editor and drafts a spec.


Every task we take on in the working group has a cost. It makes it more
difficult to focus on other features and specs we want to see happen. I
would prefer that we focus on making css selectors richer instead of
extending xpath. I don't have new arguments to make beyond what's already
been said.

My strong preference would be that we not take on this work, but I won't
block it happening if someone is motivated to be the editor for it. I don't
expect there to be much interest from WebKit/Chromium to implement this
though.


Re: Adding methods to Element.prototype WAS: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?

2011-11-22 Thread Ojan Vafai
+ian since this wording is actually in the HTML spec.

I'm not sure how exactly this should be specced. DOM4 could specify the two
interfaces and HTML could use those definitions?

On Mon, Nov 21, 2011 at 7:05 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/21/11 9:58 PM, Ojan Vafai wrote:

 I think this is the only sane solution to this problem. Lets split up
 the Element interface. I'm not attached to these names, but something
 like ElementExposed and Element. Element inherits (mixins?)
 ElementExposed and only the methods on ElementExposed are exposed to the
 on* lookup chain.


 This is somewhat backwards.  In particular, expandos almost certainly need
 to be exposed on the on* lookup chain for backwards compat.  So more
 precisely, only the properties and methods that are NOT on ElementExposed
 (nor on any other DOM APIs elements implement at the moment) are missing
 from the on* lookup chain.  I agree that all new methods and properties we
 add should go in this set.  How to structure this in spec terms is a good
 question.


Hmmm. I wasn't thinking of expandos. We'll definitely need to keep
supporting that. :(


  ElementExposed would have everything that is currently on the Element
 API and all new methods would go on Element. You could imagine that over
 time we could figure out the minimal set of APIs required by web compat
 to leave on ElementExposed and move everything else to Element.


 This exercise doesn't seem to be worthwhile.  What would be the benefit?


The fewer properties that are exposed this way, the smaller the quirk is. I
was hoping that we could have a fixed small list of properties that the
spec says are exposed. Maybe that's too ambitious and doesn't actually buy
us much though.


  In fact, we should do this for form and document as well.


 Yes.

 -Boris



Re: Adding methods to Element.prototype

2011-11-22 Thread Ojan Vafai
On Tue, Nov 22, 2011 at 5:28 AM, Anne van Kesteren ann...@opera.com wrote:

 On Tue, 22 Nov 2011 03:58:32 +0100, Ojan Vafai o...@chromium.org wrote:

 I think this is the only sane solution to this problem. Lets split up the
 Element interface.


 I think an IDL annotation would work better from a specification
 perspective. E.g. [NoScope].

 Otherwise we'd need to do this interface split for each element type where
 we add new attributes.


Sounds fine to me. I'd prefer to have the annotation be [Scope] though. The
default when adding new attributes should be to not scope them.


Re: Adding methods to Element.prototype WAS: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?

2011-11-22 Thread Ojan Vafai
On Tue, Nov 22, 2011 at 10:04 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/22/11 12:57 PM, Ojan Vafai wrote:

 The fewer properties that are exposed this way, the smaller the quirk
 is.


 I think the problem is that from web developers point of view the quirky
 behavior is _not_ exposing properties.  Certainly in the short term...


That's true for a large percentage of developers for sure, but most web
developers I've talked to about this are surprised to learn about this
behavior and have never (intentionally) depended on it.


 In the long term, since we have to expose all expandos, I suspect that not
 exposing properties will continue to be seen as the quirky behavior.

 Note, by the way, that having to expose expandos (including expandos on
 prototypes) but not new built-in properties might make for some fun
 spec-work (e.g., what happens when the standard adds a property but then
 some page overrides it on the prototype with a different property
 definition: should the page-defined value be exposed?).

 Again, some decent data on what pages actually do in on* handlers would be
 really good.  I have no idea how to get it.  :(


I've been trying to get some data on this, but I haven't had much success.
I'll report back if I do. But even if I get data, it'll be for specific
names, not a generic "what do pages do in on* handlers" answer, so it wouldn't
actually help resolve this expando question.

 I was hoping that we could have a fixed small list of properties
 that the spec says are exposed. Maybe that's too ambitious and doesn't
 actually buy us much though.


 Given the expando situation, I'm not sure that approach works at all.  :(


Well, it would be a small list + expandos. :)




 -Boris




Re: Adding methods to Element.prototype WAS: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?

2011-11-22 Thread Ojan Vafai
On Tue, Nov 22, 2011 at 4:12 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Nov 22, 2011 at 10:24 AM, Ojan Vafai o...@chromium.org wrote:
  On Tue, Nov 22, 2011 at 10:04 AM, Boris Zbarsky bzbar...@mit.edu
 wrote:
  On 11/22/11 12:57 PM, Ojan Vafai wrote:
  I was hoping that we could have a fixed small list of properties
  that the spec says are exposed. Maybe that's too ambitious and doesn't
  actually buy us much though.
 
  Given the expando situation, I'm not sure that approach works at all.
  :(
 
  Well, it would be a small list + expandos. :)

 This is a feature that is definitely causing severe pain to the
 platform since it's putting constraints on APIs that we can add to our
 main data model, the DOM.

 It would be really awesome if we could figure out a way to fix this.
 I'd say the first step would be to evaluate if we actually need
 expandos. And be prepared to break a few pages by removing support for
 them. If we can agree to do that, then it's likely that we can create
 a small object which forwards a short list of properties to the form
 element (likely including the dynamic list of form element names) and
 only put that object in scope.


Yes! I don't know how we can test this without just pushing out a release
that does this and seeing what breaks though.

Ojan


Adding methods to Element.prototype WAS: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?

2011-11-21 Thread Ojan Vafai
I think this is the only sane solution to this problem. Lets split up the
Element interface. I'm not attached to these names, but something
like ElementExposed and Element. Element inherits (mixins?) ElementExposed
and only the methods on ElementExposed are exposed to the on* lookup chain.

ElementExposed would have everything that is currently on the Element API
and all new methods would go on Element. You could imagine that over time
we could figure out the minimal set of APIs required by web compat to leave
on ElementExposed and move everything else to Element.

In fact, we should do this for form and document as well.

It's a nasty wart to put on the platform, but it's better than being unable
to expose APIs with good names, or than exposing APIs with good names and
breaking existing content.

Ojan

On Mon, Nov 21, 2011 at 6:00 PM, Aryeh Gregor a...@aryeh.name wrote:

 On Mon, Nov 21, 2011 at 8:54 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  You're not misunderstanding, but you're wrong.  ^_^  The element
  itself is put in the lookup chain before document.  See this testcase:
 
  <!DOCTYPE html>
  <button onclick="alert(namespaceURI)">foo</button>
 
  (namespaceURI was the first property I could think of that's on
  Element but not Document.)

 Awesome.  It seems on* is even more pathological than I realized.  So
 definitely, I don't think we want to avoid adding short names to Node
 or Element or Document forever just because of this.  If the cost is
 making bare name lookup in on* slightly more pathological than it
 already is, I don't think that's a big deal.  Authors who want to
 preserve their sanity should already be prefixing everything with
 window. or document. or whatever is appropriate.  Let's add
 .matches() and just make it not triggered as a bare name from on*.




Re: innerHTML in DocumentFragment

2011-11-03 Thread Ojan Vafai
If we can get away with it WRT web compat, we should make
createContextualFragment work context-less and we should make
DocumentFragment.innerHTML work as Yehuda describes. There are clear
use-cases for this that web devs run into all the time.

I don't see any downside except if the web already depends on the current
behavior of these two cases throwing an error.
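
To make the two options concrete, a rough sketch of the usage being asked for; the DocumentFragment.innerHTML line is the proposed behavior, not anything specced at this point:

// Proposed: parse markup without committing to a context element up front.
var frag = document.createDocumentFragment();
frag.innerHTML = "<li>one</li><li>two</li>";   // proposed, not yet specced
document.querySelector("ul").appendChild(frag);

// Existing Range-based approach, which needs a context node first:
var range = document.createRange();
range.selectNodeContents(document.querySelector("ul"));
var frag2 = range.createContextualFragment("<li>three</li>");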

On Thu, Nov 3, 2011 at 4:53 PM, Tim Down timd...@gmail.com wrote:

 Yes, now I re-read it, that's clear. Sorry.

 Tim

 On 3 November 2011 23:51, James Graham jgra...@opera.com wrote:
  On Thu, 3 Nov 2011, Tim Down wrote:
 
  Have you looked at the createContextualFragment() method of Range?
 
 
 http://html5.org/specs/dom-parsing.html#dom-range-createcontextualfragment
 
  That doesn't meet the use case where you don't know the contextual
 element
  upfront. As I understand it that is important for some of the use cases.
 
  I think this is possible to solve, but needs an extra mode in the parser.
  Also, createcontextualFragment could be modified to take null as the
 context
  to work in this mode.
 




Re: Is BlobBuilder needed?

2011-10-25 Thread Ojan Vafai
The new API is smaller and simpler. Less to implement and less for web
developers to understand. If it can meet all our use-cases without
significant performance problems, then it's a win and we should do it.

For line-endings, you could have the Blob constructor also take an optional
endings argument:
new Blob(String|Array|Blob|ArrayBuffer data, [optional] String contentType,
[optional] String endings);

On Tue, Oct 25, 2011 at 11:57 AM, Michael Nordman micha...@google.com wrote:

 This ultimately amounts to syntactic sugar compared to the existing api.
 It's tasty, but adds no new functionality. Also there's still the issue of
 how this new api would provide the existing functionality around line
 endings, so less functionality at the moment. I'm not opposed to
 additions/enhancements,  just want to put it in perspective and to question
 whether the api churn is worth it.

 On Mon, Oct 24, 2011 at 10:19 PM, Erik Arvidsson a...@chromium.org wrote:

 On Mon, Oct 24, 2011 at 19:54, Jonas Sicking jo...@sicking.cc wrote:
  Sure. Though you could also just do
 
  var b = new Blob();
  b = new Blob([b, data]);
  b = new Blob([b, moreData]);

 That works for me.


I'm happy with this. In theory, vendors could implement this using
copy-on-write or something similar so that this pattern is roughly as
efficient as BlobBuilder, right?
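
For reference, the incremental pattern from Jonas's snippet above, spelled out (a sketch assuming the proposed array-taking Blob constructor):

// Each step wraps the previous Blob plus new data in a fresh Blob; with
// copy-on-write internals the earlier bytes need not be duplicated.
var b = new Blob([]);
b = new Blob([b, "chunk 1\n"]);
b = new Blob([b, "chunk 2\n"]);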


Re: Is BlobBuilder needed?

2011-10-25 Thread Ojan Vafai
On Tue, Oct 25, 2011 at 12:57 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Oct 25, 2011 at 12:53 PM, Ojan Vafai o...@chromium.org wrote:
  The new API is smaller and simpler. Less to implement and less for web
  developers to understand. If it can meet all our use-cases without
  significant performance problems, then it's a win and we should do it.
 
  For line-endings, you could have the Blob constructor also take an
 optional
  endings argument:
  new Blob(String|Array|Blob|ArrayBuffer data, [optional] String
 contentType,
  [optional] String endings);

 I believe (or at least, I maintain) that we're trying to do
 dictionaries for this sort of thing.  Multiple optional arguments are
 *horrible* unless they are truly, actually, order-dependent such that
 you wouldn't ever specify a later one without already specifying a
 former one.


I agree actually. So, it could be any of the following:
1. new Blob(data, [optional] options)
2. new Blob(options, data...)
3. new Blob([optional] dataAndOptions)

I don't feel strongly, but option 1 seems best to me since it allows simple
usages like 'new Blob(foo)'. On the other hand, option 2 lets you not have
to create an array to append multiple elements to the Blob.
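
A sketch of option 1 with a trailing dictionary; this is close to the shape that eventually shipped, but treat the details as illustrative:

// Data parts as an array, options as a dictionary.
var b = new Blob(["hello, ", "world"], {
  type: "text/plain",     // the contentType argument folded into options
  endings: "native"       // line-ending handling, per the earlier discussion
});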


Re: QSA, the problem with :scope, and naming

2011-10-25 Thread Ojan Vafai
On Tue, Oct 25, 2011 at 4:58 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Oct 25, 2011 at 4:56 PM, Ojan Vafai o...@chromium.org wrote:
  On Tue, Oct 25, 2011 at 4:44 PM, Bjoern Hoehrmann derhoe...@gmx.net
 wrote:
 
  * Tab Atkins Jr. wrote:
  Did you not understand my example?  el.find("+ foo, + bar") feels
  really weird and I don't like it.  I'm okay with a single selector
  starting with a combinator, like el.find("+ foo"), but not a selector
  list.

  Allowing "+ foo" but not "+ foo, + bar" would be really weird.

  Tab, what specifically is weird about el.find("+ foo, + bar")?

 Seeing a combinator immediately after a comma just seems weird to me.
 This may just be a personal prejudice.


With my web developer hat on, I would expect the selector list version to
just be a comma-separated list of any valid single selectors. We should
either allow it in both cases or neither IMO. My preference is to allow it.
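
Written out, the two cases in question (find was the proposed method name; the quoting is mine):

// A single relative selector starting with a combinator:
el.find("+ foo");
// A selector list where each member starts with a combinator -- the case
// that should be allowed if the single-selector form is:
el.find("+ foo, + bar");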

Ojan


Re: Is BlobBuilder needed?

2011-10-24 Thread Ojan Vafai
On Mon, Oct 24, 2011 at 3:52 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi everyone,

 It was pointed out to me on twitter that BlobBuilder can be replaced
 with simply making Blob constructable. I.e. the following code:

 var bb = new BlobBuilder();
 bb.append(blob1);
 bb.append(blob2);
 bb.append("some string");
 bb.append(myArrayBuffer);
 var b = bb.getBlob();

 would become

 b = new Blob([blob1, blob2, "some string", myArrayBuffer]);


I like this API. I think we should add it regardless of whether we get rid
of BlobBuilder. I'd slightly prefer saying that Blob takes varargs and relying
on ES6 fanciness to expand the array into varargs.

In theory, a BlobBuilder could be backed by a file on disk, no? The
advantage is that if you're building something very large, you don't
necessarily need to be using all that memory. You can imagine a UA having
Blobs be fully in-memory until they cross some size threshold.


 or look at it another way:

 var x = new BlobBuilder();
 becomes
 var x = [];

 x.append(y);
 becomes
 x.push(y);

 var b = x.getBlob();
 becomes
 var b = new Blob(x);

 So at worst there is a one-to-one mapping in code required to simply
 have |new Blob|. At best it requires much fewer lines if the page has
 several parts available at once.

 And we'd save a whole class since Blobs already exist.

 / Jonas




Re: Is BlobBuilder needed?

2011-10-24 Thread Ojan Vafai
On Mon, Oct 24, 2011 at 6:40 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Oct 24, 2011 at 4:46 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Mon, Oct 24, 2011 at 4:33 PM, Eric U er...@google.com wrote:
  The only things that this lacks that BlobBuilder has are the endings
  parameter for '\n' conversion in text and the content type.  The
  varargs constructor makes it awkward to pass in flags of any
  sort...any thoughts on how to do that cleanly?
 
  Easy.  The destructuring stuff proposed for ES lets you easily say things
 like:
 
  function(blobparts..., keywordargs) {
   // blobparts is an array of all but the last arg
   // keywordargs is the last arg
  }
 
  or even:
 
  function(blobparts..., {contenttype, lineendings}) {
   // blobparts is an array of all but the last arg
   // contenttype and lineendings are pulled from the
   // last arg, if it's an object with those properties
  }

 The problem is that if the caller has an array, because this is a
 constructor, this will get *very* awkward to do until ES6 is actually
 implemented.


We should pressure ecmascript to stabilize this part of ES6 early so that we
can implement it. That way we can start designing the APIs we want now
instead of almost the APIs we want. That said, I'm not opposed to the array
argument for now. We can always add the destructured version in addition
later.

On the topic of getting rid of BlobBuilder, do you have thoughts on losing
the ability to back it by an on-disk file?


 You can't simply do:

 new Blob.apply(blobarray.concat("text/plain"));

 I *think* this is what you'd have to do in a ES5 compliant engine:

 new Blob.bind([null].concat(blobarray, "text/plain"));

 In ES3 I don't even think that there's a way to do it. Though that
 might not matter assuming everyone gets .bind correctly implemented
 before they implement |new Blob|.

 I don't think the complexity is worth it for a dubious gain. I.e. it's
 not entirely clear to me that the following:

 new Blob(blob1, blob2, mybuffer, blob3, somestring, "text/plain");


Could we make the first argument be the contenttype? That makes the vararg
version work better. As it is, "text/plain" could, theoretically, be part of
the content. I guess that's an argument for only doing the array version.
Then the contenttype could come second and be optional. As I said, I don't
feel strongly about this.


 is significantly better than

 new Blob([blob1, blob2, mybuffer, blob3, somestring], "text/plain");


 / Jonas




Re: Is BlobBuilder needed?

2011-10-24 Thread Ojan Vafai
On Mon, Oct 24, 2011 at 7:49 PM, Erik Arvidsson a...@chromium.org wrote:

 On Mon, Oct 24, 2011 at 19:23, Jonas Sicking jo...@sicking.cc wrote:
  On the topic of getting rid of BlobBuilder, do you have thoughts on
 losing
  the ability to back it by an on-disk file?
 
  I'm not sure I understand the problem. A Blob can also be backed by a
  on-disk file.
 
  Could you elaborate?

 I think the point is that with the old one you could generate lots of
 data, add that to the blob, generate a lot more data and add that to
 the blob. After every add it might be safe to gc that data. With this
 proposal all that data needs to be in memory at the point of
 construction.

 Could we add a concat like method to Blob that returns a new larger blob?


If concat also took an array and/or varargs, then I'd be happy with this and
getting rid of BlobBuilder.


 var bb = new BlobBuilder();
 bb.append(data);
 bb.append(moreData);
 var b = bb.getBlob();

 var b = new Blob();
 b = b.concat(data);
 b = b.concat(moreData);

 --
 erik



Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Ojan Vafai
Overall, I wholeheartedly support the proposal.

I don't really see the benefit of allowing starting with a combinator. I
think it's a rare case that you actually care about the scope element and in
those cases, using :scope is fine. Instead of element.findAll("> div >
.thinger"), you use element.findAll(":scope > div > .thinger"). That said, I
don't object to considering the :scope implied if the selector starts with a
combinator.
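
A sketch of the two spellings being compared; findAll was the proposed name, and querySelectorAll with an explicit :scope is the closest thing that exists:

// Explicit :scope, fine for the rare cases that actually need it:
element.querySelectorAll(":scope > div > .thinger");
// Implied scope, where a selector starting with a combinator would mean the same:
//   element.findAll("> div > .thinger");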

On Tue, Oct 18, 2011 at 6:15 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 10/18/11 7:38 PM, Alex Russell wrote:

 The resolution I think is most natural is to split on ,


 That fails with :any, with the expanded :not syntax, on attr selectors,
 etc.

 You can split on ',' while observing proper paren and quote nesting, but
 that can get pretty complicated.


Can we define it as a sequence of selectors and be done with it? That way it
can be defined as using the same parsing as CSS.


 A minor point is how the
 items in the returned flattened list are ordered (document order? the
 natural result of concat()?).


 Document order.


Definitely.


 -Boris





Re: before/after editaction

2011-10-13 Thread Ojan Vafai
On Tue, Aug 30, 2011 at 6:39 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Aug 30, 2011 at 5:07 PM, Ryosuke Niwa rn...@webkit.org wrote:
  On Tue, Aug 30, 2011 at 4:34 PM, Darin Adler da...@apple.com wrote:
 
  My question was not about the undo command. I meant that if I
 implemented
  a handler for the aftereditaction event that changed b tags to strong
 tags,
  how would the undo machinery undo what I had done?
 
  Ah, I see.  So UA won't be able to undo/redo those DOM changes as of now.
   However, authors can use UndoManager and transactions (see
  http://rniwa.com/editing/undomanager.html) to hook into UA's undo
 manager.
   e.g. if authors wanted UAs to manage undo/redo for those changes, they
 can
  make those changes as a managed transaction.

 My suggestion during the above mentioned Toronto meeting was that the
 UA starts a managed transaction before firing the beforeEditAction
 event.  This transaction would record any changes done during the
 beforeEditAction and the afterEditAction event. It would also record
 the changes done by the UA itself as the actual edit action.

 This way, and changes done during the beforeEditAction and
 afterEditAction events would be automatically part of the undo-able
 action created in response to the command. So if the page cancels the
 beforeEditAction in response to a bold command, and instead inserts
 strong elements, then this modification would be undone if the user
 or the page initiates an undo.

 I think this proposal got minuted for the first day.

 The one thing that this doesn't solve is what to do in cases when the
 page wants to replace the current action with a manual transaction
 rather than the managed one usually created. Ideas for how to solve
 that are welcome.


Overall I really like the proposal (both having the events and Jonas's
addition to include them in the undo transaction). We'd fire the
afterEditAction exactly everywhere we currently fire the input event though.
Instead of adding two new events, could we instead add a beforeInput event
as the beforeEditAction? Then add to both beforeInput and input an action
property that is the edit action that was taken.*
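
A sketch of what that could look like from script; the names mirror this suggestion rather than any spec of the time (a similar beforeinput/input pair with an inputType property shipped much later as Input Events), and "editable" stands for whatever contentEditable element is being watched:

editable.addEventListener("beforeinput", function (e) {
  // e.inputType here stands in for the "action" property suggested above.
  if (e.inputType === "formatBold") {
    e.preventDefault();  // cancel the UA's default edit action
    // ...apply <strong> instead of <b>, record it via the undo machinery, etc.
  }
});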

The only downside I see to reusing input is that it might complicate Jonas's
suggestion to include edits during the afterEditAction in the undo
transaction. Although, my intuition is that there is *not* web content that
depends on script executed during the input event not entering the undo
stack.

Ojan

* Those of you who read www-dom will remember I proposed this a long time
ago to that working group. At the time, I coupled with this that we should
kill the textInput event as well since beforeInput would be a super-set.
There wasn't really opposition to adding beforeInput/input except that some
felt the DOM Events spec was too far along to add new features and some were
really attached to keeping textInput.


Re: Mutation Observers: a replacement for DOM Mutation Events

2011-09-30 Thread Ojan Vafai
On Fri, Sep 30, 2011 at 12:40 PM, Ms2ger ms2...@gmail.com wrote:

 On 09/29/2011 04:32 PM, Doug Schepers wrote:

 Hi, Adam-

 I'm glad to see some progress on a replacement for Mutation Events.

 Would you be interested in being the editor for this spec? It's already
 in our charter, we just need someone to take it up. Olli has offered
 offlist to be a co-editor, so between the two of you, I think it would
 be pretty manageable.

 I'd be happy to help get you started.


 I repeat my objections to speccing this outside DOM4. I would, of course,
 welcome Olli or Adam to become co-editors if they would wish that.


I expect Adam and Rafael, the two people on the Google side most appropriate
to edit this spec, don't care either way. If the rest of the DOM4 editors
would like it in DOM4, and Olli is OK with that, then I'm sure we (Google)
would be OK with it as well.


Re: [editing] Using public-webapps for editing discussion

2011-09-13 Thread Ojan Vafai
I support this.

On Tue, Sep 13, 2011 at 1:30 PM, Ryosuke Niwa rn...@webkit.org wrote:

 I think it's a great idea to get your spec more attention in W3C community
 specially because some UA vendors don't participate in discussions on
 whatwg.

 - Ryosuke

 On Tue, Sep 13, 2011 at 1:27 PM, Aryeh Gregor a...@aryeh.name wrote:

 For the last several months, I was working on a new specification,
 which I hosted on aryeh.name.  Now we've created a new Community Group
 at the W3C to host it:

 http://aryeh.name/spec/editing/editing.html
 http://www.w3.org/community/editing/

 Things are still being worked out, but one issue is what mailing list
 to use for discussion.  I don't want to create new tiny mailing lists
 -- I think we should reuse some existing established list where the
 stakeholders are already present.  Previously I was using the whatwg
 list, but as a token of good faith toward the W3C, I'd prefer to
 switch to public-webapps, even though my spec is not a WebApps WG
 deliverable.  (If it ever does move to a REC track spec, though, which
 the Community Group process makes easy, it will undoubtedly be in the
 WebApps WG.)

 Does anyone object to using this list to discuss the editing spec?





Re: Mutation events replacement

2011-07-20 Thread Ojan Vafai
On Wed, Jul 20, 2011 at 10:30 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 07/20/2011 06:46 PM, Jonas Sicking wrote:

  Hence I'm leaning towards using the almost-asynchronous proposal for
 now. If we end up getting the feedback from people that use mutation
 events today that they won't be able to solve the same use cases, then
 we can consider using the synchronous notifications. However I think
 that it would be beneficial to try to go almost-async for now.


 I disagree.


 I had hoped for a bit more of an explanation than that ;-)

 Such as why do you not think that synchronous events will be a problem
 for web developers just like they have been for us?



 In practice synchronous events have been a problem to us because we
 are in C++, which is unsafe language. Web devs use JS.


In many cases, where you would have had a crash in C++, you would have a bug
and/or exception in JS. It's for exactly the same reason. Your code cannot
make assumptions about the state of the DOM because other code may have run
that changes it out from under you. A contrived example:

var firstChild = node.firstChild;
node.appendChild(randomNode); // Some mutation handler runs here that
                              // removes firstChild from the DOM.
alert(firstChild.parentNode.innerHTML); // An exception gets thrown because
                                        // firstChild.parentNode is now null.

You can easily imagine more complicated examples that you would easily hit
in the real world if there are multiple libraries acting on the same DOM.


 Web devs usually want something synchronous, like sync XHR
 (sync XHR has other problems not related to mutation handling).
 Synchronous is easier to understand.


 -Olli




 / Jonas







Re: Mutation events replacement

2011-07-05 Thread Ojan Vafai
On Tue, Jul 5, 2011 at 5:36 PM, Ryosuke Niwa rn...@webkit.org wrote:

On Tue, Jul 5, 2011 at 5:27 PM, Rafael Weinstein rafa...@google.com wrote:

 It seems like these are rarified enough cases that visual artifacts
 are acceptable collateral damage if you do this. [Put another way, if
 you care enough about the visual polish of your app that you will put
 energy into avoiding flickr, you probably aren't using alert and
 showModalDialog anyway].

 Also, it's up to the app when to do it, so it's entirely in its
 control (and thus avoid visual artifacts).


 Given that we don't provide an API to control paint in general, I'm not
 convinced that we should add such a requirement in the DOM mutation event
 spec.


Many of the use-cases for mutation events (e.g. model-driven views) are
poorly met if we don't give some assurances here.


  Note that this is a problem with both proposals. Work done in (at
 least some) mutation observers is delayed. If a sync paint occurs
 before it, it's work won't be reflected on the screen.


 Right.  Maybe we can add a note saying that the user agents are recommended
 not to paint before all mutation observers are called.  I don't think we
 should make this a requirement.


There may be a middle ground that isn't so hard for browser vendors to
implement interoperably. Can we require no repaint except in the presence of
a specific list of synchronous API calls? I'm sure that's too simplistic, but
I'm hoping someone with more experience can chime in with something that
might actually be a plausible requirement.




Re: Mutation events replacement

2011-07-04 Thread Ojan Vafai
Apologies in advance if my comment makes no sense. This is a long thread, I
tried to digest it all. :)

On Sat, Jul 2, 2011 at 7:07 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 That may be ok, if the use cases that incur this cost are rare and the
 common case can be better served by a different approach.

 Or put another way, if 1% of consumers want the full list because it makes
 them 4x faster and the other 99% don't want the full list, and the full list
 is 3x slower for the browser to build than just providing the information
 the 99% want, what's the right tradeoff?


I'm not sure there really is a performance tradeoff. I believe that the
proposal Rafael put forward should almost always be faster. Storing the list
of changes and doing a JS callback once, for nearly all use-cases, should be
faster than frequent, semi-synchronous callbacks.

The only bit that might be slower is what data you include in the mutation
list. I believe that all the data you'd need is cheap except for possibly
the following two:
-The index of the child that changed for ChildListChanged (is this actually
expensive?)
-The old value of an attribute/text node. I know this is expensive in
Gecko's engine at least.

I'd be fine with excluding that information by default, but having a flag
you pass at some point saying to include those. That way, only sites that
need it take the performance hit.
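
A sketch of that opt-in flag idea, written with names close to what MutationObserver eventually shipped with (old-value data is only recorded where the observer asks for it):

var observer = new MutationObserver(function (records) {
  records.forEach(function (r) {
    // r.type, r.target, r.addedNodes/removedNodes, r.previousSibling,
    // and r.oldValue (only when the corresponding *OldValue flag is set).
  });
});
observer.observe(document.body, {
  childList: true,
  subtree: true,
  attributes: true,
  attributeOldValue: true   // pay the old-value cost only when requested
});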


 The numbers above are made up, of course; it would be useful to have some
 hard data on the actual use cases.

 Maybe we need both sorts of APIs: one which generates a fine-grained change
 list and incurs a noticeable DOM mutation performance hit and one which
 batches changes more but doesn't slow the browser down as much...

 -Boris




Re: Mutation events replacement

2011-07-04 Thread Ojan Vafai
On Mon, Jul 4, 2011 at 10:16 AM, Adam Klein ad...@chromium.org wrote:

 On Mon, Jul 4, 2011 at 9:57 AM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
  On 07/04/2011 07:28 PM, Ojan Vafai wrote:
  Apologies in advance if my comment makes no sense. This is a long
  thread, I tried to digest it all. :)
 
  On Sat, Jul 2, 2011 at 7:07 AM, Boris Zbarsky bzbar...@mit.edu
  mailto:bzbar...@mit.edu wrote:
 
 That may be ok, if the use cases that incur this cost are rare and
 the common case can be better served by a different approach.
 
 Or put another way, if 1% of consumers want the full list because it
 makes them 4x faster and the other 99% don't want the full list, and
 the full list is 3x slower for the browser to build than just
 providing the information the 99% want, what's the right tradeoff?
 
  I'm not sure there really is a performance tradeoff. I believe that the
  proposal Rafael put forward should almost always be faster. Storing the
  list of changes and doing a JS callback once, for nearly all use-cases,
  should be faster than frequent, semi-synchronous callbacks.
 
  The only bit that might be slower is what data you include in the
  mutation list. I believe that all the data you'd need is cheap except
  for possibly the following two:
  -The index of the child that changed for ChildListChanged (is this
  actually expensive?)
 
  You may need more than just an index. element.innerHTML = null removes
  all the child nodes.
  And element.insertBefore(some_document_fragment, element.lastChild)
  may insert several child nodes.
  Depending on whether we want to get notified for each mutation
  or batch the mutations, simple index may or may not be enough.

 Would a node reference be better (nextSibling)?  Assuming the
 listeners have access to all inserted/removed nodes along the way,
 using another as an anchor seems like it would work properly (though
 the innerHTML case may need something special).


That sounds great to me. nextSibling seems sufficient.


  -The old value of an attribute/text node. I know this is expensive in
  Gecko's engine at least.
 
  Shouldn't be that slow.
 
  Mutation listener could easily
  implement old/new value handling itself, especially if it knows which
  attributes it is interested in.

 This only works if listeners don't care about intermediate values,
 since all they'll have access to is the last value they saw and the
 current value in the DOM. If it was set several times during a single
 mutation event (whether that be your or Rafael's definition of a
 transaction), they'll miss those in-between values.  Also, while
 this would be acceptable for some use cases, the editing/undo use case
 would need to keep values of all attributes at all nodes, which seems
 likely to be worse than having the UA take care of this.

  I'd be fine with excluding that information by default, but having a
  flag you pass at some point saying to include those. That way, only
  sites that need it take the performance hit.

 Given that different use cases seem to have wildly different
 requirements (some probably only care about one or two attributes
 while others care about the entire document), this approach to
 handling the availability of oldValue/newValue is appealing.

 - Adam



Re: paste events and HTML support - interest in exposing a DOM tree?

2011-05-06 Thread Ojan Vafai
On Tue, May 3, 2011 at 3:20 AM, Hallvord R. M. Steen hallv...@opera.com wrote:

 On Tue, 03 May 2011 07:10:10 +0900, João Eiras joao.ei...@gmail.com
 wrote:

  event.clipboardData.getDocumentFragment()

 which would return a parsed and when applicable sanitized view of any
 markup the implementation supports from the clipboard.


  This is already covered by doing x=createElement;x.innerHTML=foo;traverse
 x


 Of course it is. The point was simply to see if there was interest in
 possibly optimising away an extra serialize-parse roundtrip, if developers
 feel it would be more convenient to get the DOM right away rather than the
 markup.


This sounds good to me. There are good use-cases and I believe it would
be fairly straightforward to implement. What would getDocumentFragment do if
there is no text/html content on the clipboard? I'm OK with saying it just
returns null or undefined.
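
A sketch of how a page might use the proposed hook; getDocumentFragment is the hypothetical method from this thread, not a shipped API:

document.addEventListener("paste", function (e) {
  var frag = e.clipboardData.getDocumentFragment();  // hypothetical, per this thread
  if (frag == null) {
    return;  // no text/html on the clipboard, per the behavior suggested above
  }
  // ...inspect/sanitize frag further, then insert it at the selection.
});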

Ojan


Re: Improving DOM Traversal and DOM XPath

2011-04-25 Thread Ojan Vafai
On Mon, Apr 25, 2011 at 11:31 AM, Jonas Sicking jo...@sicking.cc wrote:

 First off is document.createTreeWalker and
 document.createNodeIterator. They have the same signature which
 currently is:

 document.createX(root, whatToShow, filter, entityReferenceExpansion);

 Given that entity references are being removed, we should simply
 remove the last argument. Note that this is a backwards compatible
 change since additional arguments to any DOM function are just ignored
 in all browsers I think. Additionally, I see no reason to keep the
 'filter' argument required as it's quite common to leave out.


FWIW, WebKit already has filter and entityReferenceExpansion as optional. I
expect there would be no opposition to making whatToShow optional as well.

 We could even make the whatToShow argument optional and default it to
 SHOW_ALL. Originally I was going to propose that we default it to
 SHOW_ELEMENTS as I had thought that that would be a common value,
 however it appears that SHOW_TEXT is as, if not more, commonly used.
 The downside to defaulting to SHOW_ALL is that people might use the
 default and then do filtering manually, which is slower than having
 the iterator/treewalker do the filtering.


I agree with everything here. I think SHOW_ALL is the default people would
expect.


 I'd like to give some DOM XPath a similar treatment. The following
 three functions could be simplified:

 XPathEvaluator.createExpression(expression, resolver);
 Here I think we can make the 'resolver' argument optional as
 namespaces are commonly not used on the web.

 XPathEvaluator.evaluate(expression, contextNode, resolver, type, result);
 We can make 'resolver', 'type' and 'result' optional. 'type' would
 default to ANY_TYPE(0) and the other two to null.

 XPathExpression.evaluate(contextNode, type, result);
 Here all but the first could be optional. The defaults would be the
 same as for XPathEvaluator.evaluate.


These are already all optional in WebKit. We could also make contextNode
optional by defaulting it to the document, no?
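
A sketch of the call sites with the proposed defaults in place (whatToShow/filter optional, resolver/type/result optional), which is how these APIs are commonly invoked today:

// Traversal: whatToShow defaults to SHOW_ALL, filter to null.
var walker = document.createTreeWalker(document.body);
var texts  = document.createNodeIterator(document.body, NodeFilter.SHOW_TEXT);

// XPath: resolver, type and result left at their defaults.
var links = document.evaluate("//a[@href]", document.body);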


 I'd like to make these changes to firefox, but first I wanted to hear
 what people here think.


I support this. As it is, I believe the verbosity of these methods hurts
their adoption.


 I know we don't have editors for the relevant
 specs, but I think we can make an informal decision that these changes
 sound good if people think they are.

 / Jonas




Re: [DOMCore] fire and dispatch

2011-03-02 Thread Ojan Vafai
On Thu, Mar 3, 2011 at 1:46 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/2/11 5:52 AM, Jonas Sicking wrote:

 I'm not quite sure what you mean by via JSON given that JSON is a
 serialization format.


 The idea would be to take the input object, sanitize it by doing

  obj = JSON.parse(JSON.serialize(obj));

 (which will get rid of crud like getters), and then work with it.


  1. Don't allow getters, i.e. if the object contains any getters, throw
 an exception


 This seems like the simplest solution.


Unless anyone has significant use-cases for getters, we could start with
this. I don't expect code to start depending on throwing an exception
here, so we can change it down the road if we need to.
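
For completeness, the JSON round-trip quoted above would look like this in practice (the actual method is JSON.stringify; getters are invoked once and flattened into plain values rather than rejected):

// Sketch of the "sanitize via JSON" alternative: a one-shot copy that drops
// anything JSON can't represent.
function sanitizeInit(init) {
  return JSON.parse(JSON.stringify(init));
}
var init = sanitizeInit({ clientX: 10, get clientY() { return 20; } });
// init.clientY === 20; the getter ran during stringify and only its value survived.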

Ojan


Re: [DOMCore] fire and dispatch

2011-03-01 Thread Ojan Vafai
On Tue, Mar 1, 2011 at 7:23 PM, Anne van Kesteren ann...@opera.com wrote:

 On Tue, 01 Mar 2011 09:00:27 +0100, Garrett Smith dhtmlkitc...@gmail.com
 wrote:

 Mouse.click(document.body, {clientX : 10});


 Yeah, that would be simpler. However, we do not really have this pattern
 anywhere in browser APIs and I believe last time we played with objects (for
 namespace support querySelector or some such) it was deemed problematic.


The Chromium extension APIs use this pattern and I think it's gone over well
in that space. For example, see chrome.contextMenus.create at
http://code.google.com/chrome/extensions/contextMenus.html. I don't see a
problem with beginning to introduce this into web APIs, but it would be a
departure from existing APIs.
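
A sketch of that dictionary style applied to events; this is roughly the shape event constructors later took, though at the time of this thread it was only a proposal:

var ev = new MouseEvent("click", {
  clientX: 10,
  bubbles: true,
  cancelable: true
});
document.body.dispatchEvent(ev);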


 An alternative would be I guess what Simon Pieters proposed some time ago.
 That we make event IDL attributes mutable before the event is dispatched.
 And that they would get readonly semantics on setting during dispatch (i.e.
 based on the dispatch flag).


This seems fine to me too.

Ojan


Re: CfC: publish a new Working Draft of DOM Core; comment deadline March 2

2011-02-23 Thread Ojan Vafai
I also support.

On Thu, Feb 24, 2011 at 11:28 AM, Jonas Sicking jo...@sicking.cc wrote:

 I support this.

 / Jonas

 On Wed, Feb 23, 2011 at 8:20 AM, Arthur Barstow art.bars...@nokia.com
 wrote:
  Anne and Ms2ger (representing Mozilla Foundation) have continued to work
 on
  the DOM Core spec and they propose publishing a new Working Draft of the
  spec:
 
http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html
 
  As such, this is a Call for Consensus (CfC) to publish a new WD of DOM
 Core.
  If you have any comments or concerns about this proposal, please send
 them
  to public-webapps by March 2 at the latest.
 
  As with all of our CfCs, positive response is preferred and encouraged
 and
  silence will be assumed to be agreement with the proposal.
 
  -Art Barstow
 
 
 




Re: clipboard events

2011-01-07 Thread Ojan Vafai
Thanks for working on this!

On Wed, Jan 5, 2011 at 2:41 PM, Ryosuke Niwa rn...@webkit.org wrote:

  If the cursor is in an editable element, the default action is to insert
 clipboard data in the most suitable format supported for the given context.

 In an editable context, the paste event's target property refers to the
 element that contains the start of the selection. In a non-editable
 document, the event is targeted at a node focused by clicking or by an
 interactive cursor. If the node that has focus is not a text node, the event
 is targeted at the BODY element.

 I'm not sure if it makes sense for the element to be the start of
 selection.  Why not just pass the root editable element?  The website
 already have access to the selection so it can get start/end of selection.
  Mentioning of clicking or cursor is unnecessary.  One can use keyboard to
 move focus / selection and copy / paste should still work.  I'm not sure why
 we're special-casing text node / non-text node.  Is there some reason to
 this?  (i.e. compatibility with Internet Explorer?)


Ditto on all points. Also, why does this clause not apply to the cut/copy
events as well? What does "start" mean here? Start in document order, e.g. as
opposed to the anchor/base of the selection?

 This .types spec is slightly different from the HTML5 DnD .types -
consistent enough to make sense to authors and implementers?

What's the benefit of being different? In general, the closer we can be to
HTML5 DnD, the easier it will be for everyone.

 Implementations are encouraged to also support text/html for dealing with
HTML-formatted data.

In an ideal world, implementations would support any string type. This would
allow for creating custom clipboard formats as well as common mime types
(e.g. image types). Do most operating systems support setting generic string
types on the clipboard?
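
A sketch of what supporting arbitrary string types would allow in a copy handler (whether the UA and OS accept the custom type is exactly the open question above):

document.addEventListener("copy", function (e) {
  e.clipboardData.setData("text/plain", "hello");
  e.clipboardData.setData("application/x-myapp", JSON.stringify({ id: 42 }));
  e.preventDefault();  // keep the UA from writing its default clipboard data
});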

Ojan


Re: A URL API

2010-09-21 Thread Ojan Vafai
How about setParameter(name, value...), which takes a variable number of values?
Alternately, it could take either a DOMString or an Array<DOMString> for the
value. I prefer the var_args.

Also, getParameterByName and getAllParametersByName seem unnecessarily
wordy. How about getParameter/getParameterAll to match
querySelector/querySelectorAll? Putting All at the end is admittedly
awkward, but this is the uncommon case, so I'm OK with it for making the
common case less wordy.
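
A sketch of how multi-valued parameters read with this kind of API; the names below are from URLSearchParams, which is roughly where this discussion ended up years later:

var url = new URL("https://example.com/search");
url.searchParams.append("q", "dom");
url.searchParams.append("tag", "css");
url.searchParams.append("tag", "selectors");
url.searchParams.get("q");        // "dom"
url.searchParams.getAll("tag");   // ["css", "selectors"]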

Ojan

On Tue, Sep 21, 2010 at 4:56 PM, Adam Barth w...@adambarth.com wrote:

 Ok.  I'm sold on having an API for constructing query parameters.
 Thoughts on what it should look like?  Here's what jQuery does:

 http://api.jquery.com/jQuery.get/

 Essentially, you supply a JSON object containing the parameters.  They
 also have some magical syntax for specifying multiple instances of the
 same parameter name.  I like the ease of supplying a JSON object, but
 I'm not in love with the magical syntax.  An alternative is to use two
 APIs, like we currently have for reading the parameter values.

 Adam


 On Mon, Sep 20, 2010 at 11:47 PM, Devdatta Akhawe dev.akh...@gmail.com
 wrote:
  or any webservice that likes to have lots of query parameters - Google
  Search for example.
 
  In general, why would you not want a robust way to make complicated
  queries - those who are making simple queries and prefer simple one
  liners can continue using it.
 
 
  On 20 September 2010 23:42, Darin Fisher da...@chromium.org wrote:
  On Mon, Sep 20, 2010 at 11:02 AM, Garrett Smith dhtmlkitc...@gmail.com
 
  wrote:
 
  On 9/20/10, Julian Reschke julian.resc...@gmx.de wrote:
   On 20.09.2010 18:56, Garrett Smith wrote:
  [...]
   Requests that don't have lot of parameters are often simple
 one-liners:
  
   url = "/getShipping/?zip=" + zip + "&pid=" + pid;
  
   That's exactly the kind of code that will fail once pid and zip
   contain things you don't expect.
  
   What XHRs have complicated URL with a lot of query parameters?
  
   What XHRs?
  
  IOW, what are the cases where an XHR instance wants to use a lot of
 query
  params?
 
 
  Probably when speaking to a HTTP server designed to take input from an
 HTML
  form.
  -Darin
 
 




Re: A URL API

2010-09-21 Thread Ojan Vafai
appendParameter/clearParameter seems fine to me.

On Wed, Sep 22, 2010 at 2:53 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 20, 2010 at 11:56 PM, Adam Barth w...@adambarth.com wrote:
  Ok.  I'm sold on having an API for constructing query parameters.
  Thoughts on what it should look like?  Here's what jQuery does:
 
  http://api.jquery.com/jQuery.get/
 
  Essentially, you supply a JSON object containing the parameters.  They
  also have some magical syntax for specifying multiple instances of the
  same parameter name.  I like the ease of supplying a JSON object, but
  I'm not in love with the magical syntax.  An alternative is to use two
  APIs, like we currently have for reading the parameter values.

 jQuery's syntax isn't magical - the example they give using the query
 param name of 'choices[]' is doing that because PHP requires a [] at
 the end of the query param name to signal it that you want multiple
 values.  It's opaque, though - you could just as easily have left off
 the '[]' and it would have worked the same.

 The switch is just whether you pass an array or a string (maybe they
 support numbers too?).

 I recommend the method be called append*, so you can use it both for
 first sets and later additions (this is particularly useful if you're
 just looping through some data).  This obviously would then need a
 clear functionality as well.

 ~TJ




Re: A URL API

2010-09-20 Thread Ojan Vafai
On Mon, Sep 20, 2010 at 4:27 PM, Adam Barth w...@adambarth.com wrote:

 On Sun, Sep 19, 2010 at 10:48 PM, Devdatta Akhawe dev.akh...@gmail.com
 wrote:
  1) There are now two methods for getting at the URL parameters.  The
 
  and none for setting them?

 That's correct.  Looking at various libraries, there seems to be much
 more interested in paring out query parameters than for constructing
 them.  One popular JavaScript library did have an API that took a
 dictionary and built a query string out of it.  I imagine most folks
 just use the HTML Form element.


That's not true of Google's Closure library:
http://closure-library.googlecode.com/svn/docs/class_goog_Uri.html (see the
set* methods).


Re: widget example of CORS and UMP

2010-05-14 Thread Ojan Vafai
On Fri, May 14, 2010 at 12:00 PM, Tyler Close tyler.cl...@gmail.com wrote:

 On Fri, May 14, 2010 at 11:27 AM, Dirk Pranke dpra...@chromium.org
 wrote:
  You are correct that it is possible to use CORS unsafely. It is possible
 to use
  UMP unsafely,

 Again, that is broken logic. It is possible to write unsafe code in
 C++, but it is also possible to write unsafe code in Java, so there's
 no security difference between the two languages. Please, this
 illogical argument needs to die.


This feels like a legal proceeding. Taken out of context, this sounds
illogical; in the context of the rest of the paragraph, Dirk's point makes
perfect sense. In the same way that CORS has security problems, so does UMP.

For example, I don't understand how UMP can ever work with GET requests.
Specifically, how do you deal with users sharing URLs with malicious
parties? Or is that not considered a problem?

Ojan


Re: Chromium's support for CORS and UMP

2010-05-12 Thread Ojan Vafai
On Mon, May 10, 2010 at 4:34 PM, Dirk Pranke dpra...@chromium.org wrote:

 3) UMP appears to be nearly a subset of CORS, and does have a lot of
 nice properties for security and simplicity. We support UMP and would
 like to see the syntax continue to be unified with CORS so that it is
 in fact a subset (I believe this is already happening). We also
 (mostly) support UMP being a separate spec so that web authors can
 read it without being bogged down by the additional complexity CORS
 offers. If there is a good editorial way to handle this in a single
 spec, that would probably be fine.


There turned out to be a good deal of mis-communication around this. My read
of the chromium-dev thread and the following public-webapps discussion is
that Chromium supports there being a document (call it a spec if you will)
for web developers that explains how to use the UMP subset of CORS and is
referenced from the CORS spec.

Ojan


Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Ojan Vafai
On Wed, May 12, 2010 at 9:01 AM, Tyler Close tyler.cl...@gmail.com wrote:

 In the general case, including many common cases, doing this
 validation is not feasible. The CORS specification should not be
 allowed to proceed through standardization without providing
 developers a robust solution to this problem.

 CORS is a new protocol and the WG has been made aware of the security
 issue before applications have become widely dependent upon it. The WG
 cannot responsibly proceed with CORS as is.


Clearly there is a fundamental philosophical difference here. The end result
is pretty clear:
1. Every implementor except Caja is implementing CORS and prefers a unified
CORS/UMP spec.
2. Some implementors are unwilling to implement a separate UMP spec.

The same arguments have been hashed out multiple times. The above is not
going to change by talking through them again.

Blocking the CORS spec on principle is meaningless at this point. Even if
the spec were never officially standardized, it's shipping in browsers and
it's not going to be taken back.

Realistically, UMP's only hope of actually getting wide adoption is if it's
part of the CORS spec. Can you focus on improving CORS so that it addresses
your concerns as much as realistically possible?

Ojan


Re: UMP / CORS: Implementor Interest

2010-05-11 Thread Ojan Vafai
On Tue, May 11, 2010 at 11:17 AM, Tyler Close tyler.cl...@gmail.com wrote:

 On Tue, May 11, 2010 at 10:54 AM, Anne van Kesteren ann...@opera.com
 wrote:
  On Tue, 11 May 2010 19:48:57 +0200, Tyler Close tyler.cl...@gmail.com
  wrote:
  Firefox, Chrome and Caja have now all declared an interest in
  implementing UMP. Opera and Safari have both declared an interest in
  implementing the functionality defined in UMP under the name CORS. I


I would put Chrome in the same camp as Opera and Safari based off the
chromium-dev thread. Although, I think the distinction might lie in the
misunderstanding below.


  In the discussion on chromium-dev, Adam Barth wrote:
 
  
  Putting these together, it looks like we want a separate UMP
  specification for web developers and a combined CORS+UMP specification
  for user agent implementors.  Consequently, I think it makes sense for
  the working group to publish UMP separately from CORS but have all the
  user agent conformance requirements in the combined CORS+UMP document.
  

snip

  I think this is a satisfactory compromise and conclusion to the
  current debate. Anne, are you willing to adopt this strategy? If so, I
  think there needs to be a normative statement in the CORS spec that
  identifies the algorithms and corresponding inputs that implement UMP.
 
  I don't understand. As far as I can tell Adam suggests making UMP an
  authoring guide.

 I read Adam as saying the UMP specification should be published. The
 words authoring guide don't appear. I believe his reference to a
 benefit for web developers refers to an opinion expressed earlier in
 the thread that the UMP specification is more easily understood by web
 developers.


What is the difference between an authoring guide and a specification for
web developers? The key point of making this distinction is that
implementors should be able to look solely at the combined spec.

Ojan


Re: UMP / CORS: Implementor Interest

2010-04-21 Thread Ojan Vafai
On Wed, Apr 21, 2010 at 10:39 AM, Tyler Close tyler.cl...@gmail.com wrote:

 On Tue, Apr 20, 2010 at 10:07 PM, Anne van Kesteren ann...@opera.com
 wrote:
  On Wed, 21 Apr 2010 01:27:10 +0900, Tyler Close tyler.cl...@gmail.com
  wrote:
 
  Why can't it be made exactly like UMP? All of the requirements in UMP
  have been discussed at length and in great detail on this list by some
  highly qualified people. The current UMP spec reflects all of that
  discussion. By your own admission, the CORS spec has not received the
  same level of review for these features. Why hasn't CORS adopted the
  UMP solution?
 
  Because I've yet to receive detailed feedback / proposals on CORS on what
  needs changing. In another thread Maciej asked you whether you would like
 to
  file the appropriate bugs and the he would do so if you did not get
 around
  to it. I have not seen much since.

 The email you refer to listed several specific problems with CORS. As
 you've noted, Maciej agreed these were problems. Now you're telling us
 that as editor for the WG you have decided to ignore this detailed
 feedback because it is not yet filed as official Issues against CORS.
 Instead, you are choosing to ignore UMP and press ahead trying to gain
 implementer support for the mechanism defined in CORS, even though you
 know there are agreed problems with it.

 A different approach, would be to recognize the value of all the work
 and analysis the WG has put into UMP and so explore how CORS could
 reference and leverage this work. I am happy to collaborate with you
 on this task if you'd like to make the attempt.


I've been watching the CORS/UMP debate from the sidelines. Here's how it
looks to me:
1) UMP folk want to keep UMP a separate spec so that it can (theoretically)
be easier to implement and ship sooner.
2) Browser vendors intend to implement CORS. They don't want to have two
similar but slightly different stacks for making requests, either in
implementation or in what's exposed to developers. So, having UMP as a
separate spec doesn't make sense if it's intended to be a subset (or even
mostly a subset) of CORS. Mozilla might be willing to implement UMP with
some API modifications and Microsoft hasn't voiced an opinion.

Is that an accurate summary?

Are there other advantages to keeping UMP a separate spec other than
concerns of ship dates? Given the lack of vendor support, it doesn't seem
like ship date is really a good argument since the ship date is dependent
on vendors actually implementing this.

Ojan


Re: File API Feedback

2009-06-19 Thread Ojan Vafai
On Fri, Jun 19, 2009 at 2:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Jun 18, 2009 at 8:30 PM, Ian Hickson i...@hixie.ch wrote:
  On Fri, Jun 19, 2009 at 4:13 AM, Arun Ranganathan a...@mozilla.com
 wrote:
   Hixie, I think a Base64 representation of the file resource may be
   sufficient, particularly for the image use case (which is how it is
 used
   already).  Can you flesh out why the new schema is a good idea?


snip.../snip



 it would definitely be nice
 if you could display a preview of the file no matter how big the file
 was, but it seems like we can get very far without it.


What are the URL length limitations imposed by user agents?
A quick search does not show any hard limits outside of IE's ~2k
limit. Presumably
IE could be convinced to increase that for data URLs.

If the answer is 2k, then toDataURI is useless in practice and should be
dropped from the spec, even if we don't replace it with something else. If
the answer is 1GB, then at least it will be useful for the vast majority of
use cases (i.e. pictures, youtube-sized videos, etc).
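
Rough arithmetic behind that cutoff (my numbers, not from the thread): base64 inflates data by about 4/3, plus the data: prefix.

// A 1 MB image becomes roughly a 1.37 MB URL -- hopeless under a ~2k cap,
// comfortable under a 1 GB one.
var fileBytes = 1024 * 1024;
var urlLength = "data:image/png;base64,".length + Math.ceil(fileBytes / 3) * 4;
// urlLength comes out to about 1.4 million characters.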

Do we have any of this data for Gecko, Opera, WebKit?

Ojan