Re: [Custom Elements] They are globals.

2016-04-11 Thread Ryosuke Niwa

> On Apr 11, 2016, at 2:29 PM, /#!/JoePea  wrote:
> 
> What if custom elements can be registered on a shadow-root basis, so
> that the author of a Custom Element (one that isn't registered by
> default) can register a bunch of elements that it's shadow root will
> use, passing constructors that the author code may have imported from
> anywhere. Then that same author simply exports the custom element
> (does not register it) for a following author to use. The following
> author would import that custom element, then register it with the
> shadow roots of his/her new custom elements, and thus the cycle
> continues, with registered elements defined on shadow roots on a
> per-custom-element basis, without those elements ever being registered
> to some global registry.
> 
> ```js
> // library code
> import SomeClass from './somewhere'
> 
> export default class AuthorElement extends HTMLElement {
>   constructor() {
>     this.shadowRoot.defineElement(
>       'authors-should-call-this-anything-they-want',
>       SomeClass
>     )
>     // ...
>   }
> }
> ```

Now, let's say you do "new SomeClass", and SomeClass is defined as follows:

```js
class SomeClass extends HTMLElement {
  constructor() {
    super(); // (1)
  }
}
```

When HTMLElement's constructor is invoked in (1), it needs to construct an 
element by the name of "authors-should-call-this-anything-they-want".  However, 
it has no idea which shadow root or document the element belongs to.  The 
fundamental problem here is that every construction of a new element would now 
need to specify the shadow root or document for which it is created, so a 
simple "new SomeClass" would not work.  You'd have to instead write it as 
"new SomeClass(this.shadowRoot)", and then (1) would need to be modified to 
`super(...arguments)` to pass the argument along to the HTMLElement constructor.
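The context-threading requirement can be sketched with a small stand-in class. Note that `HTMLElementStub`, `fakeShadowRoot`, and the constructor argument are all hypothetical: the real HTMLElement constructor takes no such argument, which is exactly the point of the message above.

```javascript
// Toy sketch of the problem (runs outside a browser): HTMLElementStub
// stands in for an HTMLElement whose constructor must be told which
// shadow root's registry to consult. All names here are hypothetical.
class HTMLElementStub {
  constructor(context) {
    if (context === undefined) {
      // With per-shadow-root registries, a bare construction has no way
      // to pick the right registry.
      throw new TypeError('no shadow root or document supplied');
    }
    this.context = context;
  }
}

class SomeClass extends HTMLElementStub {
  constructor(...args) {
    super(...args); // must forward the context along, as described above
  }
}

let bareConstructionFailed = false;
try {
  new SomeClass(); // the plain "new SomeClass" case from the message
} catch (e) {
  bareConstructionFailed = true;
}

const fakeShadowRoot = { host: 'some AuthorElement' };
const element = new SomeClass(fakeShadowRoot); // explicit context works
```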

- R. Niwa




Re: [Custom Elements] They are globals.

2016-04-11 Thread Ryosuke Niwa

> On Apr 11, 2016, at 9:02 AM, /#!/JoePea  wrote:
> 
> Is it possible to take an approach more similar to React where Custom 
> Elements aren't registered in a global pool? What if two libraries we'd like 
> to use define elements of the same name, and we wish to import these via 
> `import` and not modify the source code of our dependencies?
> 
> I don't really see the solution yet (if any), since the browser needs to know 
> about the elements in order to make them work.
> 
> Any thoughts? Is a more encapsulated approach possible?

We discussed a similar issue related to having multiple documents per global 
object: https://github.com/w3c/webcomponents/issues/369 


The problem here is that the HTMLElement constructor, which is invoked via a 
super() call in a custom element constructor, cannot determine which set of 
custom elements to use because it doesn't get any contextual information about 
where the element is being constructed.

- R. Niwa



Re: [Custom Elements] More ES6-like API

2016-04-11 Thread Ryosuke Niwa
That's exactly what we're doing. The latest spec uses ES6 class constructor to 
define custom elements. See an example below this section in DOM spec: 
https://dom.spec.whatwg.org/#concept-element-custom-element-state

- R. Niwa

> On Apr 10, 2016, at 7:58 PM, /#!/JoePea  wrote:
> 
> It'd be nice if users could define actual constructors, as described here:
> 
> https://github.com/w3c/webcomponents/issues/423#issuecomment-208131046
> 
> Cheers!
> - Joe
> 
> /#!/JoePea
> 



Re: Telecon / meeting on first week of April for Web Components

2016-03-23 Thread Ryosuke Niwa

> On Mar 23, 2016, at 6:04 AM, Chaals McCathie Nevile  
> wrote:
> 
> On Tue, 22 Mar 2016 04:17:07 +0100, Hayato Ito  wrote:
> 
>> Either option is okay to me. I'll attend the meeting from Tokyo.
> 
> I'll attend from Europe. Is there a preferred day, and how long do you 
> anticipate this being?

I think the first or the second week of April would be good.  We just need to 
go through each issue and make sure we're on the same page, so I'd expect it to 
take 1-2 hours.

> Should we be trying to set up a day or so face to face as well? (And while 
> we're at it, do people expect to go to TPAC and want to meet there?)

I think meeting at TPAC makes sense.  I'm not certain there's enough interest 
in a F2F between April and TPAC, but I'm happy to attend if someone were to 
organize one.

- R. Niwa




Re: Telecon / meeting on first week of April for Web Components

2016-03-21 Thread Ryosuke Niwa
For people participating from Tokyo and Europe, would you prefer having it in 
early morning or late evening?

Because the Bay Area, Tokyo, and Europe are spread almost uniformly across 
time zones, our time-slot options are limited:
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160405=900=248=268
 


Can people in Tokyo participate in the meeting around midnight?

If so, we can schedule it at 3PM UTC, which is 8AM in the Bay Area, midnight in 
Tokyo, and 5PM in Europe.

Another option is 6AM UTC, which is 11PM in the Bay Area, 3PM in Tokyo, and 8AM 
in Europe.

- R. Niwa



Re: Meeting for selection API at TPAC?

2016-03-19 Thread Ryosuke Niwa

> On Mar 16, 2016, at 12:46 PM, Chaals McCathie Nevile <cha...@yandex-team.ru> 
> wrote:
> 
> On Wed, 16 Mar 2016 16:42:07 +0100, Ryosuke Niwa <rn...@apple.com> wrote:
> 
>> Thanks for the reply.
>> 
>> Léonie & Chaals, could we allocate a time slot to discuss selection?
> 
> Sure. Any preferences for minimising clashes? How long do you think you need?

Please avoid conflicts with any CSS / Perf / Editing meetings.  I'd imagine we 
need 1-2 hours.

- R. Niwa




Re: Meeting for selection API at TPAC?

2016-03-18 Thread Ryosuke Niwa
Thanks for the reply.

Léonie & Chaals, could we allocate a time slot to discuss selection?

> On Mar 15, 2016, at 8:33 PM, Yoshifumi Inoue <yo...@google.com> wrote:
> 
> Sorry for late response.
> 
> I would like to discuss following topics:
> 1. Selection for Shadow DOM
> 2. Multiple Range support
> 3. Resolution of open issues towards FPWD
> 
> - yosi
> 
> On Tue, Mar 15, 2016 at 9:21, Ryosuke Niwa <rn...@apple.com> wrote:
> Hi all,
> 
> Is there any interest in discussing selection API at TPAC?
> 
> There are 32 open issues on Github at the moment:
> https://github.com/w3c/selection-api/issues
> 
> - R. Niwa
> 
> 



Meeting for selection API at TPAC?

2016-03-14 Thread Ryosuke Niwa
Hi all,

Is there any interest in discussing selection API at TPAC?

There are 32 open issues on Github at the moment:
https://github.com/w3c/selection-api/issues

- R. Niwa




Telecon / meeting on first week of April for Web Components

2016-03-14 Thread Ryosuke Niwa
Hi all,

We've been making good progress on shadow DOM and the custom elements API, but 
there still seem to be a lot of open questions.  I've asked a couple of people 
involved in the discussion, and there seems to be interest in having another 
teleconference or an in-person meeting.

Can we schedule one in the first week of April (April 4th through 8th)?

- R. Niwa




Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-26 Thread Ryosuke Niwa

> On Feb 26, 2016, at 3:36 PM, Elliott Sprehn <espr...@chromium.org> wrote:
> 
> 
> 
> On Fri, Feb 26, 2016 at 3:31 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> 
>> > On Feb 26, 2016, at 3:22 PM, Elliott Sprehn <espr...@chromium.org> wrote:
>> >
>> >
>> >
>> > On Fri, Feb 26, 2016 at 3:09 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> >>
>> >> > On Feb 24, 2016, at 9:06 PM, Elliott Sprehn <espr...@chromium.org> 
>> >> > wrote:
>> >> >
>> >> > Can you give a code example of how this happens?
>> >>
>> >> For example, execCommand('Delete') would result in sequentially deleting 
>> >> nodes as needed.
>> >> During this compound operation, unload events may fire on iframes that 
>> >> got deleted by this operation.
>> >>
>> >> I would like components to be notified that they got removed/disconnected 
>> >> from the document
>> >> before such an event is getting fired.
>> >>
>> >
>> > I'd rather not do that, all the sync script inside editing operations is a 
>> > bug, and you shouldn't depend on the state of the world around you in 
>> > there anyway since all browsers disagree (ex. not all of them fire the 
>> > event sync).
>> 
>> I don't think that's a bug given that Safari, Chrome, and Gecko all fire the 
>> unload event before finishing the delete operation.  It's an interoperable 
>> behavior, which should be spec'ed.
> 
> Firefox's behavior of when to fire unload definitely doesn't match Chrome or 
> Safari, but maybe it does in this one instance. I don't think it's worth 
> trying to get consistency there though, unload is largely a bug, we should 
> add a new event and get people to stop using it.

I strongly disagree with that assessment and oppose adding a new event.

>> Anyway, this was just an easy example I could come up with.  There are many 
>> other examples that involve DOM mutation events if you'd prefer seeing those 
>> instead.
> 
> I'm not interested in making using mutation events easier.

And I'm not interested in pushing your agenda of deprecating mutation events 
either.

>> The fact of the matter is that we don't live in the future, and it's better 
>> for an API to be consistent in this imperfect world than for it to have weird 
>> edge cases.  As a matter of fact, if you end up being able to kill those sync 
>> events in the future, this will become a non-issue since the end-of-nano-task 
>> timing you (Google) proposed will happen before the dispatching of any event.
>> 
>> As things stand, however, we should dispatch lifecycle callbacks before 
>> dispatching these (legacy but compat-mandating) events.
> 
> I disagree. Mutation events are poorly speced and not interoperably 
> implemented across browsers. I don't think we should run nanotasks down there.

It doesn't matter how poorly things are spec'ed.  The reality is that unload 
events and DOM mutation events are required for Web compatibility and, as such, 
should be considered when designing new APIs.

Anyhow, I'm going to invoke lifecycle callbacks before dispatching events in 
WebKit for now since we can't seem to come to consensus on this matter.

- R. Niwa




Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-26 Thread Ryosuke Niwa

> On Feb 26, 2016, at 3:22 PM, Elliott Sprehn <espr...@chromium.org> wrote:
> 
> 
> 
> On Fri, Feb 26, 2016 at 3:09 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> 
>> > On Feb 24, 2016, at 9:06 PM, Elliott Sprehn <espr...@chromium.org> wrote:
>> >
>> > Can you give a code example of how this happens?
>> 
>> For example, execCommand('Delete') would result in sequentially deleting 
>> nodes as needed.
>> During this compound operation, unload events may fire on iframes that got 
>> deleted by this operation.
>> 
>> I would like components to be notified that they got removed/disconnected 
>> from the document
>> before such an event is getting fired.
>> 
> 
> I'd rather not do that, all the sync script inside editing operations is a 
> bug, and you shouldn't depend on the state of the world around you in there 
> anyway since all browsers disagree (ex. not all of them fire the event sync).

I don't think that's a bug given that Safari, Chrome, and Gecko all fire the 
unload event before finishing the delete operation.  It's an interoperable 
behavior, which should be spec'ed.

Anyway, this was just an easy example I could come up with.  There are many 
other examples that involve DOM mutation events if you'd prefer seeing those 
instead.

The fact of the matter is that we don't live in the future, and it's better for 
an API to be consistent in this imperfect world than for it to have weird edge 
cases.  As a matter of fact, if you end up being able to kill those sync events 
in the future, this will become a non-issue since the end-of-nano-task timing 
you (Google) proposed will happen before the dispatching of any event.

As things stand, however, we should dispatch lifecycle callbacks before 
dispatching these (legacy but compat-mandating) events.

- R. Niwa




Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-26 Thread Ryosuke Niwa

> On Feb 24, 2016, at 9:06 PM, Elliott Sprehn  wrote:
> 
> Can you give a code example of how this happens?

For example, execCommand('Delete') would result in sequentially deleting nodes 
as needed.
During this compound operation, unload events may fire on iframes that got 
deleted by this operation.

I would like components to be notified that they got removed/disconnected from 
the document
before such an event is getting fired.

```html
<script>
class ElementWithInDocumentFlag extends HTMLElement {
  constructor(...args) {
    super(...args);
    this.inDocument = false;
  }

  connectedWithDocument() {
    this.inDocument = true;
  }

  disconnectedWithDocument() {
    this.inDocument = false;
  }
}

document.defineElement('hello-world', ElementWithInDocumentFlag);
</script>

<div id="editor" contenteditable>
  <hello-world></hello-world>
  <iframe></iframe>
  sucks
</div>

<script>
var helloWorld = document.querySelector('hello-world');
var container = document.getElementById('editor');

setTimeout(function () {
  document.querySelector('iframe').contentWindow.onunload = function () {
    console.log(container.innerHTML); // This does not contain hello-world
    console.assert(!helloWorld.inDocument); // This should be false!
  }
  container.focus();
  getSelection().selectAllChildren(container);
  setTimeout(function () {
    document.execCommand('Delete');
  }, 500);
}, 500);
</script>
```

- R. Niwa




Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-24 Thread Ryosuke Niwa

> On Feb 23, 2016, at 1:16 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Tue, Feb 23, 2016 at 5:26 AM, Ryosuke Niwa <rn...@apple.com> wrote:
>> Hi,
>> 
>> We propose to change the lifecycle callback to be fired both before invoking 
>> author scripts (e.g. for dispatching events) and before returning to author 
>> scripts.
>> 
>> Without this change, event listeners that call custom elements' methods 
>> would end up seeing inconsistent states during compound DOM operation such 
>> as Range.extractContents and editing operations, and we would like to avoid 
>> that as much as possible.
> 
> These are the events we wanted to try and delay to dispatch around the
> same time lifecycle callbacks are supposed to be called?

Yeah, I'm talking about focus, unload, etc., and DOM mutation events.  It's 
possible that we can make all those events async in the future, but that's not 
the current state of the world, and we would like to keep custom elements' 
states consistent for authors.

- R. Niwa




Re: [custom-elements] Steps inside HTMLElement's constructor

2016-02-22 Thread Ryosuke Niwa

> On Feb 22, 2016, at 10:46 PM, Ryosuke Niwa <rn...@apple.com> wrote:
> 
> Here are steps to construct a custom element as agreed during Jan F2F as I 
> promised to write down [1] [2]:

There's a very appealing alternative to this, which doesn't involve having an 
element construction stack per definition.

We add an extra argument, let us call it exoticNewTarget, to the [[Construct]] 
internal method [7], which is initially Null.  More precisely, [[Construct]] 
now takes arguments (a List of any, Object, Object), where the third argument 
is a newly created exotic object.

Add a new Environment Records field, [[ExoticNewTarget]], which is either an 
Object or undefined.  If this Environment Record was created by the 
[[Construct]] internal method, [[ExoticNewTarget]] is the value of the 
exoticNewTarget parameter.  Otherwise, its value is undefined.

Add a new abstract operation GetExoticNewTarget(), which performs the following 
steps:
1. Let envRec be GetThisEnvironment().
2. Assert: envRec has an [[ExoticNewTarget]] field.
3. Return envRec.[[ExoticNewTarget]].

We also modify step 7 of runtime semantics of SuperCall from:
7. Let result be Construct(func, argList, newTarget).
to
7. Let result be Construct(func, argList, newTarget, GetExoticNewTarget()).

With these simple changes, we can simplify the algorithm as follows, and it 
would ALWAYS construct the right element:


== Custom Element Construction Algorithm ==

Input
 NAME, the custom element name.
 DOCUMENT, the owner document for new custom element.
 EXOTIC-TARGET, the target Element to be constructed / upgraded.
OUTPUT
 ELEMENT, new custom element instance.
 ERROR, could be either "None", "NotFound", "InvalidStateError", or an 
ECMAScript exception.

1. Let ERROR be "None".
2. Let REGISTRY be the (custom element) registry of DOCUMENT.
3. If DOCUMENT is an HTML document, let NAME be converted to ASCII lowercase.
4. Let DEFINITION be the element definition with the local name, NAME, in 
REGISTRY.
5. If there is no matching definition, set ERROR to "NotFound" and terminate 
these steps.
6. Invoke the [[Construct]] internal method [3] on the custom element 
interface, INTERFACE, of DEFINITION
   with (an empty list, INTERFACE, EXOTIC-TARGET).
7. If the [[Construct]] invocation resulted in an exception, set ERROR to the 
raised exception, and terminate these steps.
8. Otherwise, let ELEMENT be the result of the invocation.
9. If ELEMENT is not an instance of INTERFACE with the local name, NAME, set 
ERROR to "InvalidStateError", and terminate these steps.


== HTMLElement constructor ==

1. Let TARGET be GetNewTarget(). [4]
2. Let EXOTIC-TARGET be GetExoticNewTarget().
3. If EXOTIC-TARGET is not undefined, return EXOTIC-TARGET and terminate these 
steps.
4. Let DOCUMENT be the associated document of the global object (the result of 
GetGlobalObject() [5]).
5. Let REGISTRY be the (custom element) registry of DOCUMENT.
6. Let DEFINITION be the element definition with the element interface, TARGET, 
in REGISTRY.
7. If there is no matching definition, throw TypeError and terminate these 
steps.
8. Let NAME be the local name of DEFINITION.
9. Return a new element that implements HTMLElement, with no attributes, 
namespace set to the HTML namespace,
   local name set to NAME, and node document set to DOCUMENT.
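The plumbing above can be modeled with ordinary functions. This is only a simulation of the proposal, not real ECMAScript semantics, and the function names (htmlElementConstruct, constructCustomElement) are invented for illustration:

```javascript
// Simulation of the proposed exoticNewTarget plumbing. When an exotic
// new target is threaded through [[Construct]] and super(), the base
// constructor returns it; otherwise a fresh element is created.
function htmlElementConstruct(exoticNewTarget) {
  if (exoticNewTarget !== undefined) {
    return exoticNewTarget; // upgrade/parser case: reuse the exotic object
  }
  return { freshlyCreated: true }; // plain "new X" case
}

// The engine would pass EXOTIC-TARGET down through every super() call,
// so even a nested "new X" inside a constructor (which supplies no
// exotic target) resolves to a fresh element rather than the wrong one.
function constructCustomElement(exoticTarget) {
  return htmlElementConstruct(exoticTarget);
}

const beingUpgraded = {};
const upgraded = constructCustomElement(beingUpgraded); // === beingUpgraded
const fresh = constructCustomElement(undefined);        // a new element
```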

[7] http://www.ecma-international.org/ecma-262/6.0/#table-6
[8] 
http://www.ecma-international.org/ecma-262/6.0/#sec-super-keyword-runtime-semantics-evaluation




[custom-elements] Steps inside HTMLElement's constructor

2016-02-22 Thread Ryosuke Niwa
Hi all,

Here are steps to construct a custom element as agreed during Jan F2F as I 
promised to write down [1] [2]:

Modify http://w3c.github.io/webcomponents/spec/custom/#dfn-element-definition 
as follows:
The element definition describes a custom element and consists of:

 * custom element type,
 * local name,
 * namespace,
 * custom element interface,
 * lifecycle callbacks, and
 * element construction stack.

Each element construction stack is initially empty, and each entry is an 
instance of Element or an "AlreadyConstructed" marker.

Non-Normative Note: We need a stack per element definition to allow 
construction of other custom elements inside a custom element's constructor. 
Without such a stack per element definition, we would end up having to walk 
through the entries in the stack to find the "right" entry.  Implementors are 
free to take such an approach to minimize memory usage, etc., but there are a 
lot of edge cases that would need to be taken care of, and it's not a great 
way to spec interoperable behavior.


== Custom Element Construction Algorithm ==

Input
  NAME, the custom element name.
  DOCUMENT, the owner document for new custom element.
  EXOTIC-TARGET, the target Element to be constructed / upgraded.
OUTPUT
  ELEMENT, new custom element instance.
  ERROR, could be either "None", "NotFound", "InvalidStateError", or an 
ECMAScript exception.

1. Let ERROR be "None".
2. Let REGISTRY be the (custom element) registry of DOCUMENT.
3. If DOCUMENT is an HTML document, let NAME be converted to ASCII lowercase.
4. Let DEFINITION be the element definition with the local name, NAME, in 
REGISTRY.
5. If there is no matching definition, set ERROR to "NotFound" and terminate 
these steps.
6. Otherwise, push a new entry, EXOTIC-TARGET, to the element construction 
stack of DEFINITION.
7. Invoke the [[Construct]] internal method [3] on the custom element interface 
of DEFINITION.
8. Pop the entry from the element construction stack of DEFINITION.
9. If the [[Construct]] invocation resulted in an exception, set ERROR to the 
raised exception, and terminate these steps.
10. Otherwise, let ELEMENT be the result of the invocation.
11. If ELEMENT is not the same Object value as EXOTIC-TARGET, set ERROR to 
"InvalidStateError", and terminate these steps.

Non-Normative Note: we can modify step 4 to support non-HTML elements in the 
future. In step 11, ELEMENT can be different from EXOTIC-TARGET if the custom 
element's constructor instantiates another instance of the same custom element 
before calling super().


== HTMLElement constructor ==

Non-Normative Note: HTMLElement's constructor is called via super() call inside 
the custom element constructor.

1. Let TARGET be GetNewTarget(). [4]
2. Let DOCUMENT be the associated document of the global object (the result of 
GetGlobalObject() [5]).
3. Let REGISTRY be the (custom element) registry of DOCUMENT.
4. Let DEFINITION be the element definition with the element interface, TARGET, 
in REGISTRY.
5. If there is no matching definition, throw TypeError and terminate these 
steps.
6. Let NAME be the local name of DEFINITION.
7. If the element construction stack of DEFINITION is empty,
   1. Return a new element that implements HTMLElement, with no attributes, 
namespace set to the HTML namespace,
  local name set to NAME, and node document set to DOCUMENT.
8. Otherwise, let INSTANCE be the last entry in the element construction stack 
(i.e. in LIFO order).
9. If INSTANCE is an "AlreadyConstructed" marker, throw InvalidStateError and 
terminate these steps.
10. Otherwise, replace the last entry in the element construction stack with a 
"AlreadyConstructed" marker.
11. Return INSTANCE.

Non-Normative Note: step 7.1 is like step 4 in createElement [6] and happens 
when an author script instantiates a custom element without going through the 
DOM, e.g. "new X".  The checks in steps 9 and 10 are needed when an author's 
constructor invokes super() multiple times.  Step 9 alone is sufficient to make 
the Custom Element Construction Algorithm fail because that algorithm checks 
for an exception in its step 9.  Step 5 could throw NotSupportedError instead 
if people would prefer that.
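The stack discipline described above can be simulated in plain JavaScript. This is an illustrative model only, not spec text; makeDefinition, constructOrUpgrade, and htmlElementSuper are invented names standing in for engine internals:

```javascript
// Toy model of the per-definition construction stack.
const ALREADY_CONSTRUCTED = Symbol('AlreadyConstructed');

function makeDefinition(name) {
  return { name, constructionStack: [] };
}

// Steps 6-8 of the construction algorithm: push the exotic target,
// run the author's constructor, then pop.
function constructOrUpgrade(definition, exoticTarget, runConstructor) {
  definition.constructionStack.push(exoticTarget);
  try {
    return runConstructor();
  } finally {
    definition.constructionStack.pop();
  }
}

// What HTMLElement's constructor (reached via super()) does: the stack
// handling of steps 7-11 above.
function htmlElementSuper(definition) {
  const stack = definition.constructionStack;
  if (stack.length === 0) {
    return { fresh: true }; // "new X" without going through the DOM
  }
  const instance = stack[stack.length - 1];
  if (instance === ALREADY_CONSTRUCTED) {
    throw new Error('InvalidStateError: super() invoked twice');
  }
  stack[stack.length - 1] = ALREADY_CONSTRUCTED; // step 10
  return instance; // step 11
}

const def = makeDefinition('my-element');
const target = {};
const result = constructOrUpgrade(def, target, () => htmlElementSuper(def));
// result is the exotic target being upgraded; a second htmlElementSuper
// call within the same constructor run would throw InvalidStateError.
```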


[1] https://github.com/w3c/WebPlatformWG/blob/gh-pages/meetings/25janWC.md
[2] https://www.w3.org/2016/01/25-webapps-minutes.html
[3] 
http://www.ecma-international.org/ecma-262/6.0/#sec-ecmascript-function-objects-construct-argumentslist-newtarget
[4] http://www.ecma-international.org/ecma-262/6.0/#sec-getnewtarget
[5] http://www.ecma-international.org/ecma-262/6.0/#sec-getglobalobject
[6] https://dom.spec.whatwg.org/#dom-document-createelement


- R. Niwa




Re: TPAC 2016 - meetings

2016-02-22 Thread Ryosuke Niwa
I'd like to attend Web Perf WG's meeting so it would be ideal if any meetings 
held for Web Apps WG didn't overlap with those of Web Perf WG's.

> On Feb 10, 2016, at 4:34 AM, Chaals McCathie Nevile  
> wrote:
> 
> Dear all,
> 
> as you probably know, the W3C will hold its Technical Plenary meeting this 
> year in Lisbon, September 19-23.
> 
> Rather than meet for several days in plenary, with an hour or two for any 
> given topic we are considering an approach that gives more focused time to a 
> few important areas of work.
> 
> We propose people ask for time to work on a particular area, for example "a 
> day for editing and UI events", or "2 hours for File APIs".
> 
> If there are other topics you don't want to clash with, e.g. because of 
> significant overlap, please say so.
> 
> We will have to reserve the physical spaces by the end of March, which means 
> while we might be able to shuffle individual meetings a bit, we need to know 
> pretty soon what sort of meetings we should be trying to accommodate.
> 
> We anticipate having a plenary session as some part of the final day, which 
> will mostly be a quick wrap on what happened for each group that met, and a 
> quick run-down of all the other work we have - much as webapps has 
> traditionally done at the *start* of its TPAC meetings in the past.
> 
> As well as feedback on what meetings you think you will need, we of course 
> appreciate feedback on the plan itself.
> 
> cheers
> 
> Chaals, for the chairs
> 
> -- 
> Charles McCathie Nevile - web standards - CTO Office, Yandex
> cha...@yandex-team.ru - - - Find more at http://yandex.com
> 




[custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-22 Thread Ryosuke Niwa
Hi,

We propose to change the lifecycle callback to be fired both before invoking 
author scripts (e.g. for dispatching events) and before returning to author 
scripts.

Without this change, event listeners that call custom elements' methods would 
end up seeing inconsistent states during compound DOM operation such as 
Range.extractContents and editing operations, and we would like to avoid that 
as much as possible.

- R. Niwa




Apple's feedback for custom elements

2016-01-24 Thread Ryosuke Niwa


Hi all,

Here's WebKit team's feedback for custom elements.


== Constructor vs createdCallback ==
We would like to use constructor instead of created callback.

https://github.com/w3c/webcomponents/issues/139

At the meeting, we should discuss what happens when a constructor throws during 
parsing and inside various DOM functions.

Supporting upgrades with a constructor also poses the question of whether we 
would be using [[Call]] or [[Construct]] on the constructor.  In the case where 
we're using [[Construct]], associating the right call to `super()` will be 
difficult.  In the following example, there will be three calls to 
HTMLElement's constructor:

```js
class MyElement extends HTMLElement {
  constructor(notEvil) {
    var otherElement = notEvil ? null : new MyElement(true);
    var thisElement = super('my-element');
    var anotherElement = notEvil ? null : new MyElement(true);
    var r = Math.random();
    return r < 0.33 ? thisElement : (r < 0.66 ? otherElement : anotherElement);
  }
}
```


== Symbol-named properties for lifecycle hooks ==
After thorough consideration, we no longer think using symbols for callback 
names is a good idea.  The problem of name conflicts with an existing library 
seems theoretical at best, and library and framework authors shouldn't be using 
names such as "attributeChanged" for other purposes than as for the designated 
purpose of custom elements API.

In addition, forcing authors to write `[Element.attributeChanged]()` instead of 
`attributeChanged()` in this one API is inconsistent with the rest of the Web 
API.


== Calling attributeChanged for all attributes on creation ==
We think invoking `attributeChanged` for each attribute during creation will 
help mitigate the difference between the upgrade case and direct creation 
inside author script.

https://github.com/w3c/webcomponents/issues/364


== Lifecycle callback timing ==
We're fine with the end-of-nano-task timing, given the implementation 
difficulty of a fully synchronous model and the fact that a fully asynchronous 
model doesn't meet authors' expectations.


== Consistency problem ==
This is a problem, but we think calling the constructor before attributes and 
children are added during parsing is a good enough mitigation strategy.


== Attached/detached vs. inserted/removed hooks ==
Elements that define things or get used by other elements should probably do 
their work when they're inserted into a document.  E.g. HTMLBaseElement needs 
to modify the base URL of a document when it gets inserted.  To support this 
use case, we need callbacks for when an element is inserted into or removed 
from a document/shadow tree.

Once we have added such insertedIntoDocument/removedFromDocument callbacks, 
attached/detached seem rather arbitrary and unnecessary, as the author can 
easily check for the existence of a browsing context via `document.defaultView`.

We would not like to add generic callbacks (inserted/removed) for every 
insertion and removal due to performance reasons.

https://github.com/w3c/webcomponents/issues/362
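The `document.defaultView` check can be sketched as follows. The callback name insertedIntoDocument follows the proposal in this message (not any shipped API), and the stub class and fake document objects exist only so the sketch runs outside a browser:

```javascript
// Stub stands in for HTMLElement so this runs anywhere; in a browser the
// engine would supply ownerDocument and invoke the callback itself.
class HTMLElementStub {
  constructor(ownerDocument) {
    this.ownerDocument = ownerDocument;
  }
}

class MyWidget extends HTMLElementStub {
  insertedIntoDocument() {
    // "Attached" is derivable: a document has a browsing context iff its
    // defaultView is non-null, so no separate attached callback is needed.
    this.attached = this.ownerDocument.defaultView !== null;
  }
}

const renderedDocument = { defaultView: {} };  // has a browsing context
const inertDocument = { defaultView: null };   // e.g. a template document

const a = new MyWidget(renderedDocument);
a.insertedIntoDocument(); // a.attached is true
const b = new MyWidget(inertDocument);
b.insertedIntoDocument(); // b.attached is false
```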


== Style attribute spamming ==
Since keeping the old value is inherently expensive, we think we should 
mitigate this issue by adding attribute filtering.  We think having this 
callback is important because responding to attribute changes was the primary 
motivation for keeping the end-of-a-nano-task timing for lifecycle callbacks.

https://github.com/w3c/webcomponents/issues/350
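One possible shape of such a filter, sketched with plain functions. The API surface here is hypothetical (makeAttributeFilter is not a proposed name); it only illustrates how filtering avoids the cost of tracking unobserved attributes:

```javascript
// Only attributes in the filter list get old-value tracking and an
// attributeChanged callback; everything else (e.g. style spamming from
// animations) is dropped before any author code runs.
function makeAttributeFilter(observedAttributes, attributeChanged) {
  const observed = new Set(observedAttributes);
  return function notify(name, oldValue, newValue) {
    if (!observed.has(name)) return; // filtered: no callback, no old value kept
    attributeChanged(name, oldValue, newValue);
  };
}

const calls = [];
const notify = makeAttributeFilter(['value', 'disabled'],
  (name, oldValue, newValue) => calls.push([name, oldValue, newValue]));

notify('style', null, 'left: 10px');          // ignored by the filter
notify('style', 'left: 10px', 'left: 11px');  // ignored again
notify('value', null, '42');                  // delivered to the callback
// calls is now [['value', null, '42']]
```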


== childrenChanged callback ==
Given the consistency problem, it might be a good idea to add a 
`childrenChanged` callback to encourage authors to respond to that event 
instead of relying on children being present, if we decide to go with a 
non-synchronous construction/upgrading model.

On the other hand, many built-in elements that rely on children, such as 
`textarea`, tend to respond to all children at once.  So attaching a mutation 
observer and doing the work lazily might be an acceptable workflow.


== Upgrading order ==
We should do top-down in both parsing and upgrading since the parser needs to 
do it top-down.



== What happens when a custom element is adopted to another document? ==
Since a given custom element may not exist in the new document, retaining the 
prototype, etc. from the original document makes sense.  In addition, WebKit 
and Blink already exhibit this behavior, so we don't think it poses a major 
compat issue.


== Inheritance from subclasses of HTMLElement such as HTMLInputElement ==
We strongly oppose adding this feature, at least in v1.

https://github.com/w3c/webcomponents/issues/363


== Inheritance from SVGElement/MathMLElement ==
We don't want to allow custom SVG and MathML elements at least in v1.

https://github.com/w3c/webcomponents/issues/363


- R. Niwa



Re: [UIEvents] Keydown/keyup events during composition

2016-01-11 Thread Ryosuke Niwa

> On Jan 11, 2016, at 8:26 PM, Masayuki Nakano  wrote:
> 
> As far as I know, Gecko doesn't dispatch keydown nor keyup event for IME 
> unaware applications because JS changes something at keydown or keyup event 
> handler causes forcibly committing composition that may have caused IME 
> unavailable on such web pages.

I see. So not dispatching keydown/keyup during a composition was a mitigation 
strategy for this problem?

> However, there is an important thing is, which key value is proper value for 
> keyboard events during composition. For example, in Kana input mode of 
> Japanese IME, ASCII characters vs. Kana characters. Due to the platform 
> limitations, browsers cannot retrieve what character *will* be inputted into 
> the composition string.

Yeah, I don't think we'll be dispatching the final (composed) character's key 
code.

> At a meeting of UI Events (or D3E Events), we discussed this issue. IIRC, at 
> that time, we didn't find any proper value but browsers should *not* dispatch 
> keyboard events during composition (but allowing it for backward 
> compatibility for non-Gecko browsers, therefore, currently defined with MAY).

I don't think having such a big difference in IME behavior is desirable.  We 
should somehow find a way to interoperate in this regard.

Given that the compatibility risk of not firing keydown/keyup in all non-Gecko 
browsers is quite high, firing them in Gecko seemed like the best way forward.  
Having said that, I do see your rationale for not firing keydown/keyup during 
composition.

> I believe that web authors shouldn't handle keyboard events for text input. 
> I.e., they shouldn't need keyboard events during composition because such 
> applications cannot support handwriting and/or speech input systems. At 
> least, the spec should recommend that web authors handle "compositionupdate", 
> "beforeinput" and/or "input". Handling keydown/keyup events during composition 
> means that a keydown/keyup event causes a double action (the default action, 
> i.e., modifying the composition string, plus the web-app-specific action).

Makes sense to me. If authors are trying to detect "normal" key downs for 
arrow keys, etc., then they should not be listening to keydowns fired during 
composition.  If, on the other hand, they care about composition, then they 
should probably be listening to the compositionupdate event instead.
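To illustrate the pattern above, here is a minimal sketch of how an author might filter out keydown/keyup events that occur during composition by tracking `compositionstart`/`compositionend`. The `createCompositionFilter` helper and the plain event objects are hypothetical; in a page, these would be real events delivered via addEventListener.

```javascript
// Track composition state so "normal" keydown handling (e.g. arrow keys)
// can skip key events fired during IME composition. Events are represented
// as plain objects with a `type` field for illustration.
function createCompositionFilter() {
  let composing = false;
  return {
    // Returns true/false for key events (should it be treated as a normal
    // key press?), and null for composition events themselves.
    handleEvent(event) {
      if (event.type === 'compositionstart') composing = true;
      else if (event.type === 'compositionend') composing = false;
      if (event.type === 'keydown' || event.type === 'keyup') {
        return !composing;
      }
      return null;
    },
  };
}
```

In a real page, one listener object registered for all five event types could use this to ignore key events between `compositionstart` and `compositionend`.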

I'll go talk with my colleagues and get back to you.

- R. Niwa




Re: time at TPAC other than Wednesday?

2016-01-09 Thread Ryosuke Niwa

> On Jan 9, 2016, at 12:20 PM, Ryosuke Niwa <rn...@apple.com> wrote:
> 
> 
>> On Jan 8, 2016, at 7:12 PM, Johannes Wilm <johan...@fiduswriter.org> wrote:
>> 
>> On Sat, Jan 9, 2016 at 3:49 AM, Grisha Lyukshin <gl...@microsoft.com> wrote:
>>> Hello Johannes,
>>> 
>>> I was the one to organize the meeting. To make things clear, this was an ad 
>>> hoc meeting with the intent for the browsers to resolve any ambiguities and 
>>> questions on beforeInput spec, which we did. This was the reason I invited 
>>> representatives from each browser only.
>>> 
>> 
>> In so far as to clarify the questions you had at the last meeting that you 
>> needed to resolve with your individual teams, that you had indeed announced 
>> at the meeting that you would talk about --- I think that is fair enough. 
>> 
>> I am not 100% familiar with all processes of the W3C, but from what I can 
>> tell, I don't think you can treat it as having been a F2F meeting of this 
>> taskforce, but you can say that you had some informal talks with your and 
>> the other teams about this and now you come back to the taskforce with a 
>> proposal of how to resolve it.
>> 
>> Similarly, among JS editor developers we have been discussing informally 
>> about priorities and how we would like things to work. But those are 
>> informal meetings that cannot override the taskforce meetings.
> 
> Nobody said our F2F was of the task force.
> 
> Let me be blunt and say this.  I don't remember who nominated you to be the 
> editor of all these documents and who approved it.  If you want to talk about 
> the process, I'd like to start from there.
> 
>>> To your question about removing ContentEditable=”true”: the idea is to 
>>> consolidate multiple documents into a single editing specification 
>>> document. We wanted to remove ContentEditable=”true” because it had no 
>>> content there. So resolutions on CE=true from previous meetings remain 
>>> unchanged. There is no point in having an empty document floating on the 
>>> web. So yeah, we wanted to remove the draft that has no content. We will 
>>> merge Input Events and other ContentEditable specs into a single spec. We 
>>> didn’t really have any discussions on the execCommands spec.
>>> 
>> 
>> Yes, I don't think that part can reasonably be said to have been part of 
>> something you could resolve in a closed door, unannounced meeting among only 
>> browser vendors. 
>> 
>> Both the treatment of the various documents and especially 
>> contentEditable=True has been very controversial in this taskforce in the 
>> past, and I don't think you can just set aside all processes and consensus 
>> methods to change this.
>> 
>> So with all due respect, I don't think you can just delete it like that. 
>> Just as I cannot just delete part of the UI events spec because I have had a 
>> meeting with some people from TinyMCE and CKEditor and we decided we didn't 
>> like that part.
> 
> If the task force comes to a consensus that the document was useful, then we 
can just restore it.  The change was purely editorial in nature.  First 
> off, I don't remember when we agreed that we needed to have a separate spec 
> for contenteditable=true separate from Aryeh's document.  If you thought the 
> consensus of the last Paris F2F was to do that, then you either misunderstood 
> the meeting's conclusion or I didn't object in time.
> 
> As far as I'm concerned, this is about removing an empty document the task 
> force never agreed to add in the first place.

Now I realize my Github commit message was very misleading from your 
perspective.  I apologize for causing the confusion.  Nonetheless, we don't 
need a separate contenteditable=true document since that's clearly defined in 
the HTML5 spec as well as Aryeh's spec.

- R. Niwa




Re: time at TPAC other than Wednesday?

2016-01-09 Thread Ryosuke Niwa

> On Jan 9, 2016, at 6:18 AM, Florian Rivoal  wrote:
> 
>> On Jan 9, 2016, at 11:49, Grisha Lyukshin  wrote:
>> 
>> Hello Johannes,
>> 
>> I was the one to organize the meeting. To make things clear, this was an ad 
>> hoc meeting with the intent for the browsers to resolve any ambiguities and 
>> questions on beforeInput spec, which we did. This was the reason I invited 
>> representatives from each browser only.
> 
> Hello,
> 
> Thanks for providing some light on this. However, I must say I am extremely 
> surprised, to say the least.
> 
> Informal discussions between anybody is of course fine. But informal 
> discussions are about discussing, not deciding.

Nobody decided anything there.

> However, as far as I gather, this is a meeting where resolutions were made, 
> issues were resolved, a document was deleted by someone who is not its 
> editor, despite the editor (who was not invited to the meeting) protesting...

As I explained, the document I removed had no content other than the one 
already present in the HTML5 spec as well as Aryeh's contenteditable spec.  
Furthermore, I don't recall the task force deciding to publish this document in 
the first place.

If anything, I was astounded by the fact this document existed at all, and was 
published as if it were the consensus of the task force.  So the removal was 
purely editorial / administrative in nature.

This is precisely why I haven't made changes or sent PRs for the other things 
we discussed: that would require discussion in the task force per the W3C 
process.

If the rest of the participants disagree with that, then I can revert the 
change and bring the document back.

> Forgive me if I'm over reacting, but this doesn't sound like "consensus and 
> due process" to me.

There was no consensus of any sort since this was not an F2F of the task force.  
Removal of the document had NOTHING to do with us deciding anything behind 
closed doors.  I agree my commit message was extremely misleading and didn't 
provide adequate context about where this was discussed, so I apologize for 
that.

- R. Niwa




Re: time at TPAC other than Wednesday?

2016-01-09 Thread Ryosuke Niwa

> On Jan 8, 2016, at 7:12 PM, Johannes Wilm  wrote:
> 
> On Sat, Jan 9, 2016 at 3:49 AM, Grisha Lyukshin  wrote:
>> Hello Johannes,
>> 
>>  I was the one to organize the meeting. To make things clear, this was an ad 
>> hoc meeting with the intent for the browsers to resolve any ambiguities and 
>> questions on beforeInput spec, which we did. This was the reason I invited 
>> representatives from each browser only.
>> 
> 
> In so far as to clarify the questions you had at the last meeting that you 
> needed to resolve with your individual teams, that you had indeed announced 
> at the meeting that you would talk about --- I think that is fair enough. 
> 
> I am not 100% familiar with all processes of the W3C, but from what I can 
> tell, I don't think you can treat it as having been a F2F meeting of this 
> taskforce, but you can say that you had some informal talks with your and the 
> other teams about this and now you come back to the taskforce with a proposal 
> of how to resolve it.
> 
> Similarly, among JS editor developers we have been discussing informally 
> about priorities and how we would like things to work. But those are informal 
> meetings that cannot override the taskforce meetings.

Nobody said our F2F was of the task force.

Let me be blunt and say this.  I don't remember who nominated you to be the 
editor of all these documents and who approved it.  If you want to talk about 
the process, I'd like to start from there.

>> To your question about removing ContentEditable=”true”: the idea is to 
>> consolidate multiple documents into a single editing specification document. 
>> We wanted to remove ContentEditable=”true” because it had no content there. 
>> So resolutions on CE=true from previous meetings remain unchanged. There is 
>> no point in having an empty document floating on the web. So yeah, we wanted 
>> to remove the draft that has no content. We will merge Input Events and 
>> other ContentEditable specs into a single spec. We didn’t really have any 
>> discussions on the execCommands spec.
>> 
> 
> Yes, I don't think that part can reasonably be said to have been part of 
> something you could resolve in a closed door, unannounced meeting among only 
> browser vendors. 
> 
> Both the treatment of the various documents and especially 
> contentEditable=True has been very controversial in this taskforce in the 
> past, and I don't think you can just set aside all processes and consensus 
> methods to change this.
> 
> So with all due respect, I don't think you can just delete it like that. Just 
> as I cannot just delete part of the UI events spec because I have had a 
> meeting with some people from TinyMCE and CKEditor and we decided we didn't 
> like that part.

If the task force comes to a consensus that the document was useful, then we 
can just restore it.  The change was purely editorial in nature.  First 
off, I don't remember when we agreed that we needed to have a separate spec for 
contenteditable=true separate from Aryeh's document.  If you thought the 
consensus of the last Paris F2F was to do that, then you either misunderstood 
the meeting's conclusion or I didn't object in time.

As far as I'm concerned, this is about removing an empty document the task 
force never agreed to add in the first place.

- R. Niwa




[UIEvents] [Editing] Ordering of composition events and beforeinput

2016-01-09 Thread Ryosuke Niwa
Hi,

This is a feedback from multiple browser vendors (Apple, Google, Microsoft) 
that got together in Redmond last Thursday to discuss editing API and related 
events.

First off, we found out that there are behavior inconsistencies between 
browsers with respect to composition events.

WebKit, Blink, and Gecko all fire `compositionupdate` events before mutating 
the DOM whereas Edge and Trident both fire the event after mutating the DOM.  
We think the UIEvents spec should be updated to explicitly make the majority 
behavior (firing the event before mutating the DOM) standard.

Since this is a risky behavioral change for Edge/Trident, we think it's better 
to fire the `beforeinput` event prior to firing `compositionupdate`, instead of 
the other way around as defined in:
http://w3c.github.io/editing/input-events.html#events-inputevent-event-order

We also found that WebKit and Blink both mutate the DOM without firing 
`compositionupdate` at the end of composition on Mac.  Only `compositionend` is 
fired.  We think this is a bug in WebKit/Blink, especially since Blink fires 
`compositionupdate` before `compositionend` on Windows.  Therefore, we suggest 
that WebKit/Blink fix this bug, and standardize the behavior whereby 
`compositionupdate` is fired before mutating the DOM prior to firing 
`compositionend`.

Furthermore, the above change to always fire `compositionupdate` eliminates the 
necessity for firing `beforeinput` event prior to firing `compositionend` so we 
suggest we remove this from the input event spec:
http://w3c.github.io/editing/input-events.html#events-inputevent-event-order

That is, we fire the `beforeinput` event before, and only before, firing 
`compositionupdate`, which shall be fired before every DOM mutation initiated 
by input methods.
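The proposed ordering for each composition update can be sketched as a checker over a recorded trace of events and DOM mutations. This is an illustrative model, not spec text: event and mutation names are plain strings, and the trace shape (one `mutate` entry per IME-initiated DOM change) is an assumption.

```javascript
// For each IME update, the proposal is: beforeinput, then compositionupdate,
// then the DOM mutation. compositionend fires only after the last update.
const EXPECTED_UPDATE = ['beforeinput', 'compositionupdate', 'mutate'];

function isValidCompositionTrace(trace) {
  if (trace[0] !== 'compositionstart') return false;
  let i = 1;
  // Every group of three entries between start and end must match the
  // expected beforeinput -> compositionupdate -> mutate sequence.
  while (i < trace.length - 1) {
    if (trace.slice(i, i + 3).join() !== EXPECTED_UPDATE.join()) return false;
    i += 3;
  }
  return trace[trace.length - 1] === 'compositionend';
}
```

For example, a trace where `compositionupdate` precedes `beforeinput` would be rejected under this model.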

- R. Niwa




[UIEvents] Firing composition events for dead keys

2016-01-09 Thread Ryosuke Niwa
Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


We found out that all major browsers (Chrome, Firefox, and Safari) fire 
composition events for dead keys on Mac but they don't on Windows.  I think 
this difference comes from differences in the underlying platforms, but we 
think we should standardize it to always fire composition events for consistent 
behavior across platforms.

Does anyone know of any implementation limitation to doing this?  Or is there 
any reason we should not fire composition events for dead keys on Windows?

- R. Niwa




[UIEvents] [Editing] Moving input/beforeinput events into UI events

2016-01-09 Thread Ryosuke Niwa
Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


As we discussed various aspects of composition events and beforeinput/input 
events, it became apparent that we want both composition events and 
beforeinput/input events to be defined in a single spec since the ordering of 
these events as well as what happens when their default actions are canceled or 
event propagations are stopped are interdependent among them.

Thus, we think it's better to define the basic event interface as well as the 
timing at which these events are fired in UI events, and define `editType` and 
associated default editing actions in a separate editing spec.

That is, we suggest merging everything in:
http://w3c.github.io/editing/input-events.html
into
https://w3c.github.io/uievents/

except the parts that define the list of values for `editType` and `data` 
(since the content of `data` depends on the type of `editType`), and define the 
latter two in the editing spec that defines these properties' values as well as 
their behaviors:
http://w3c.github.io/editing/contentEditable.html


- R. Niwa




[UIEvents] Keydown/keyup events during composition

2016-01-09 Thread Ryosuke Niwa
Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


We've been informed that Gecko/Firefox does not fire keydown/keyup events 
during input method composition for each key stroke.  Could someone from 
Mozilla clarify why this is desirable behavior?

We think it's better to fire keydown/keyup events for consistency across 
browsers.  If anything authors can detect that a given keydown/keyup event is 
associated with input methods by listening to composition events as well.

- R. Niwa




[Editing] Adding `dataTransfer` to `InputEvent` interface

2016-01-09 Thread Ryosuke Niwa
Hi,

This is yet another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


It came to our attention that the `beforeinput` event fired for paste would 
need to expose HTML (or images, etc.) instead of plain text.  To expose that 
data, we suggest adding a `dataTransfer` property to the `InputEvent` interface 
for these non-plaintext contents.

As suggested in
https://lists.w3.org/Archives/Public/public-webapps/2016JanMar/0025.html

we suggest doing so in:
http://w3c.github.io/editing/contentEditable.html
as a partial interface definition for `InputEvent` which defines `data`, 
`dataTransfer`, and `editType` IDL attributes.
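A sketch of how a paste handler might use such a `dataTransfer` property, preferring richer flavors over plain text. The `pickFlavor` helper, its preference order, and the use of `editType` in the commented wiring are illustrative assumptions, not part of the proposal.

```javascript
// Hypothetical flavor preference for a paste handler; the order is an
// assumption for illustration.
const PREFERRED = ['text/html', 'image/png', 'text/plain'];

// Operates on any DataTransfer-like object exposing `types` (an array)
// and `getData(type)`.
function pickFlavor(dataTransfer) {
  for (const type of PREFERRED) {
    if (dataTransfer.types.includes(type)) {
      return { type, data: dataTransfer.getData(type) };
    }
  }
  return null;
}

// In a page, this might be wired up roughly as (sketch):
// element.addEventListener('beforeinput', (event) => {
//   if (event.editType === 'paste' && event.dataTransfer) {
//     const flavor = pickFlavor(event.dataTransfer);
//     // ... insert flavor.data, or cancel the default action
//   }
// });
```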

- R. Niwa




[Editing] [DOM] Adding static range API

2016-01-09 Thread Ryosuke Niwa
Hi,

This is yet another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.

For editing APIs, it's desirable to have a variant of Range that is immutable.  
For example, if apps were to create an undo stack of its own, then storing the 
selection state using Range would be problematic because those Ranges would get 
updated whenever DOM is mutated.  Furthermore, live ranges are expensive if 
browsers had to keep updating them as DOM is mutated.  This is analogous to how 
we're moving away from LiveNodeList/HTMLCollection to use StaticNodeList in 
various new DOM APIs.

So we came up with a proposal to add StaticRange: a static, immutable variant 
of Range defined as follows:

[Constructor,
 Exposed=Window]
interface StaticRange {
  readonly attribute Node startContainer;
  readonly attribute unsigned long startOffset;
  readonly attribute Node endContainer;
  readonly attribute unsigned long endOffset;
  readonly attribute boolean collapsed;
  readonly attribute Node commonAncestorContainer;

  const unsigned short START_TO_START = 0;
  const unsigned short START_TO_END = 1;
  const unsigned short END_TO_END = 2;
  const unsigned short END_TO_START = 3;
  short compareBoundaryPoints(unsigned short how, Range sourceRange);

  [NewObject] Range cloneRange();

  boolean isPointInRange(Node node, unsigned long offset);
  short comparePoint(Node node, unsigned long offset);

  boolean intersectsNode(Node node);
};

Along with range extensions from CSS OM view also added as follows:
https://drafts.csswg.org/cssom-view/#extensions-to-the-range-interface

partial interface StaticRange {
  [NewObject] sequence<DOMRect> getClientRects();
  [NewObject] DOMRect getBoundingClientRect();
};

with one difference, which is to throw an exception (perhaps 
InvalidStateError?) when the StaticRange's boundary points don't share a common 
ancestor, are not in a document, or have offsets that are out of bounds.
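The undo-stack motivation above can be sketched in plain JS: snapshot a live Range's boundary points into a frozen object, mirroring StaticRange's read-only attributes, so the stored state no longer tracks DOM mutations. `snapshotRange` is a hypothetical helper operating on any Range-like object.

```javascript
// Copy the four boundary fields from a Range-like object into a frozen
// plain object, analogous to StaticRange's readonly attributes. Unlike a
// live Range, this snapshot never updates when the DOM is mutated.
function snapshotRange(range) {
  return Object.freeze({
    startContainer: range.startContainer,
    startOffset: range.startOffset,
    endContainer: range.endContainer,
    endOffset: range.endOffset,
    // Collapsed iff both boundary points are identical.
    collapsed:
      range.startContainer === range.endContainer &&
      range.startOffset === range.endOffset,
  });
}
```

An app-managed undo stack could push such snapshots (e.g. from `document.getSelection().getRangeAt(0)`) without the browser paying the cost of keeping live Ranges up to date.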

- R. Niwa




Re: [UIEvents] Firing composition events for dead keys

2016-01-09 Thread Ryosuke Niwa

> On Jan 9, 2016, at 6:33 PM, Olli Pettay <o...@pettay.fi> wrote:
> 
> On 01/10/2016 01:14 AM, Ryosuke Niwa wrote:
>> Hi all,
>> 
>> This is another feedback from multiple browser vendors (Apple, Google, 
>> Microsoft) that got together in Redmond last Thursday to discuss editing API 
>> and related events.
>> 
>> 
>> We found out that all major browsers (Chrome, Firefox, and Safari) fire 
>> composition events for dead keys on Mac but they don't on Windows.  I think 
>> this difference comes from the underlying platform's difference but we think 
>> we should standardize it to always fire composition events for consistent 
>> behavior across platforms.
>> 
>> Does anyone know of any implementation limitation to do this?  Or are there 
>> any reason we should not fire composition events for dead keys on Windows?
>> 
> 
> Does anyone know the behavior on Linux.
> 
> What is the exact case you're talking about here? do you have a test case?

Sure. On Mac, you can enable International English keyboard and type ' key and 
then u.

On Mac:
1. Pressing ' key inserts ' (character) and fires `compositionstart` event.
2. Pressing u key replaces ' with ú and fires `compositionend`.


On Windows, a dead key doesn't insert any character at all, and pressing the 
second key inserts the composed character.

Looking at MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms646267(v=vs.85).aspx#_win32_Dead_Character_Messages

a dead key should issue WM_KEYDOWN as well as WM_DEADCHAR via TranslateMessage, 
so I don't think there is an inherent platform limitation preventing us from 
firing composition events.
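A minimal test-page logger for reproducing the dead-key case above: attach it to an editable element, type ' then u, and compare the recorded sequences per platform. This is a sketch; `formatEvent` is kept pure so the formatting logic can be exercised outside a browser.

```javascript
// Format one event for logging; composition events carry a `data` string
// (the current composition text), key events do not.
function formatEvent(event) {
  return event.data != null ? `${event.type}(${event.data})` : event.type;
}

// Record composition and key events on an element into `log` (an array).
// Only usable in a browser; included here to show the intended wiring.
function attachLogger(element, log) {
  for (const type of ['compositionstart', 'compositionupdate',
                      'compositionend', 'keydown', 'keyup']) {
    element.addEventListener(type, (e) => log.push(formatEvent(e)));
  }
}

// On Mac (International English layout), typing ' then u is expected to
// log compositionstart ... compositionend; on Windows today, no
// composition events fire for the dead key.
```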

- R. Niwa




Re: [Editing] [DOM] Adding static range API

2016-01-09 Thread Ryosuke Niwa

> On Jan 9, 2016, at 6:25 PM, Olli Pettay  wrote:
> 
> Hard to judge this proposal before seeing an API using StaticRange objects.
> 
> One thing though, if apps were to create an undo stack of their own, they 
> could easily have their own Range-like API implemented in JS. So if that is 
> the only use case, probably not worth to add anything to make the platform 
> more complicated. Especially since StaticRange API might be good for some 
> script library, but not for some other.

The idea is to return this new StaticRange object in `InputEvent` interface's 
`targetRanges` IDL attribute, and let Web apps hold onto them without incurring 
the cost of constant updates.

- R. Niwa




[selection-api] onselectstart/onselectionchange attributes

2016-01-09 Thread Ryosuke Niwa
Hi,

It looks like browsers don't agree on where `onselectstart` and 
`onselectionchange` IDL attributes should be defined:
https://github.com/w3c/selection-api/issues/54
https://github.com/w3c/selection-api/issues/60

In particular, Blink/WebKit/Trident all define onselectstart/onselectionchange 
on Document, not GlobalEventHandlers.  Gecko defines them on both.

Both Blink and WebKit support the onselectionchange content attribute on the 
body element (as far as I can recall, I implemented onselectionchange that way 
to match Trident's behavior at the time in 
http://trac.webkit.org/changeset/79208).

I think the cleanest approach is to add them to GlobalEventHandlers to be 
consistent with other events.  What do you guys think?
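The divergence above can be probed with a small feature-detection sketch. `findHandlerHosts` is a hypothetical helper that checks which candidate objects expose a given handler attribute; it works on arbitrary objects, so the probing logic itself can be checked without a browser, where you would pass `document`, `window`, and `document.body`.

```javascript
// Return the labels of candidate objects that expose the named handler
// IDL attribute (e.g. 'onselectionchange'). `candidates` maps a label to
// an object to probe with the `in` operator.
function findHandlerHosts(name, candidates) {
  return Object.keys(candidates).filter((label) => name in candidates[label]);
}

// In a page (sketch):
// findHandlerHosts('onselectionchange',
//                  { document, window, body: document.body });
// would report ['document'] in Blink/WebKit/Trident and include 'window'
// in Gecko, per the interop notes above.
```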

- R. Niwa




Re: time at TPAC other than Wednesday?

2016-01-09 Thread Ryosuke Niwa

> On Jan 9, 2016, at 4:11 PM, Chaals McCathie Nevile <cha...@yandex-team.ru> 
> wrote:
> 
> On Sat, 09 Jan 2016 23:20:27 +0300, Ryosuke Niwa <rn...@apple.com> wrote:
> 
>> 
>>> On Jan 8, 2016, at 7:12 PM, Johannes Wilm <johan...@fiduswriter.org> wrote:
>>> 
>>> On Sat, Jan 9, 2016 at 3:49 AM, Grisha Lyukshin <gl...@microsoft.com> wrote:
>>>> Hello Johannes,
>>>> 
>>>> I was the one to organize the meeting. To make things clear, this was an 
>>>> ad hoc meeting with the intent for the browsers to resolve any ambiguities 
>>>> and questions on beforeInput spec, which we did. This was the reason I 
>>>> invited representatives from each browser only.
>>>> 
>>> 
>>> In so far as to clarify the questions you had at the last meeting that you 
>>> needed to resolve with your individual teams, that you had indeed announced 
>>> at the meeting that you would talk about --- I think that is fair enough.
>>> 
>>> I am not 100% familiar with all processes of the W3C, but from what I can 
>>> tell, I don't think you can treat it as having been a F2F meeting of this 
>>> taskforce, but you can say that you had some informal talks with your and 
>>> the other teams about this and now you come back to the taskforce with a 
>>> proposal of how to resolve it.
>>> 
>>> Similarly, among JS editor developers we have been discussing informally 
>>> about priorities and how we would like things to work. But those are 
>>> informal meetings that cannot override the taskforce meetings.
>> 
>> Nobody said our F2F was of the task force.
>> 
>> Let me be blunt and say this.  I don't remember who nominated you to be the 
>> editor of all these documents and who approved it.  If you want to talk 
>> about the process, I'd like to start from there.
> 
> The chairs did. As per the W3C Process.

Huh, so chairs can appoint anyone as an editor and then that editor can 
introduce whatever document he/she pleases.

That must be some sort of a black joke.  I don't even know what the point of 
participating in any W3C standardization process is if chairs and editors can 
do whatever they please like that.

- R. Niwa




Re: Apple will host Re: Custom Elements meeting will be 25th Jan (not 29th)

2016-01-06 Thread Ryosuke Niwa

> On Jan 6, 2016, at 12:05 AM, Takayoshi Kochi (河内 隆仁) <ko...@google.com> wrote:
> 
> Is there any option to attend this remotely (telcon or video conference)?
> 
> On Wed, Dec 9, 2015 at 10:26, Ryosuke Niwa <rn...@apple.com> wrote:
>> 
>> > On Dec 8, 2015, at 2:55 AM, Chaals McCathie Nevile <cha...@yandex-team.ru> 
>> > wrote:
>> >
>> > On Mon, 07 Dec 2015 13:39:25 +1000, Chaals McCathie Nevile 
>> > <cha...@yandex-team.ru> wrote:
>> >
>> >> we are trying to shift the date of the Custom Elements meeting to *25* 
>> >> Jan, from the previously proposed date of 29th.
>> >>
>> >> We are currently looking for a host in the Bay area - offers gratefully 
>> >> received.
>> >
>> > Apple have kindly agreed to host the meeting, so it will be at 1 Infinite 
>> > Loop, Cupertino. I'll update the page shortly with logistics information.
>> >
>> > Note that if you are driving, you should allow an extra 10 minutes or so 
>> > for parking. Carpool!
>> 
>> Added logistics on
>> https://github.com/w3c/WebPlatformWG/blob/gh-pages/meetings/25janWC.md

The conference room has a video/telephone conference capability so we should be 
able to set up Webinars assuming someone from W3C can help us set it up.

- R. Niwa




Re: Apple will host Re: Custom Elements meeting will be 25th Jan (not 29th)

2015-12-08 Thread Ryosuke Niwa

> On Dec 8, 2015, at 2:55 AM, Chaals McCathie Nevile  
> wrote:
> 
> On Mon, 07 Dec 2015 13:39:25 +1000, Chaals McCathie Nevile 
>  wrote:
> 
>> we are trying to shift the date of the Custom Elements meeting to *25* Jan, 
>> from the previously proposed date of 29th.
>> 
>> We are currently looking for a host in the Bay area - offers gratefully 
>> received.
> 
> Apple have kindly agreed to host the meeting, so it will be at 1 Infinite 
> Loop, Cupertino. I'll update the page shortly with logistics information.
> 
> Note that if you are driving, you should allow an extra 10 minutes or so for 
> parking. Carpool!

Added logistics on
https://github.com/w3c/WebPlatformWG/blob/gh-pages/meetings/25janWC.md

- R. Niwa




Re: [web components] proposed meetings: 15 dec / 29 jan

2015-11-13 Thread Ryosuke Niwa

> On Nov 13, 2015, at 8:08 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Fri, Nov 13, 2015 at 4:57 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> What outstanding problems are you thinking of?
> 
> Again, not I, but Hayato Ito raised these. I just happen to agree. He
> emailed this list on November 2:
> 
>  https://lists.w3.org/Archives/Public/public-webapps/2015OctDec/0149.html

Of the four issues listed there:

> On Nov 1, 2015, at 7:52 PM, Hayato Ito <hay...@chromium.org> wrote:
> 
> > 1. Clarify focus navigation
> > 2. Clarify selection behavior (at least make it interoperable to JS)
> > 3. Decide on the style cascading order
> > 4. style inheritance (https://github.com/w3c/webcomponents/issues/314)

I'd like to resolve 3 and 4 ASAP since we're pretty close.  I don't think we 
necessarily need an in-person meeting to do that though.  If anything, it's 
probably more helpful to have discussion over mailing lists so that each person 
can spend as much time as needed to understand / come up with examples and 
proposals.

For 1, during TPAC we reached a rough consensus on how the current focus model 
(in particular, focus ordering) should be changed to support shadow DOM.  
Namely, we would create a new tab-index "scope" at both shadow roots and slots, 
and follow the composed tree order for tab ordering.  I think someone should 
document that first.

I just started working on 2, but I probably won't have time to come up with a 
proposal until mid-December.


So if you guys can't make it, I don't think we necessarily need a shadow DOM 
meeting in December.

- R. Niwa




Re: [web components] proposed meetings: 15 dec / 29 jan

2015-11-13 Thread Ryosuke Niwa

> On Nov 13, 2015, at 3:07 AM, Anne van Kesteren  wrote:
> 
> On Fri, Nov 13, 2015 at 11:32 AM, Chaals McCathie Nevile
>  wrote:
>> Our proposal is to look for a host on 15 December on the West Coast, for a
>> meeting primarily focused on Shadow DOM, and another on 29 January in the
>> Bay area for one around Custom Elements. The agenda can be adjusted to take
>> account of people who are unable to travel for both of these, moving items
>> from one to the other if necessary, since *many* but *not all* people are
>> interested in both topics.
> 
> As mentioned before, I and others from Mozilla not in the US are
> unlikely to make December 15. There's a big Mozilla event the entire
> week before and I doubt folks want to keep trucking. But given how far
> along Shadow DOM is perhaps attendance is not important. After all,
> Hayato Ito stressed in an earlier email that to make the meeting
> worthwhile we'd need some new proposals to address the quite hard
> outstanding problems and I haven't seen any of those thus far.

What outstanding problems are you thinking of?

- R. Niwa




Re: [Web Components] proposing a f2f...

2015-10-29 Thread Ryosuke Niwa
> 
> On Oct 29, 2015, at 9:47 AM, Chris Wilson  wrote:
> 
> Or host in Seattle.  :)
> 
> On Thu, Oct 29, 2015 at 9:20 AM, Travis Leithead 
>  wrote:
>> I would prefer a late January date so as to allow me to arrange travel. 
>> Otherwise, I’m happy to attend remotely anytime.

I'm okay with either option, with a slight preference for having it earlier 
since we didn't have much time to discuss custom elements at TPAC.

I would like to resolve the following issues for shadow DOM:
1. Clarify focus navigation
2. Clarify selection behavior (at least make it interoperable to JS)
3. Decide on the style cascading order

And the following issues for custom elements:
4. Construction model. Do we do sync / almost-sync / dance?
5. Do we support upgrading?  If we do, how?

Any other issues?

- R. Niwa




Re: Custom elements backing swap proposal

2015-10-24 Thread Ryosuke Niwa

> On Oct 24, 2015, at 9:55 AM, Elliott Sprehn  wrote:
> 
> I've been thinking about ways to make custom elements violate the consistency 
> principle less often and had a pretty awesome idea recently. 
> 
> Unfortunately I won't be at TPAC, but I'd like to discuss this idea in 
> person. Can we setup a custom element discussion later in the year?

Certainly. I won't be back in the states until 11/8 and both Blink and WebKit 
are having their contributor's meetings in the following week so how about the 
week of November 16th?

If not, the first and the second weeks of December also works.

> The current  "synchronous in the parser" model doesn’t feel good to me 
> because cloneNode() and upgrades are still async,

I've been making a prototype of the custom elements API in which cloneNode and 
all other internal construction of elements is sync, so only upgrading is async 
in our design.

> and I fear other things (ex. parser in editing, innerHTML) may need to be as 
> well.

Not in our design.

> So, while we've covered up the inconsistent state of the element in one place 
> (parser), we've left it to be a surprise in the others which seems worse than 
> just always being async. This led me to a crazy idea that would get us 
> consistency between all these cases:
> 
> What if we use a different pimpl object (C++ implementation object) when 
> running the constructor, and then move the children/shadows and attributes 
> after the fact? This element can be reused (as implementation detail), and 
> any attempt to append it to another element would throw.

This is not the first time this (or a similar) idea has been brought up.

The problem with this approach is that authors can still find the old 
non-temporary "pimpl" (where did this term come from?) via querySelectorAll, 
getElementsByTagName, etc... because it's still in the document. (If it's not 
in the document, then we would have the iframe reloading problem)

And when the constructors or unrelated code invoked by the constructor access 
the old "pimpl", we have two options:

1. Use the same wrapper as the one we're creating.  This is weird because the 
element as perceived by other DOM APIs such as querySelectorAll and the actual 
object behave differently.

2. Create a new wrapper. But what are we going to do with the new wrapper when 
we swap the "pimpl" back after calling the constructor?  Also, this would mean 
that the identity of elements will change.

There is yet another alternative: make all other DOM APIs behave as if these 
pre-construction custom elements don't exist. However, that is a much trickier 
thing to implement (especially without worsening performance) than making all 
construction sync, at least in WebKit.

- R. Niwa




Shadow DOM and SVG use elements

2015-10-22 Thread Ryosuke Niwa
Hi all,

What should happen when an SVG use element references an element (or its 
ancestor) with a shadow root?

Should the use element show the composed tree underneath it or ignore shadow 
DOM altogether?

I'm a little inclined towards the former (uses the composed tree).

- R. Niwa


Re: TPAC Topic: Using ES2015 Modules in HTML

2015-10-16 Thread Ryosuke Niwa

> On Oct 16, 2015, at 2:45 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Fri, Oct 16, 2015 at 5:39 AM, Ryosuke Niwa <rn...@apple.com> wrote:
>> Can we discuss how we can integrate ES2015 modules into HTML on Tuesday,
>> October 27th at TPAC?
> 
> Pretty sure Tuesday has been reserved for service workers discussion.

The wiki page says there will be a parallel meeting all day on Tuesday:
https://www.w3.org/wiki/index.php?title=Webapps/October2015Meeting 

CSS / Web Platform joint meeting is definitely happening at 3pm on Tuesday
so service worker meeting needs to finish before that in order not to overlap.

Regardless, we still prefer discussing ES2015 module integration on Tuesday 
morning.

- R. Niwa



TPAC Topic: Using ES2015 Modules in HTML

2015-10-15 Thread Ryosuke Niwa
Hi all,

Can we discuss how we can integrate ES2015 modules into HTML on Tuesday, 
October 27th at TPAC?

Both Gecko and WebKit are basically done implementing ES6 module support in 
their respective JavaScript engines but are blocked on 
http://whatwg.github.io/loader/ to support it in web content.

Since my colleague who is interested in this topic cannot attend TPAC in 
person, it would be great if we could have it on Tuesday morning (Monday 
evening in California).

- R. Niwa



Re: Proposal: CSS WG / WebApps Joint Meeting for Shadow DOM Styling

2015-09-30 Thread Ryosuke Niwa

> On Sep 29, 2015, at 8:19 AM, Alan Stearns <stea...@adobe.com> wrote:
> 
> On 9/28/15, 4:49 PM, "rn...@apple.com on behalf of Ryosuke Niwa" 
> <rn...@apple.com> wrote:
> 
> Chaals, Art,
> 
> Do you have a time preference for this? We’ve got one vote for Tuesday 
> afternoon, but I think Monday afternoon could work as well.
> 
> I see that the WebApps WG has a larger person-estimate on the schedule page - 
> if that translates to a larger room, the CSSWG people could troop over to 
> you. Or we could host interested people in the CSSWG room. What do you think?
> 
> Ryosuke,
> 
> We now have “Shadow DOM Styling” on the CSSWG agenda for TPAC [1]. It would 
> help if you could add more detail on the particular issues you’d like ironed 
> out.

Thanks!  I've added a tentative agenda at 
https://www.w3.org/wiki/Webapps/October2015Meeting#Potential_Agenda_Topics 
since I don't seem to have edit permissions on 
https://wiki.csswg.org/planning/tpac-2015

- R. Niwa



Proposal: CSS WG / WebApps Joint Meeting for Shadow DOM Styling

2015-09-28 Thread Ryosuke Niwa
Hi,

Attending the recent meeting for shadow DOM styling [1] convinced me to join 
CSS WG, and further that we need a joint meeting between CSS WG and WebApps WG 
on this topic during TPAC to iron out the details.

Can we have a joint meeting (of one or two hours) on Monday (10/26) or Tuesday 
(10/27) for this?

[1] https://www.w3.org/wiki/Webapps/WebComponentsSeptember2015Meeting

- R. Niwa




Re: Tests for new shadow DOM API

2015-09-03 Thread Ryosuke Niwa
I think many of them are still relevant.  The key problem I have at the moment 
is that I can't tell which ones are relevant and which ones aren't.  So I 
wanted to create a new directory and migrate or delete the existing tests over 
time.

> On Sep 3, 2015, at 1:19 PM, Travis Leithead  
> wrote:
> 
> Why not deprecate/remove the existing tests in the current folder structure? 
> Presumably we can replace them with new tests that are aligned with the 
> recent spec changes?
>  
> If the existing tests really aren’t relevant anymore, I don’t see a reason to 
> keep them around.
>  
> From: rn...@apple.com [mailto:rn...@apple.com] 
> Sent: Thursday, September 3, 2015 1:07 PM
> To: public-webapps 
> Subject: Tests for new shadow DOM API
>  
> Hi all,
>  
> Where should we put tests for new shadow DOM API?  It looks like the tests in 
> https://github.com/w3c/web-platform-tests/tree/master/shadow-dom/shadow-trees 
> are mostly obsolete and I'm not certain how many of them could be adopted 
> for the new API.
>  
> Would it make sense to rename this old one to "deprecated-shadow-don" and 
> then create a new top-level directory "shadow-dom"?  We can then migrate or 
> write new tests there.
>  
> - R. Niwa
>  



Re: Shadow DOM spec for v1 is ready to be reviewed

2015-09-01 Thread Ryosuke Niwa
Thanks for the update!

> On Aug 27, 2015, at 11:33 PM, Hayato Ito  wrote:
> 
> Let me post a quick update for the Shadow DOM spec: 
> https://w3c.github.io/webcomponents/spec/shadow/ 
> 
> 
> I've almost done the spec work for Shadow DOM v1. I think it's time to be 
> reviewed and get feedback. I hope that a browser vendor, including me, can 
> start to implement it based on the current spec.
> 
> You might want to use https://github.com/w3c/webcomponents/issues/289 
> to give me feedback. Please feel free to file a new issue if preferred.

I've filed a bunch of editorial issues.

One conceptual problem I have with the current spec is how it "unwraps" nested 
slots.  I thought we had a consensus not to do this at the F2F?  
https://github.com/w3c/webcomponents/issues/308 tracks this particular issue.

- R. Niwa



Re: PSA: publish WD of "WebIDL Level 1"

2015-09-01 Thread Ryosuke Niwa

> On Sep 1, 2015, at 7:27 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Tue, Sep 1, 2015 at 4:23 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> I think you’re missing the point.  The point of this documentation is to 
>> know exactly what the patch author was looking at when he wrote the patch.  
>> If there was a typo in the spec, that’s important information.
>> 
>> As for diff’ing what has changed, that’s exactly the use case.  In order to 
>> know what has changed, you need to know what the old spec was.  The living 
>> standard is a total nightmare as far as I’m concerned.
> 
> I guess it depends on your workflow. In any event, does what Domenic
> suggests and has implemented for https://streams.spec.whatwg.org/
> address your concern?

Yes!  It totally does.

- R. Niwa




Re: PSA: publish WD of "WebIDL Level 1"

2015-09-01 Thread Ryosuke Niwa

> On Aug 31, 2015, at 8:51 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> 
> On Tue, Sep 1, 2015 at 2:33 AM, Ryosuke Niwa <rn...@apple.com> wrote:
>> Let's say we implement some feature based on Web IDL published as of today.  
>> I'm going to refer to that in my source code commit message.  Future readers 
>> of my code have no idea what I was implementing when they look at my commit 
>> message in five years if it refers to the living standard that changes over 
>> time.
> 
> Apart from what Domenic said, IDs should remain stable over time and
> other than features getting expanded, they need to remain backwards
> compatible, just as your code base. (It also seems like useful
> information to know what you've implemented has been refactored or
> changed in some way in the corresponding standard, so you can take
> steps to update your code.)

I think you’re missing the point.  The point of this documentation is to know 
exactly what the patch author was looking at when he wrote the patch.  If there 
was a typo in the spec, that’s important information.

As for diff’ing what has changed, that’s exactly the use case.  In order to 
know what has changed, you need to know what the old spec was.  The living 
standard is a total nightmare as far as I’m concerned.

- R. Niwa




Re: PSA: publish WD of "WebIDL Level 1"

2015-08-31 Thread Ryosuke Niwa

> On Aug 7, 2015, at 9:27 AM, Anne van Kesteren  wrote:
> 
> On Fri, Aug 7, 2015 at 6:23 PM, Travis Leithead
>  wrote:
>> This is, at a minimum, incremental goodness. It's better than leaving the 
>> prior L1 published document around--which already tripped up a few folks on 
>> my team recently. I strongly +1 it.
> 
> If your team looks at the newer L1 they will also trip themselves up.
> Anything but https://heycam.github.io/webidl/ is problematic.

For our internal documentation purposes, I'd prefer having a permalink to a 
document that never changes.

Let's say we implement some feature based on Web IDL published as of today.  
I'm going to refer to that in my source code commit message.  Future readers of 
my code have no idea what I was implementing when they look at my commit 
message in five years if it refers to the living standard that changes over 
time.

- R. Niwa




Copying multi-range selection

2015-08-14 Thread Ryosuke Niwa
Hi all,

We've been recently exploring ways to select bidirectional text and content 
that uses new CSS layout modes such as flex box in a visually contiguous 
manner.

Because a visually contiguous range of content may not be contiguous in DOM 
order, doing so involves creating a disjoint multi-range selection.  There has 
been quite a bit of discussion about how we can better expose that to the Web 
since the current model of exposing a list of Range objects doesn't seem to be 
working well.

However, another important question I have is how copying such selected 
content would work.  Do we just stitch together the disjoint content?  But 
that may result in the content being pasted in a completely different order.
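
The reordering hazard can be sketched with a toy bidi example (the character 
mapping here is hand-written for illustration; real visual ordering is 
produced by the UA's bidi algorithm):

```js
// Toy model of a bidi run: the logical (DOM) string is "abc XYZ def" where
// "XYZ" is right-to-left text, so it displays as "abc ZYX def".
const dom = 'abc XYZ def';

// Hand-written map from visual position to DOM index (the RTL run "XYZ"
// appears reversed on screen).
const visualToDom = [0, 1, 2, 3, 6, 5, 4, 7, 8, 9, 10];

// A user visually selects "c ZY" (visual positions 2..5)...
const selectedDomIndices = [2, 3, 4, 5].map(v => visualToDom[v]); // [2,3,6,5]

// ...which corresponds to two disjoint DOM ranges: [2..3] and [5..6].
const sorted = [...selectedDomIndices].sort((a, b) => a - b);     // [2,3,5,6]

// Copying by stitching the ranges in DOM order yields "c YZ",
// not the "c ZY" the user saw on screen.
const copied = sorted.map(i => dom[i]).join('');
console.log(copied); // "c YZ"
```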

Can anyone from Mozilla share their experience with this?

- R. Niwa




Re: alternate view on constructors for custom elements

2015-07-17 Thread Ryosuke Niwa

 On Jul 17, 2015, at 1:14 PM, Travis Leithead travis.leith...@microsoft.com 
 wrote:
 
 From: Domenic Denicola [mailto:d...@domenic.me] 
 
 window.XFoo = document.registerElement(‘x-foo’, XFooStartup);
 
 Why is XFoo different from XFooStartup? If I define a method in XFooStartup, 
 does it exist in XFoo?
 
 This won't work as I described it, given what you've told me, but my 
 assumption was essentially that XFooStartup would act as if it didn't really 
 depend on HTMLElement for construction. So, its constructor wouldn't be 
 linked to the actual custom element creation. Therefore XFoo (the 
 platform-provided constructor function) is the thing that is actually used 
 to trigger construction, which would then result in the XFooStartup 
 constructor running. Basically, like this (reverting to non-class syntax):
 
 function XFooStartup(val1, val2) {
  this.prop = val1;
  this.prop2 = val2;
 }
 window.XFoo = document.registerElement(‘x-foo’, XFooStartup);
 
 all I was trying to express different from the current design is: 
 1) replacing the createdCallback with the function constructor (and passing 
 the new element instance as 'this' when calling it)
 2) passing through params from the XFoo platform-native constructor to the 
 XFooStartup function
 3) calling XFooStartup synchronously

We can do this without wrapping the author-supplied constructor.  In ES6/ES2015 
classes, the `this` variable is in the temporal dead zone (TDZ) until `super()` 
allocates it, and any attempt to access it before then throws a 
`ReferenceError`.  In other words, XFooStartup has no way of accessing the 
newly constructed object until `super()` has returned.  This in turn allows 
browser engines to create a native (C++) backing store for the HTML element 
inside HTMLElement’s constructor (or equivalent code that runs as part of the 
call to `super()` from the direct subclass of HTMLElement), since the newly 
constructed element (this) is never accessed until the topmost superclass' 
constructor (HTMLElement in this case) has been called.
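
A minimal plain-JS sketch of that TDZ behavior, using an ordinary base class 
as a stand-in for HTMLElement (all names here are illustrative):

```js
// `Base` stands in for HTMLElement's native constructor, which would
// allocate the C++ backing store.
class Base {
  constructor() {
    this.backingStore = 'allocated'; // the engine's chance to set things up
  }
}

class XFoo extends Base {
  constructor(val1, val2) {
    // `this` is in the temporal dead zone here; touching it would throw.
    super();          // allocates `this` via Base's constructor
    this.prop = val1; // only now is `this` accessible
    this.prop2 = val2;
  }
}

class Broken extends Base {
  constructor() {
    this.prop = 1; // ReferenceError: `this` accessed before super()
    super();
  }
}

console.log(new XFoo(1, 2).backingStore); // "allocated"
try { new Broken(); } catch (e) { console.log(e instanceof ReferenceError); } // true
```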

- R. Niwa




Re: Custom Elements: createdCallback cloning

2015-07-13 Thread Ryosuke Niwa

 On Jul 12, 2015, at 11:30 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Mon, Jul 13, 2015 at 1:10 AM, Dominic Cooney domin...@google.com wrote:
 Yes. I am trying to interpret this in the context of the esdiscuss thread
 you linked. I'm not sure I understand the problem with private state,
 actually. Private state is allocated for DOM wrappers in Chromium today
 (like Gecko), including Custom Elements; it's not a problem. DOM wrapper
 creation is controlled by the UA, which can arrange for allocating the
 slots.
 
 Sure, but this assumes elements will be backed by something other than
 JavaScript forever. Or at the very least that custom elements will
 always be able to do less than builtin elements.
 
 
 Is there a plan for author classes to be able to have private state or
 something?
 
 Yes, as discussed in that es-discuss thread.
 
 
 Thanks. I can understand how editing and Range.cloneContents would use
 cloning. How is it relevant that Range is depended on by Selection?
 Selection may delete things but it does not clone them.
 
 Editing operations operate on selections, but maybe I'm mistaken about
 that? Either way, you got the problem.

Editing operations use cloning heavily.  As counter-intuitive as it sounds, 
deleting a range of text also involves cloning elements in some cases.

- R. Niwa




Re: Components F2F

2015-07-02 Thread Ryosuke Niwa

 On Jun 30, 2015, at 2:55 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 Can someone update
 https://www.w3.org/wiki/Webapps/WebComponentsJuly2015Meeting with a
 bit more information? I hear it might be in Mountain View?

Is Google hosting this meeting as well?  Alternatively, would other browser 
vendors (e.g. Mozilla) be willing to host it this time?

Unfortunately, it's going to be really hard to reserve a big enough room at 
Apple :(  I'm hoping that'll change once we move to the new spaceship but we'll 
see...

 Will we have sufficient time to cover both Custom Elements and Shadow
 DOM? And could the drafts maybe be updated to cover what has been
 agreed to so far? E.g. it seems we have agreement on slots and I think
 that was the last major thing from Shadow DOM that needed a solution.
 Would be great if we could review a complete document.

I'm supposed to write up a document that summarizes what we've agreed on thus far.  
I've been busy with other stuff lately but will try to post it before the 
meeting.

- R. Niwa




Re: Custom Elements: is=

2015-07-02 Thread Ryosuke Niwa

 On Jun 13, 2015, at 4:49 PM, Léonie Watson lwat...@paciellogroup.com wrote:
 
 From: Bruce Lawson [mailto:bru...@opera.com] 
 Sent: 13 June 2015 16:34
 
 On 13 June 2015 at 15:30, Léonie Watson lwat...@paciellogroup.com wrote:
 why not use the extends= syntax you mentioned?
 
 my-button extends=button attributesPush/my-button
 
 because browsers that don't know about web components wouldn't pay any 
 attention to my-button, and render Push as plain text.
 
 Of course! I should have thought of that.

That's not entirely true.  If the implementation of my-button (let us call it 
MyButtonElement) had a prototype that extends HTMLButtonElement, then the 
browser could set role=button just fine.

 On Jun 13, 2015, at 5:41 PM, Patrick H. Lauke re...@splintered.co.uk wrote:
 
 On 13/06/2015 16:33, Bruce Lawson wrote:
 On 13 June 2015 at 15:30, Léonie Watson lwat...@paciellogroup.com wrote:
 why not use the extends= syntax you mentioned?
 
 my-button extends=button attributesPush/my-button
 
 because browsers that don't know about web components wouldn't pay any
 attention to my-button, and render Push as plain text.
 
  Browsers that don't know about web components will fall back to
 button with button
 this-is-made-much-more-marvellous-by=my-button (or whatever)
 
 However, this fallback will only really be useful for very simple cases, 
 where web components have been used to jazz up what essentially is still the 
 element that was extended. And, I would posit, any scripting that was done to 
 act on the all-singing, all-dancing new web component button (if it does 
 anything more than a regular button) would not work for the fallback. Unless 
 it's really just using web components for fancy styling (for instance having 
 a material design button that essentially still works just like a button) - 
 in which case, it makes more sense to work on stylability of standard 
 elements.


Precisely!  I've been saying that for the last two years.  It's so nice and 
refreshing to hear someone making the same argument :)  And we (Apple) would 
love to solve the stylability issue of form elements.

- R. Niwa




Re: Custom Elements: is=

2015-06-08 Thread Ryosuke Niwa

 On Jun 8, 2015, at 2:16 PM, Alice Boxhall aboxh...@google.com wrote:
 
 Did anyone have any further thoughts on this? My concerns haven't changed.

Nothing new.

 On Sat, May 9, 2015 at 3:34 PM, Alice Boxhall aboxh...@google.com wrote:
 On Thu, May 7, 2015 at 1:00 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, May 6, 2015 at 6:59 PM, Alice Boxhall aboxh...@google.com wrote:
  I definitely acknowledge is= may not be the ideal solution to the latter
  problem - it definitely has some holes in it, especially when you start
  adding author shadow roots to things - but I think it does have potential.
  I'd really like to be convinced that we either have a reasonable 
  alternative
  solution, or that it's not worth worrying about.
 
 I think it is worth worrying about, but I don't think it's worth
 holding up a minimal v1 of Custom Elements for. The way to get
 agreement among all parties is to do less. And then take baby steps
 from there.
 
 I can definitely see that logic.
 
 My concerns with this in practice are:
 
 - In the time between v1 and v2 (however long that ends up being) we are 
 left without any way to solve this problem, assuming we don't come up with 
 something else for v1. If developers start using custom elements where they 
 would previously have used a native element, this could well result in a 
 regression in accessibility for the web as a whole for this period, and we 
 will be stuck with the legacy of code written during this period for much 
 longer as well.

Web developers are already writing their own custom elements as a bunch of 
nested div's.  How does introducing custom elements make it worse?

The argument that it'll make things worse between v1 and v2 is moot because we 
haven't agreed on anything. The is= syntax may never be realized due to the 
various issues associated with it.

 - I realise that to some extent developers already aren't using native 
 elements, in part because of the styling issues we've discussed which also 
 affect is=. My concern here is that custom elements will further legitimise 
 this habit, which we've been making some recent progress in changing - we 
 stand to backslide on that effort. Having is= would allow us to roll it into 
 the use native elements where possible message rather than diluting it 
 with unless you're using a custom element in which case here's a checklist 
 which you're not going to look at of everything it should do until we come 
 up with an alternative.

In the case of stylizing elements, it doesn't really matter whether authors 
attach a shadow DOM to a builtin input element or to a div, because as soon as 
the shadow DOM replaces the rendered contents, we can't make assumptions about 
how to expose that element to AT.

 The way I look at this is that currently you have nothing, since only
 Chrome ships this. There's a chance to get three more browsers if you
 make some concessions on the warts. And then hopefully we can iterate
 from there in a more cooperative fashion.
 
 Here's where we differ, because:
 - I don't think it's a wart. I've given this a great deal of thought and I 
 keep ending up back at the current syntax when I try to think of reasonable 
 alternatives, even assuming we could magically fix all the implementation 
 issues with any alternative proposal.

FWIW, we (Apple) definitely dislike is= syntax as currently (or formerly) 
spec'ed.

 - I don't think shipping in one browser is nothing. People (both framework 
 authors and web page authors) are already writing code using is=.

Some developers are always going to use a feature only implemented by a single 
browser (ActiveX in Trident, NaCl in Chrome, current web components 
implementation in Chrome). In fact, why would any browser vendor ship a feature 
that's not going to be used by anyone? However, that doesn't mean the feature 
will be standardized or adopted by other browser vendors.

- R. Niwa




Re: Custom Elements: is=

2015-06-08 Thread Ryosuke Niwa

 On Jun 8, 2015, at 3:23 PM, Alice Boxhall aboxh...@google.com wrote:
 
 On Mon, Jun 8, 2015 at 3:12 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  On Jun 8, 2015, at 2:16 PM, Alice Boxhall aboxh...@google.com wrote:
 Web developers are already writing their own custom elements as a bunch of 
 nested div's.  How does introducing custom elements make it worse?
 
 I believe the rest of my comment already addressed this.

Sorry, I don't follow.

  - I realise that to some extent developers already aren't using native 
  elements, in part because of the styling issues we've discussed which 
  also affect is=. My concern here is that custom elements will further 
  legitimise this habit, which we've been making some recent progress in 
  changing - we stand to backslide on that effort. Having is= would allow 
  us to roll it into the use native elements where possible message 
  rather than diluting it with unless you're using a custom element in 
  which case here's a checklist which you're not going to look at of 
  everything it should do until we come up with an alternative.
 
 In the case of stylizing elements, it doesn't really matter if authors 
 attach a shadow DOM on top of a builtin input element or to a div because as 
 soon as the shadow DOM replaces the rendered contents, we can't make 
 assumptions about how to expose that element to AT.
 
 That's simply not true at all. If someone replaces the rendered content of a 
 `button`, we know that their intent is to create an element which is 
 semantically a button, and may even be rendered as a button in some cases. 
 Similarly for `input` with a `type` attribute. This is no different to 
 using an ARIA role as far as assistive technology is concerned.

Perhaps I should have said we can't _always_ make assumptions about how to 
expose that element to AT.

Consider creating a date picker back when we hadn't added type=date yet.  The 
shadow DOM of this date picker may contain various buttons and controls to 
move between months and pick a date.  Treating the entire control as a text 
field would provide a very poor user experience.

All the use cases I can think of that would let the UA safely make assumptions 
about the ARIA role of the element involve tweaking the appearance, which is 
better served by better styling mechanisms for form controls.


  The way I look at this is that currently you have nothing, since only
  Chrome ships this. There's a chance to get three more browsers if you
  make some concessions on the warts. And then hopefully we can iterate
  from there in a more cooperative fashion.
 
  Here's where we differ, because:
  - I don't think it's a wart. I've given this a great deal of thought and I 
  keep ending up back at the current syntax when I try to think of 
  reasonable alternatives, even assuming we could magically fix all the 
  implementation issues with any alternative proposal.
 
 FWIW, we (Apple) definitely dislike is= syntax as currently (or formerly) 
 spec'ed.
 
 Any chance you could go into more detail on that? What exactly is it you 
 dislike about it?

See our replies on the topic on public-webapps.  I don't have time to collect 
all our replies or restate all the problems we have with the is= syntax.  
(Perhaps I should put up a document somewhere to reference, since someone 
brings up the same question every six months or so, and I'm getting sick and 
tired of even saying that I don't have time to re-iterate the same points over 
and over...)

  - I don't think shipping in one browser is nothing. People (both 
  framework authors and web page authors) are already writing code using 
  is=.
 
 Some developers are always going to use a feature only implemented by a 
 single browser (ActiveX in Trident, NaCl in Chrome, current web components 
 implementation in Chrome). In fact, why would any browser vendor ship a 
 feature that's not going to be used by anyone? However, that doesn't mean 
 the feature will be standardized or adopted by other browser vendors.
 
 No, but unlike the first two examples, this is a proposed web standard.

Well, not everything people propose as a standard becomes a standard.  When a 
proposed standard is rejected, however, people tend to come up with an 
alternative proposal to address those issues.  As far as is= syntax is 
concerned, I haven't heard of any proposals to fix it so as to address 
everyone's concerns.

If you're really passionate about this feature, I suggest you can go dig all 
the discussions we've had in the last three years and see if you can resolve 
all the concerns raised by various participants of the working group.  Now, 
some of those concerns we and others have raised may not be relevant, and they 
may have changed their positions and opinions on matters.  Regardless, I don't 
think that repeatedly saying you like (or think we need) this feature without 
proposing fixes to the raised concerns is productive.

- R

Re: Custom Elements: is=

2015-06-08 Thread Ryosuke Niwa

 On Jun 8, 2015, at 4:37 PM, Alice Boxhall aboxh...@google.com wrote:
 
 
 
 On Mon, Jun 8, 2015 at 4:23 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 On Jun 8, 2015, at 3:23 PM, Alice Boxhall aboxh...@google.com wrote:
 
 On Mon, Jun 8, 2015 at 3:12 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  On Jun 8, 2015, at 2:16 PM, Alice Boxhall aboxh...@google.com wrote:
 Web developers are already writing their own custom elements as a bunch 
 of nested div's.  How does introducing custom elements make it worse?
 
 I believe the rest of my comment already addressed this.
 
 Sorry, I don't follow.
 
  - I realise that to some extent developers already aren't using native 
  elements, in part because of the styling issues we've discussed which 
  also affect is=. My concern here is that custom elements will further 
  legitimise this habit, which we've been making some recent progress in 
  changing - we stand to backslide on that effort. Having is= would allow 
  us to roll it into the use native elements where possible message 
  rather than diluting it with unless you're using a custom element in 
  which case here's a checklist which you're not going to look at of 
  everything it should do until we come up with an alternative.
 
 In the case of stylizing elements, it doesn't really matter if authors 
 attach a shadow DOM on top of a builtin input element or to a div because 
 as soon as the shadow DOM replaces the rendered contents, we can't make 
 assumptions about how to expose that element to AT.
 
 That's simply not true at all. If someone replaces the rendered content of 
 a `button`, we know that their intent is to create an element which is 
 semantically a button, and may even be rendered as a button in some cases. 
 Similarly for `input` with a `type` attribute. This is no different to 
 using an ARIA role as far as assistive technology is concerned.
 
 Perhaps I should have said we can't _always_ make assumptions about how to 
 expose that element to AT.
 
 Consider creating a date picker back when we hadn't added type=date yet.  
 The shadow DOM of this date picker may contain various buttons and 
 controls to move between months and pick a date.  Treating the entire 
 control as a text field would provide a very poor user experience.
 
 Ah, I see what you're saying now, thanks for clarifying.
 
 In this case, the custom element author can add semantic markup inside Shadow 
 DOM just as the browser does for a date picker currently - no assumptions 
 need to be made, since even in the case of type extensions the Shadow DOM is 
 available to the accessibility tree. I don't think it will ever treat the 
 entire control as a text field.

If you're fine with that, why don't you just let authors put ARIA roles in 
buttons' shadow DOM as well?

It would also mean that the author must override the ARIA role (text field) 
implicitly set by the UA in this case.  I'd say that's exactly the kind of 
feature that makes the Web platform annoying.

 All the use cases I can think of that let UA can safely make assumptions 
 about the ARIA role of the element involves tweaking the appearance, which 
 is better served by better styling mechanisms for form controls.
  
 I don't think it's an either/or question between is= and styling mechanisms 
 for form controls. I actually think we need both.

Why?  Having authors use two completely different mechanisms to define 
semantics seems like the worst of both worlds.

- R. Niwa




Re: [ime-api] [blink-dev] Removing IME API code from Blink

2015-05-27 Thread Ryosuke Niwa

 On May 27, 2015, at 11:46 AM, Travis Leithead travis.leith...@microsoft.com 
 wrote:
 
 I believe the use-case of avoiding UI clashes between site-driven 
 auto-complete lists and IME auto-complete boxes is still valid, and I think 
 the spec is still worth pushing to recommendation. However, I'd also like to 
 follow up on usage of the ms- prefixed API so that I can get an idea of what 
 its real usage is.

I agree that avoiding UI clashes between the auto-completions of the IME and 
the web page is a great use case, but I'm not convinced that exposing a 
ClientRect for the IME is the right API, as many Web developers aren't even 
aware of the UI challenges IME imposes. For example, a similar UI challenge 
emerges when dealing with auto-corrections in grammar/spell checking features 
as well.  It would be ideal if IME and spell/grammar corrections were handled 
in a similar manner so that Web apps supporting either feature would just work 
with both.

- R. Niwa




Re: [webcomponents] How about let's go with slots?

2015-05-22 Thread Ryosuke Niwa

 On May 21, 2015, at 11:33 PM, Wilson Page wilsonp...@me.com wrote:
 
 From experience building components for Firefox OS I think the 'default slot' 
 pattern will fulfill most of our use-cases. This means we won't have to 
 burden our component users by requiring them to add `slot=foo` all over the 
 place.

Could you clarify what you're referring to by 'default slot' pattern?

 Is it fair to say that if this declarative API lands in V1, it's unlikely 
 we'll see imperative API in V2? Have we not exhausted all the options?

At F2F, basically all browser vendors have agreed that we eventually want the 
imperative API, and we (Apple) are certainly interested in the imperative API 
being included in v2.

- R. Niwa




Re: [webcomponents] How about let's go with slots?

2015-05-18 Thread Ryosuke Niwa

 On May 18, 2015, at 11:18 AM, Dimitri Glazkov dglaz...@google.com wrote:
 
 On Fri, May 15, 2015 at 4:58 PM, Scott Miles sjmi...@google.com wrote:
 Polymer really wants Shadow DOM natively, and we think the `slot` proposal 
 can work, so maybe let's avoid blocking on design of an imperative API 
 (which we still should make in the long run).
 
 As our entire stack is built on Web Components, the Polymer team is highly 
 motivated to assist browser implementers come to agreement on a Shadow DOM 
 specification. Specifically, as authors of the `webcomponents-js` polyfills, 
 and more than one Shadow DOM shim, we are keenly aware of how difficult 
 Shadow DOM is to simulate without true native support.
 
 I believe we are in general agreement with the implementers that an 
 imperative API, especially one that cleanly explains platform behavior, is 
 an ideal end point for Shadow DOM distribution. However, as has been 
 discussed at length, it’s likely that a proper imperative API is blocked on 
 other still-to-mature technologies. For this reason, we would like for the 
 working group to focus on writing the spec for the declarative `slot` 
 proposal [1]. We're happy to participate in the process.
 
 [1]: 
 https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution.md#proposal-part-1-syntax-for-named-insertion-points
 
 It sounds like we are no longer in disagreement on the F. Slots Proposal 
 item from the April 2015 Meeting 
 (https://www.w3.org/wiki/Webapps/WebComponentsApril2015Meeting), so we don't 
 need to block it on the C. The imperative distribution API item.
 
 Given that all vendors agreed that C can wait until v2, we could just focus 
 on concretizing the slots proposal and then put a lid on Shadow DOM v1.
 
 What do you think, folks?

We (Apple) support focusing on the slot proposal and deferring the imperative 
API to v2 or at least not blocking the discussion for the named slots.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-08 Thread Ryosuke Niwa

 On May 7, 2015, at 7:20 PM, Hayato Ito hay...@chromium.org wrote:
 
 Ryosuke, could you file a bug for the spec if you find an uncomfortable part 
 in the spec?
 I want to understand exactly what you are trying to improve.

I don't think there is any issue with the spec per se.  What Anne and I both 
are pointing out is that event path isn't a style concept so node distribution 
can't be thought of as a style concept.

- R. Niwa




Re: Custom Elements: Upgrade et al

2015-05-07 Thread Ryosuke Niwa

 On May 6, 2015, at 9:48 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Thu, May 7, 2015 at 12:23 AM, Ryosuke Niwa rn...@apple.com wrote:
 Are you suggesting that cloning my-button will create a new instance of 
 my-button by invoking its constructor?
 
 No, I'm saying there would be another primitive operation, similar to
 the extended structured cloning proposed elsewhere, to accomplish
 cloning without side effects.

Okay. Do you have any idea or proposal as to what that would look like?  I'm 
still not fully grasping what you're proposing to do when we clone a custom 
element.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-07 Thread Ryosuke Niwa

 On May 6, 2015, at 11:10 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 On Wed, May 6, 2015 at 11:08 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, May 7, 2015 at 6:02 AM, Hayato Ito hay...@chromium.org wrote:
  I'm saying:
  - Composed tree is related with CSS.
  - Node distribution should be considered as a part of style concept.
 
 Right, I think Ryosuke and I simply disagree with that assessment. CSS
 operates on the composed tree (and forms a render tree from it).
 Events operate on the composed tree. Selection operates on the
 composed tree (likely, we haven't discussed this much).
 
 Selection operates on the render tree. The current selection API is 
 (completely) busted for modern apps, and a new one is needed that's based 
 around layout. Flexbox w/ order, positioned objects, distributions, grid, 
 none of them work with the DOM based API.

Please state your presumptions like that before making a statement such as 
"composed tree is a style concept".

Now, even if selection were to operate on the CSS box tree, on which I will not 
express an opinion, the event path is still not a style concept.  If you're 
proposing to make it a style concept, then I must object to that.

- R. Niwa




Re: Custom Elements: Upgrade et al

2015-05-06 Thread Ryosuke Niwa

 On May 6, 2015, at 8:37 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Wed, May 6, 2015 at 4:59 PM, Dimitri Glazkov dglaz...@google.com wrote:
 On Wed, May 6, 2015 at 7:50 AM, Domenic Denicola d...@domenic.me wrote:
 Can you explain how you envision cloning to work a bit more? Somehow there
 will be instances of these elements which are not created by their
 constructors?
 
 Also, how is it in any way similar to  how canvas or input work? I am
 pretty sure both of those are constructed during cloning.
 
 The proposal would be to change that. You can construct an instance.
 And you can create a second instance either by cloning the first or
 constructing a new one.
 
 The difference between canvas and input is that for input we
 also clone private state using this hook:
 
  https://dom.spec.whatwg.org/#concept-node-clone-ext
 
 canvas does no such thing.

Are you suggesting that cloning my-button will create a new instance of 
my-button by invoking its constructor?

- R. Niwa




Re: Shadow DOM: state of the distribution API

2015-05-06 Thread Ryosuke Niwa

 On May 6, 2015, at 10:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Wed, May 6, 2015 at 2:05 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 1) Synchronous, no flattening of content. A host element's shadow
 tree has a set of slots each exposed as a single content element to
 the outside. Host elements nested inside that shadow tree can only
 reuse slots from the outermost host element.
 
 2) Synchronous, flattening of content. Any host element nested
 inside a shadow tree can get anything that is being distributed.
 (Distributed content elements are unwrapped and not distributed
 as-is.)
 
 3) Lazy. A distinct global (similar to the isolated Shadow DOM story)
 is responsible for distribution so it cannot observe when distribution
 actually happens.
 
 Has at-end-of-microtask been debated rather than 1/2?

Yes. 1 and 2 don't really specify when, as Anne pointed out. Timing is discussed 
separately at [1].

 Synchronous always has the downside that the developer has to deal with 
 reentrancy.

I think we can make the same argument we made for custom elements. Since the 
distribution is done by each shadow DOM's implementation, there is a clear 
ownership. We also can't think of a use case in which `distribute` must be 
called recursively so we might want to ban it altogether (and throw) if that 
were the concern.
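To illustrate the "ban reentrancy" idea, a synchronous `distribute` hook that throws on recursive calls could look roughly like the following plain-JS sketch. `ShadowRootLike` and its callback are hypothetical names for illustration, not anything spec'ed:

```javascript
// Sketch of a synchronous distribute() hook that bans reentrancy, per the
// discussion above. ShadowRootLike and its callback are hypothetical names,
// not a spec'ed API.
class ShadowRootLike {
  constructor(distributeCallback) {
    this.distributeCallback = distributeCallback;
    this._distributing = false;
  }
  distribute(nodes) {
    if (this._distributing) {
      // No known use case requires recursive distribution, so throw.
      throw new Error("Reentrant call to distribute()");
    }
    this._distributing = true;
    try {
      return this.distributeCallback(nodes);
    } finally {
      this._distributing = false;
    }
  }
}
```

Throwing on reentrancy keeps the ownership model simple: each shadow DOM's implementation drives its own distribution and never re-enters itself.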

 End-of-microtask does have the downside that API calls which
 synchronously return layout information get the wrong values. Or
 rather, values that might change at end of microtask.
 
 But calling sync layout APIs is generally a bad idea for perf anyway.
 If we introduced async versions of getComputedStyle,
 getBoundingClientRect etc, then we could make those wait to return a
 value until all content had been distributed into insertion points.
 
 Of course, adding async layout accessors is a non-trivial project, but
 it's long over due.

I think the biggest concern with this approach is that it essentially forces 
existing web pages to be rewritten in order to adopt shadow DOM. I'm not 
certain that it's the end of the world, but it will surely raise the barrier to 
using shadow DOM.

[1] 
https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Imperative-API-for-Node-Distribution-in-Shadow-DOM.md#api-for-triggering-distribution




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-06 Thread Ryosuke Niwa

 On May 5, 2015, at 10:53 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Wed, May 6, 2015 at 3:22 AM, Ryosuke Niwa rn...@apple.com wrote:
 Where?  I have yet to see a use case for which selective redistribution 
 of nodes (i.e. redistributing only a non-empty strict subset of nodes from 
 an insertion point) is required.
 
 Isn't that what e.g. select does? That is, select only cares about
 option and optgroup elements that are passed to it.

Or it could just distribute all the elements and have CSS do:
```css
::content * { display:none; }
::content option, optgroup { display:block; }
```

Dimitri just added a document describing how we can turn partial distribution 
into whole distribution here (thanks Dimitri!):
https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Partial-Redistributions-Analysis.md

- R. Niwa




Re: Shadow DOM: state of the distribution API

2015-05-06 Thread Ryosuke Niwa

 On May 6, 2015, at 2:39 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 The 3 proposal is what the houdini effort is already researching for custom 
 style/layout/paint. I don't think it's acceptable to make all usage of Shadow 
 DOM break when used with libraries that read layout information today, ie. 
 offsetTop must work. I also don't think it's acceptable to introduce new 
 synchronous hooks and promote n^2 churn in the distribution.

Sorry, I don't follow. If we make offsetTop synchronously return the correct 
value, then authors can easily write code that runs in Ω(n^2) time by reading 
offsetTop between adding/removing direct children of a shadow host.  On the 
other hand, if we're trying to prevent O(n^2) behavior, then we should be 
adding an API to retrieve layout information asynchronously.
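As a toy model of that Ω(n^2) pattern (no DOM involved; `distributeAll` here just stands in for whatever work a forced layout flush would redo on each synchronous read):

```javascript
// Toy model: if every synchronous layout read forces a full distribution
// pass over all current children, interleaving reads with insertions does
// 1 + 2 + ... + n = Θ(n^2) total work.
let workDone = 0;

function distributeAll(children) {
  workDone += children.length; // stands in for a full distribution pass
}

const children = [];
for (let i = 0; i < 100; i++) {
  children.push(`child-${i}`); // host.appendChild(...)
  distributeAll(children);     // forced by reading host.offsetTop
}
// For 100 children, workDone ends up at 5050.
```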

- R. Niwa




Re: Custom Elements: is=

2015-05-06 Thread Ryosuke Niwa

 On May 6, 2015, at 6:25 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 Open issues are kept track of here:
 
  https://wiki.whatwg.org/wiki/Custom_Elements
 
 I think we reached rough consensus at the Extensible Web Summit that
 is= does not do much, even for accessibility. Accessibility is
 something we need to tackle low-level by figuring out how builtin
 elements work:
 
  https://github.com/domenic/html-as-custom-elements
 
 And we need to tackle it high-level by making it easier to style
 builtin elements:
 
  http://dev.w3.org/csswg/css-forms/
 
 And if the parser functionality provided by is= is of great value,
 we should consider parsing elements with a hyphen in them differently.
 Similar to how script and template are allowed pretty much
 everywhere.
 
 Therefore, I propose that we move subclassing of builtin elements to
 v2, remove is= from the specification, and potentially open an issue
 on HTML parser changes.

We (Apple) support this proposal.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-06 Thread Ryosuke Niwa

 On May 6, 2015, at 6:18 PM, Hayato Ito hay...@chromium.org wrote:
 
 On Wed, May 6, 2015 at 10:22 AM Ryosuke Niwa rn...@apple.com wrote:
 
  On May 5, 2015, at 11:55 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 
  On Tue, May 5, 2015 at 11:20 AM, Ryosuke Niwa rn...@apple.com wrote:
  On May 4, 2015, at 10:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
  On Tue, May 5, 2015 at 6:58 AM, Elliott Sprehn espr...@chromium.org 
  wrote:
  We can solve this
  problem by running the distribution code in a separate scripting context
  with a restricted (distribution specific) API as is being discussed for
  other extension points in the platform.
 
  That seems like a lot of added complexity, but yeah, that would be an
  option I suppose. Dimitri added something like this to the imperative
  API proposal page a couple of days ago.
 
 
  One thing to consider here is that we very much consider distribution a
  style concept. It's about computing who you inherit style from and 
  where you
  should be in the box tree. It just so happens it's also leveraged in 
  event
  dispatch too (like pointer-events). It happens asynchronously from DOM
  mutation as needed just like style and reflow though.
 
  I don't really see it that way. The render tree is still computed from
  the composed tree. The composed tree is still a DOM tree, just
  composed from various other trees. In the open case you can access
  it synchronously through various APIs (e.g.  if we keep that for
  querySelector() selectors and also deepPath).
 
  I agree. I don't see any reason node distribution should be considered as 
  a style concept. It's a DOM concept. There is no CSS involved here.
 
  Yes there is.  As Elliot stated in the elided parts of his quoted
  response above, most of the places where we update distribution are
  for CSS or related concerns:
 
  # 3 event related
  # 3 shadow dom JS api
 
 These two have nothing to do with styles or CSS.
 
 I'd like to inform all guys in this thread that Composed Tree is for 
 resolving CSS inheritance by the definition.
 See the Section 2.4 Composed Trees in the spec:
 http://w3c.github.io/webcomponents/spec/shadow/#composed-trees
 
 Let me quote:
  If an element doesn't participate in a composed tree whose root node is a 
  document, the element must not appear in the formating structure [CSS21] 
  nor create any CSS box. This behavior must not be overridden by setting the 
  'display' property.
 
  In resolving CSS inheritance, an element must inherit from the parent node 
  in the composed tree, if applicable.
 
 The motivation of a composed tree is to determine the parent node in 
 resolving CSS inheritance. There is no other significant usages, except for 
 event path.

Event path / retargeting is definitely event related, and it (e.g. deepPath) 
is definitely a part of the shadow DOM JS API.  Again, they have nothing to do 
with styles or CSS.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-05 Thread Ryosuke Niwa

 On May 5, 2015, at 11:55 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 
 On Tue, May 5, 2015 at 11:20 AM, Ryosuke Niwa rn...@apple.com wrote:
 On May 4, 2015, at 10:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, May 5, 2015 at 6:58 AM, Elliott Sprehn espr...@chromium.org 
 wrote:
 We can solve this
 problem by running the distribution code in a separate scripting context
 with a restricted (distribution specific) API as is being discussed for
 other extension points in the platform.
 
 That seems like a lot of added complexity, but yeah, that would be an
 option I suppose. Dimitri added something like this to the imperative
 API proposal page a couple of days ago.
 
 
 One thing to consider here is that we very much consider distribution a
 style concept. It's about computing who you inherit style from and where 
 you
 should be in the box tree. It just so happens it's also leveraged in event
 dispatch too (like pointer-events). It happens asynchronously from DOM
 mutation as needed just like style and reflow though.
 
 I don't really see it that way. The render tree is still computed from
 the composed tree. The composed tree is still a DOM tree, just
 composed from various other trees. In the open case you can access
 it synchronously through various APIs (e.g.  if we keep that for
 querySelector() selectors and also deepPath).
 
 I agree. I don't see any reason node distribution should be considered as a 
 style concept. It's a DOM concept. There is no CSS involved here.
 
 Yes there is.  As Elliot stated in the elided parts of his quoted
 response above, most of the places where we update distribution are
 for CSS or related concerns:
 
 # 3 event related
 # 3 shadow dom JS api

These two have nothing to do with styles or CSS.

 I have issues with the argument that we should do it lazily.  On one hand, 
 if node distribution is so expensive that we need to do it lazily, then it's 
 unacceptable to make event dispatching so much slower.  On the other hand, 
 if node distribution is fast, as it should be, then there is no reason we 
 need to do it lazily.
 
 The problem is really the redistributions. If we instead had explicit 
 insertion points under each shadow host, then we wouldn't really need 
 redistributions at all, and node distribution can happen in O(1) per child 
 change.
 
 As repeatedly stated, redistribution appears to be a necessity for
 composition to work in all but the most trivial cases.

Where?  I have yet to see a use case for which selective redistribution of 
nodes (i.e. redistributing only a non-empty strict subset of nodes from an 
insertion point) is required.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-05 Thread Ryosuke Niwa

 On May 4, 2015, at 10:20 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, May 5, 2015 at 6:58 AM, Elliott Sprehn espr...@chromium.org wrote:
 We can solve this
 problem by running the distribution code in a separate scripting context
 with a restricted (distribution specific) API as is being discussed for
 other extension points in the platform.
 
 That seems like a lot of added complexity, but yeah, that would be an
 option I suppose. Dimitri added something like this to the imperative
 API proposal page a couple of days ago.
 
 
 One thing to consider here is that we very much consider distribution a
 style concept. It's about computing who you inherit style from and where you
 should be in the box tree. It just so happens it's also leveraged in event
 dispatch too (like pointer-events). It happens asynchronously from DOM
 mutation as needed just like style and reflow though.
 
 I don't really see it that way. The render tree is still computed from
 the composed tree. The composed tree is still a DOM tree, just
 composed from various other trees. In the open case you can access
 it synchronously through various APIs (e.g.  if we keep that for
 querySelector() selectors and also deepPath).

I agree. I don't see any reason node distribution should be considered as a 
style concept. It's a DOM concept. There is no CSS involved here.

I have issues with the argument that we should do it lazily.  On one hand, if 
node distribution is so expensive that we need to do it lazily, then it's 
unacceptable to make event dispatching so much slower.  On the other hand, if 
node distribution is fast, as it should be, then there is no reason we need to 
do it lazily.

The problem is really the redistributions. If we instead had explicit insertion 
points under each shadow host, then we wouldn't really need redistributions at 
all, and node distribution can happen in O(1) per child change.
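A toy model of what explicit insertion points buy: routing a child to its slot becomes a map lookup, so each child change costs O(1) with no tree-wide redistribution pass. The slot names `title` and `body` are made up for illustration:

```javascript
// Toy model of explicit insertion points under a shadow host: each child is
// routed to its named slot when it's inserted, so a single child change costs
// O(1) -- no tree-wide redistribution pass. Slot names are illustrative.
const slots = new Map([["title", []], ["body", []]]);

function onChildInserted(child) {
  const slot = slots.get(child.slot); // direct lookup, constant time
  if (slot) slot.push(child.name);
}

onChildInserted({ slot: "title", name: "h1" });
onChildInserted({ slot: "body", name: "p" });
```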

- R. Niwa



Re: Inheritance Model for Shadow DOM Revisited

2015-05-01 Thread Ryosuke Niwa

 On May 1, 2015, at 1:04 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Thu, Apr 30, 2015 at 11:35 PM, Ryosuke Niwa rn...@apple.com wrote:
 To start off, I can think of three major ways by which subclass wants to 
 interact with its superclass:
 1. Replace what superclass shows entirely by its own content - e.g. grab the 
 device context and draw everything by yourself.
 
 So this requires either replacing or removing superclass' ShadowRoot.
 
 2. Override parts of superclass' content - e.g. subclass overrides virtual 
 functions superclass provided to draw parts of the component/view.
 
 This is where you directly access superclass' ShadowRoot I assume and
 modify things?

In the named slot approach, these overridable parts will be exposed to 
subclasses as an overridable slot. In terms of an imperative API, it means that 
the superclass has a virtual method (probably with a symbol name) that can get 
overridden by a subclass. The default implementation of such a virtual method 
does nothing, and shows the fallback contents of the slot.

 3. Fill holes superclass provided - e.g. subclass implements abstract 
 virtual functions superclass defined to delegate the work.
 
 This is the part that looks like it might interact with distribution, no?

With the named slot approach, we can also model this as an abstract method on 
the superclass that a subclass must implement. The superclass' shadow DOM 
construction code then calls this method to fill the slot.
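Both interaction modes just described, the overridable slot with fallback contents (2) and the abstract slot a subclass must fill (3), can be sketched as symbol-named virtual methods in plain JavaScript. The class and slot names here are illustrative only:

```javascript
// Sketch of patterns 2 and 3 as symbol-named virtual methods: the header slot
// is overridable with fallback contents; the body slot is abstract and must
// be implemented by the subclass. All names here are illustrative.
const renderHeaderSlot = Symbol("renderHeaderSlot");
const renderBodySlot = Symbol("renderBodySlot");

class BaseCard {
  render() {
    // The superclass' shadow DOM construction calls the slot methods.
    return `<header>${this[renderHeaderSlot]()}</header>` +
           `<div>${this[renderBodySlot]()}</div>`;
  }
  // Overridable slot: the default implementation shows the fallback contents.
  [renderHeaderSlot]() { return "default header"; }
  // Abstract slot: a subclass must provide the implementation.
  [renderBodySlot]() { throw new Error("subclass must fill the body slot"); }
}

class ProfileCard extends BaseCard {
  [renderBodySlot]() { return "profile body"; }
}
```

Using a symbol for the method name keeps the slot-filling protocol out of the subclass' ordinary string-named namespace.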

- R. Niwa




Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Ryosuke Niwa

 On Apr 29, 2015, at 9:17 PM, Hayato Ito hay...@chromium.org wrote:
 
 Thanks. As far as my understanding is correct, the conclusions so far are:
 
 - There is no use cases which shadow as function can't support, but 
 content slot can support.
 - there are use cases which shadow as function can support, but content 
 slot can't support.

I disagree. What shadow as function provides is an extra syntax by which 
authors can choose elements. That's not a use case. A use case is a solution 
for a concrete user scenario such as building a social network button.

 - shadow as function is more expressive than content slot

Again, I disagree.

 - content slot is trying to achieve something by removing expressiveness 
 from web developers, instead of trusting them.
 
 I still don't understand fully what the proposal is trying to achieve. I've 
 never heard such a complain, content select is too expressive and easy to 
 be misused. Please remove it, from web developers.
 
 I think any good APIs could be potentially wrongly used by a web developer. 
 But that shouldn't be a reason that we can remove a expressive API from web 
 developers who can use it correctly and get benefits from the expressiveness.

Now let me draw an analogy between C++ and assembly language.

- There are no use cases which assembly can't support, but C++ can support.
- There are use cases which assembly can support, but C++ can't support.
- Assembly language is more expressive than C++.
- C++ is trying to achieve something by removing expressiveness from 
programmers, instead of trusting them.

Does that mean we should all be coding in assembly? Certainly not.

For a more relevant analogy, one could construct the entire document using 
JavaScript without any HTML at all, since the DOM API exposed to JavaScript can 
construct a set of trees which is a strict superset of what the HTML tree 
building algorithm can generate. Yet we don't see that happening even in 
top-tier Web apps just because the DOM API is more expressive. The vast 
majority of Web apps still use plenty of templates and declarative formats to 
construct the DOM for simplicity and clarity even though imperative 
alternatives are strictly more powerful.

Why did we abandon XHTML 2.0? It was certainly more expressive. Why not SGML? 
It's a lot more expressive than XML; you can re-define special characters as 
you'd like. Because expressiveness is not necessarily the most desirable 
characteristic of anything by itself. The shape of a solution we need depends 
on the kind of problems we're solving.

- R. Niwa




Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 1:47 AM, Hayato Ito hay...@chromium.org wrote:
 
 Thanks, let me update my understanding:
 
 - There is no use cases which shadow as function can't support, but 
 content slot can support.
 - The purpose of the proposal is to remove an *extra* syntax. There is no 
 other goals.
 - There is no reason to consider content slot proposal if we have a use 
 case which this *extra* syntax can achieve.

That's not at all what I'm saying. As far as we (Apple) are concerned, 
shadow as a function is a mere proposal, just as much as our content 
slot is a proposal, since you've never convinced us that shadow as a 
function is a good solution for shadow DOM inheritance. Both proposals should 
be evaluated based on concrete use cases.

And even if there are use cases for which a given proposal (either shadow as 
a function or named slot) doesn't adequately address, there are multiple 
options to consider:
1. Reject the use case because it's not important
2. Defer the use case for future extensions
3. Modify the proposal as needed
4. Reject the proposal because above options are not viable

 I'm also feeling that several topic are mixed in the proposal, Imperative 
 APIs, Multiple Templates and content slot, which makes me hard to 
 understand the goal of each.
 Can I assume that the proposal is trying to remove content select, not 
 only from such a multiple templates, but also from everywhere?

As I understand the situation, the last F2F's resolution is to remove content 
select entirely. That's not a proposal but rather the tentative consensus of 
the working group. If you'd like, we can initiate a formal CfC process to reach 
a consensus on this matter although I highly doubt the outcome will be 
different given the attendees of the meeting.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 5:12 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Mon, Apr 27, 2015 at 11:05 PM, Ryosuke Niwa rn...@apple.com wrote:
 The other thing I would like to explore is what an API would look like
 that does the subclassing as well.
 
 For the slot approach, we can model the act of filling a slot as if 
 attaching a shadow root to the slot and the slot content going into the 
 shadow DOM for both content distribution and filling of slots by subclasses.
 
 Now we can do this in either of the following two strategies:
 1. Superclass wants to see a list of slot contents from subclasses.
 2. Each subclass overrides previous distribution done by superclass by 
 inspecting insertion points in the shadow DOM and modifying them as needed.
 
 With the existence of closed shadow trees, it seems like you'd want to
 allow for the superclass to not have to share its details with the
 subclass.

Neither approach needs to expose the internals of the superclass' shadow DOM.  
In 1, what the superclass sees is a list of proxies of the slot contents the 
subclasses provided.  In 2, what the subclass sees is a list of wrappers of 
the overridable insertion points the superclass defined.
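For strategy 1, the proxy idea could be sketched like this in plain JavaScript; the whitelisted `name` property and the shape of the content objects are made up for illustration, and no real shadow DOM API is implied:

```javascript
// Sketch of strategy 1: the superclass sees only proxies of subclass-provided
// slot contents, so subclass internals stay hidden. The exposed "name"
// property and the content shape are purely illustrative.
const slotContents = [{ name: "icon", node: "<img>" /* subclass internal */ }];

const proxies = slotContents.map(content => new Proxy(content, {
  get(target, prop) {
    // Expose only the whitelisted property; hide everything else.
    return prop === "name" ? target.name : undefined;
  }
}));
```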

I can't think of an inheritance model in any programming language in which 
overridable pieces are unknown to subclasses.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 5:12 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Mon, Apr 27, 2015 at 11:05 PM, Ryosuke Niwa rn...@apple.com wrote:
 I’m writing any kind of component that creates a shadow DOM, I’d just keep 
 references to all my insertion points instead of querying them each time I 
 need to distribute nodes.
 
 I guess that is true if you know you're not going to modify your
 insertion points or shadow tree. I would be happy to update the gist
 to exclude this parameter and instead use something like
 
  shadow.querySelector(content)
 
 somewhere. It doesn't seem important.

FYI, I've summarized everything we've discussed so far in 
https://gist.github.com/rniwa/2f14588926e1a11c65d3.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 6:00 AM, Domenic Denicola d...@domenic.me wrote:
 
 This essentially forces distribution to happen since you can observe the 
 result of distribution this way. Same with element.offsetWidth etc. And 
 that's not necessarily problematic,
 
 OK. So the claim that the current spec cannot be interoperably implemented is 
 false? (Not that I am a huge fan of content select, but I want to make sure 
 we have our arguments against it lined up and on solid footing.)
 
 but it is problematic if you want to do an imperative API as I tried to 
 explain in the bit you did not quote back.
 
 Sure, let's dig in to that claim now. Again, this is mostly clarifying 
 probing.
 
 Let's say we had an imperative API. As far as I understand from the gist, one 
 of the problems is when do we invoke the distributedCallback. If we use 
 MutationObserve time, then inconsistent states can be observed, etc.
 
 Why can't we say that this distributedCallback must be invoked at the same 
 time that the current spec updates the distribution result? Since it sounds 
 like there is no interop problem with this timing, I don't understand why 
 this wouldn't be an option.

There will be an interop problem. Consider the following example:

```js
someNode = ~
myButton.appendChild(someNode); // (1)
absolutelyPositionElement.offsetTop; // (2)
```

Now suppose absolutelyPositionElement is some element in a disjoint subtree of 
the document. Heck, it could even be in a separate iframe. In some UAs, (2) 
will trigger style resolution and a layout update. Because those UAs can't 
tell whether the redistribution of myButton affects (2), they will update the 
distribution per the spec text that says the distribution result must be 
updated before any _use_ of the distribution result.

Yet in other UAs, `offsetTop` may have been cached, and the UA might be smart 
enough to detect that (1) doesn't affect the result of 
`absolutelyPositionElement.offsetTop` because the two elements are in 
different parts of the tree and are independent for the purpose of style 
resolution and layout. In such UAs, (2) does not trigger redistribution 
because it does not use the distribution result in order to compute this value.

In general, there are thousands of other DOM and CSS OM APIs that may or may 
not _use_ the distribution result depending on the implementation.

- R. Niwa




Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 4:43 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, Apr 28, 2015 at 7:09 PM, Ryosuke Niwa rn...@apple.com wrote:
 The problem with shadow as function is that the superclass implicitly 
 selects nodes based on a CSS selector so unless the nodes a subclass wants 
 to insert matches exactly what the author of superclass considered, the 
 subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select=input.foo, then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped in 
 a span.
 
 So what if we flipped this as well and came up with an imperative API
 for shadow as a function. I.e. shadow as an actual function?
 Would that give us agreement?

We object on the basis that shadow as a function is a fundamentally backwards 
way of doing inheritance.  If you have a MyMapView and define a subclass 
MyScrollableMapView to make it scrollable, then MyScrollableMapView must be a 
MyMapView.  It doesn't make any sense for MyScrollableMapView, for example, to 
be a ScrollView that then contains a MyMapView.  That's a has-a relationship, 
which is appropriate for composition.
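In plain class terms, using the hypothetical names from the example:

```javascript
// MyScrollableMapView *is a* MyMapView; scrolling refines the superclass
// rather than wrapping it. Names are the hypothetical ones from above.
class MyMapView {
  describe() { return "map"; }
}
class MyScrollableMapView extends MyMapView {
  describe() { return "scrollable " + super.describe(); }
}

const view = new MyScrollableMapView();
// view instanceof MyMapView holds: the is-a relationship, not has-a.
```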

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 9:25 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 On Thu, Apr 30, 2015 at 8:57 PM, Ryosuke Niwa rn...@apple.com wrote:
 ...
 
  The return value of (2) is the same in either case. There is no observable 
  difference. No interop issue.
 
  Please file a bug for the spec with a concrete example if you can find a 
  observable difference due to the lazy-evaluation of the distribution.
 
 The problem isn't so much that the current shadow DOM specification has an 
 interop issue because what we're talking here, as the thread title clearly 
 communicates, is the imperative API for node distribution, which doesn't 
 exist in the current specification.
 
 In particular, invoking user code at the timing specified in section 3.4 
 which states if any condition which affects the distribution result 
 changes, the distribution result must be updated before any use of the 
 distribution result introduces a new interoperability issue because before 
 any use of the distribution result is implementation dependent.  e.g. 
 element.offsetTop may or not may use the distribution result depending on 
 UA.  Furthermore, it's undesirable to precisely spec this since doing so 
 will impose a serious limitation on what UAs could optimize in the future.
 
 
 element.offsetTop must use the distribution result, there's no way to know 
 what your style is without computing your distribution. This isn't any 
 different than getComputedStyle(...).color needing to flush style, or 
 getBoundingClientRect() needing to flush layout.

That is true only if the distribution of a given node can affect the style of 
the element. There are cases in which UAs can deduce that such is not the case 
and optimize the style recalculation away, e.g. two elements belonging to two 
different documents.

Another example is element.isContentEditable. Under certain circumstances 
WebKit needs to resolve styles in order to determine the value of this 
property, which then uses the distribution result.

 Distribution is about computing who your parent and siblings are in the box 
 tree, and where you should inherit your style from. Doing it lazily is not 
 going to be any worse in terms of interop than defining new properties that 
 depend on style.

The problem is that different engines have different mechanisms to deduce style 
dependencies between elements.

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 9:01 PM, Hayato Ito hay...@chromium.org wrote:
 
 Thanks, however, we're talking about 
 https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0442.html.

Ah, I think there was some miscommunication there. I don't think anyone is 
claiming that the current spec results in interop issues. The currently spec'ed 
timing is only problematic when we try to invoke an author-defined callback at 
that moment. If we never added an imperative API, or if the imperative API we 
add doesn't invoke user code at the currently spec'ed timing, we don't have any 
interop problem.

- R. Niwa




Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 2:44 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
 On Apr 30, 2015, at 2:29 PM, Brian Kardell bkard...@gmail.com wrote:
 
 On Thu, Apr 30, 2015 at 2:00 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 On Apr 30, 2015, at 4:43 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, Apr 28, 2015 at 7:09 PM, Ryosuke Niwa rn...@apple.com wrote:
 The problem with shadow as function is that the superclass implicitly 
 selects nodes based on a CSS selector, so unless the nodes a subclass 
 wants to insert match exactly what the author of the superclass considered, 
 the subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select="input.foo", then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped 
 in a span.
 
 So what if we flipped this as well and came up with an imperative API
 for shadow as a function. I.e. shadow as an actual function?
 Would that give us agreement?
 
 We object on the basis that shadow as a function is a fundamentally 
 backwards way of doing the inheritance.  If you have a MyMapView and define 
 a subclass MyScrollableMapView to make it scrollable, then 
 MyScrollableMapView must be a MyMapView.  It doesn't make any sense for 
 MyScrollableMapView, for example, to be a ScrollView that then contains a 
 MyMapView.  That's a has-a relationship, which is appropriate for composition.
 
 
 Is there really a hard need for inheritance over composition? Won't
 composition ability + an imperative API that allows you to properly
 delegate to the stuff you contain be just fine for a v1?
 
 Per resolutions in F2F last Friday, this is a discussion for v2 since we're 
 definitely not adding multiple generations of shadow DOM in v1.
 
 However, we should have a sound plan for inheritance in v2 and make sure our 
 imperative API is forward compatible with it. So the goal here is to come up 
 with some plan for inheritance so that we can be confident that our 
 inheritance API is not completely busted.

Sorry, I meant to say our *imperative* API is not completely busted.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Ryosuke Niwa
On Apr 30, 2015, at 8:17 PM, Hayato Ito hay...@chromium.org wrote:
 On Fri, May 1, 2015 at 2:59 AM Ryosuke Niwa rn...@apple.com wrote:
 
  On Apr 30, 2015, at 6:00 AM, Domenic Denicola d...@domenic.me wrote:
 
  This essentially forces distribution to happen since you can observe the 
  result of distribution this way. Same with element.offsetWidth etc. And 
  that's not necessarily problematic,
 
  OK. So the claim that the current spec cannot be interoperably implemented 
  is false? (Not that I am a huge fan of content select, but I want to 
  make sure we have our arguments against it lined up and on solid footing.)
 
  but it is problematic if you want to do an imperative API as I tried to 
  explain in the bit you did not quote back.
 
  Sure, let's dig in to that claim now. Again, this is mostly clarifying 
  probing.
 
  Let's say we had an imperative API. As far as I understand from the gist, 
   one of the problems is when to invoke the distributedCallback. If we 
   use MutationObserver timing, then inconsistent states can be observed, etc.
 
  Why can't we say that this distributedCallback must be invoked at the same 
  time that the current spec updates the distribution result? Since it 
  sounds like there is no interop problem with this timing, I don't 
  understand why this wouldn't be an option.
 
 There will be an interop problem. Consider a following example:
 
 
 The return value of (2) is the same in either case. There is no observable 
 difference. No interop issue.
 
  Please file a bug for the spec with a concrete example if you can find an 
  observable difference due to the lazy evaluation of the distribution.

The problem isn't so much that the current shadow DOM specification has an 
interop issue, because what we're talking about here, as the thread title clearly 
communicates, is the imperative API for node distribution, which doesn't exist 
in the current specification.

In particular, invoking user code at the timing specified in section 3.4, which 
states "if any condition which affects the distribution result changes, the 
distribution result must be updated before any use of the distribution result", 
introduces a new interoperability issue because "before any use of the 
distribution result" is implementation dependent.  e.g. element.offsetTop may 
or may not use the distribution result depending on the UA.  Furthermore, it's 
undesirable to precisely spec this since doing so will impose a serious 
limitation on what UAs could optimize in the future.

- R. Niwa




Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Ryosuke Niwa

 On Apr 30, 2015, at 4:43 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, Apr 28, 2015 at 7:09 PM, Ryosuke Niwa rn...@apple.com wrote:
 The problem with shadow as function is that the superclass implicitly 
 selects nodes based on a CSS selector, so unless the nodes a subclass wants 
 to insert match exactly what the author of the superclass considered, the 
 subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select="input.foo", then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped in 
 a span.
 
 So what if we flipped this as well and came up with an imperative API
 for shadow as a function. I.e. shadow as an actual function?

To start off, I can think of three major ways by which a subclass wants to 
interact with its superclass:
1. Replace what the superclass shows entirely with its own content - e.g. grab the 
device context and draw everything yourself.
2. Override parts of the superclass' content - e.g. the subclass overrides virtual 
functions the superclass provided to draw parts of the component/view.
3. Fill holes the superclass provided - e.g. the subclass implements abstract virtual 
functions the superclass defined to delegate the work.
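The three styles can be sketched with ordinary classes (a hand-wavy analogy in plain JavaScript, not the Web Components API; every class and method name here is invented for illustration):

```js
// 1. Replace entirely: the subclass ignores the superclass rendering outright.
class MapView {
  render() { return ['map']; }
}
class CustomDrawnMapView extends MapView {
  render() { return ['custom-drawing']; } // grab the "context" and draw everything yourself
}

// 2. Override parts: the superclass calls overridable hooks for sub-parts.
class MapViewWithHooks {
  render() { return [this.renderHeader(), 'map']; }
  renderHeader() { return 'default-header'; }
}
class FancyHeaderMapView extends MapViewWithHooks {
  renderHeader() { return 'fancy-header'; } // override just this one part
}

// 3. Fill holes: the superclass defines an abstract hole the subclass must supply.
class AbstractMapView {
  render() { return ['map', this.renderOverlay()]; } // delegated to the subclass
  renderOverlay() { throw new Error('subclass must implement renderOverlay'); }
}
class TrafficMapView extends AbstractMapView {
  renderOverlay() { return 'traffic-overlay'; }
}

console.log(new CustomDrawnMapView().render()); // ['custom-drawing']
console.log(new FancyHeaderMapView().render()); // ['fancy-header', 'map']
console.log(new TrafficMapView().render());     // ['map', 'traffic-overlay']
```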

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-29 Thread Ryosuke Niwa

 On Apr 29, 2015, at 4:37 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 
 On Wed, Apr 29, 2015 at 4:15 PM, Dimitri Glazkov dglaz...@google.com wrote:
 On Mon, Apr 27, 2015 at 8:48 PM, Ryosuke Niwa rn...@apple.com wrote:
 One thing that worries me about the `distribute` callback approach (a.k.a.
 Anne's approach) is that it bakes distribution algorithm into the platform
 without us having thoroughly studied how subclassing will be done upfront.
 
 Mozilla tried to solve this problem with XBL, and they seem to think what 
 they have isn't really great. Google has spent multiple years working on 
 this problem but they have come around to say their solution, multiple 
 generations of shadow DOM, may not be as great as they thought it would be.
 Given that, I'm quite terrified of making the same mistake in spec'ing how
 distribution works and later regretting it.
 
 At least the way I understand it, multiple shadow roots per element and
 distributions are largely orthogonal bits of machinery that solve largely
 orthogonal problems.
 
 Yes.  Distribution is mainly about making composition of components
 work seamlessly, so you can easily pass elements from your light dom
 into some components you're using inside your shadow dom.  Without
 distribution, you're stuck with either:

As I clarified my point in another email, neither I nor anyone else is 
questioning the value of the first-degree of node distribution from the light 
DOM into insertion points of a shadow DOM.  What I'm questioning is the value 
of the capability to selectively re-distribute those nodes in a tree with 
nested shadow DOMs.

 * components have to be explicitly written with the expectation of 
 being composed into other components, writing their own <content select> 
 *to target the <content> elements of the outer shadow*, which 
 is also extremely terribad.

Could you give me a concrete use case in which such inspection of content 
elements in the light DOM is required without multiple generations of shadow 
DOM?  In all the use cases I've studied without multiple generations of shadow 
DOM, none required the ability to filter nodes inside a content element.

 Distribution makes composition *work*, in a fundamental way.  Without it, you 
 simply don't have the ability to use components inside of components except 
 in special cases.

Could you give us a concrete example in which selective re-distribution of 
nodes is required? That'll settle this discussion/question altogether.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-29 Thread Ryosuke Niwa

 On Apr 29, 2015, at 4:16 PM, Dimitri Glazkov dglaz...@google.com wrote:
 
 On Tue, Apr 28, 2015 at 1:52 PM, Ryosuke Niwa rn...@apple.com wrote:
 I've updated the gist to reflect the discussion so far:
 https://gist.github.com/rniwa/2f14588926e1a11c65d3
 
 Please leave a comment if I missed anything.
 
 Thank you for doing this. There are a couple of unescaped tags in 
 https://gist.github.com/rniwa/2f14588926e1a11c65d3#extention-to-custom-elements-for-consistency,
  I think?
 
 Any chance you could move it to the Web Components wiki? That way, we could 
 all collaborate.

Sure, what's the preferred work flow? Fork and push a PR?

- R. Niwa.




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-29 Thread Ryosuke Niwa

 On Apr 29, 2015, at 5:12 PM, Justin Fagnani justinfagn...@google.com wrote:
 
 Here's one case of redistribution: 
 https://github.com/Polymer/core-scaffold/blob/master/core-scaffold.html#L122
 
 Any time you see <content> inside a custom element it's potentially 
 redistribution. Here there's one that is (line 122), one that could be 
 (line 116), and one that definitely isn't (line 106).

Thank you very much for an example. I'm assuming core-header-panel is [1]? It 
grabs core-toolbar. It looks to me that we could also replace line 122 with:

```html
<content class="core-header" select="core-toolbar, .core-header"></content>
<content select="*"></content>
```

and you wouldn't need redistribution. I wouldn't argue that it provides 
better developer ergonomics, but there's a serious trade-off here.

If we natively supported redistribution and always triggered it via the `distribute` 
callback, then it may not be acceptable to invoke `distribute` on every DOM 
change in terms of performance, since that could easily result in O(n^2) 
behavior. This is why the proposal we (Anne, I, and others) discussed involved 
using mutation observers instead of childrenChanged lifecycle callbacks.
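A minimal, self-contained sketch of this performance point (plain JavaScript, no DOM; all names are made up for illustration): if a naive childrenChanged-style distributor re-runs a full O(n) distribution pass on every single child mutation, inserting n children one at a time costs O(n^2) total work, whereas a MutationObserver-style batch pays the O(n) pass once.

```js
let work = 0;

// Stand-in for a distribution pass: scans all of the host's children once.
function distribute(children) {
  for (const child of children) work++;
}

function insertChildrenOneByOne(n, { batched }) {
  work = 0;
  const children = [];
  for (let i = 0; i < n; i++) {
    children.push('child' + i);
    if (!batched) distribute(children); // childrenChanged-style: runs per mutation
  }
  if (batched) distribute(children);    // MutationObserver-style: runs once per batch
  return work;
}

console.log(insertChildrenOneByOne(100, { batched: false })); // 5050 ~ O(n^2)
console.log(insertChildrenOneByOne(100, { batched: true  })); // 100  = O(n)
```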

Now, frameworks such as Polymer could provide sugar on top of it by 
automatically re-distributing nodes as needed when implementing your select 
attribute.

 I personally think that Hayato's analogy to function parameters is very 
 motivating. Arguing from use-cases at this point is going to miss many things 
 because so far we've focused on the most simple of components, are having to 
 rewrite them for Polymer 0.8, and haven't seen the variety and complexity of 
 cases that will evolve naturally from the community. General expressiveness 
 is extremely important when you don't have an option to work around it - 
 redistribution is one of these cases.

Evaluating each design proposal based on a concrete use case is extremely 
important precisely because we might miss out on expressiveness in some cases 
as we're stripping down features, and we can't reject a proposal or add a 
feature for a hypothetical/theoretical need.

[1] 
https://github.com/Polymer/core-header-panel/blob/master/core-header-panel.html

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa
On Apr 27, 2015, at 4:23 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Apr 27, 2015 at 4:06 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa rn...@apple.com wrote:
 On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote:
 IMO, the appeal of this proposal is that it's a small change to the 
 current spec and avoids changing user expectations about the state of the 
 dom and can explain the two declarative proposals for distribution.
 
 It seems like with this API, we’d have to make O(n^k) calls where n is 
 the number of distribution candidates and k is the number of insertion 
 points, and that’s bad.  Or am I misunderstanding your design?
 
 I think you've understood the proposed design. As you noted, the cost is 
 actually O(n*k). In our use cases, k is generally very small.
 
 I don't think we want to introduce an O(nk) algorithm. Pretty much every 
 browser optimization we implement these days is removing O(n^2) algorithms 
 in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because 
 we can't even theoretically optimize it away.
 
 You're aware, obviously, that O(n^2) is a far different beast than
 O(nk).  If k is generally small, which it is, O(nk) is basically just
 O(n) with a constant factor applied.
 
 To make it clear: I'm not trying to troll Ryosuke here.
 
 He argued that we don't want to add new O(n^2) algorithms if we can
 help it, and that we prefer O(n).  (Uncontroversial.)
 
 He then further said that an O(nk) algorithm is sufficiently close to
 O(n^2) that he'd similarly like to avoid it.  I'm trying to
 reiterate/expand on Steve's message here, that the k value in question
 is usually very small, relative to the value of n, so in practice this
 O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
 new O(n^2) algorithms may be mistargeted here.

Thanks for the clarification. Just as Justin pointed out [1], one of the most 
important use cases of an imperative API is to dynamically insert as many insertion 
points as needed to wrap each distributed node.  In such a use case, this 
algorithm DOES result in O(n^2).

In fact, it could even result in O(n^3) behavior depending on how we spec it: 
if the user code dynamically inserted insertion points one by one, the UA would 
invoke this callback function for each insertion point and each node.  If we 
didn't re-invoke the callback, then the author would need a mechanism to let the 
UA know that the condition by which insertion points select a node has changed 
and that it needs to re-distribute all the nodes again.
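The blow-up described above can be sketched with a simple counting model (hypothetical names; `shouldDistributeToInsertionPoint` is the callback style from Steve's proposal, the rest is invented): the predicate runs once per (candidate node, insertion point) pair, so when a component mints one insertion point per node, k grows with n and the call count becomes quadratic.

```js
// Count how many predicate invocations a per-pair API performs: one
// shouldDistributeToInsertionPoint(node, insertionPoint) call per pair.
function countPredicateCalls(nodeCount, insertionPointCount) {
  let calls = 0;
  for (let p = 0; p < insertionPointCount; p++) {
    for (let n = 0; n < nodeCount; n++) {
      calls++;
    }
  }
  return calls;
}

console.log(countPredicateCalls(100, 3));   // 300: k small, effectively O(n)
console.log(countPredicateCalls(100, 100)); // 10000: one insertion point per node, O(n^2)
```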

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html



Re: Inheritance Model for Shadow DOM Revisited

2015-04-28 Thread Ryosuke Niwa

 On Apr 27, 2015, at 9:50 PM, Hayato Ito hay...@chromium.org wrote:
 
 The feature of shadow as function supports *subclassing*. That's exactly 
 the motivation I've introduced it once in the spec (and implemented it in 
 blink). I think Jan Miksovsky, co-author of Apple's proposal, knows well that.

We're (and consequently I'm) fully aware of that feature/proposal, and we still 
don't think it adequately addresses the needs of subclassing.

The problem with shadow as function is that the superclass implicitly 
selects nodes based on a CSS selector so unless the nodes a subclass wants to 
insert matches exactly what the author of superclass considered, the subclass 
won't be able to override it. e.g. if the superclass had an insertion point 
with select=input.foo, then it's not possible for a subclass to then override 
it with, for example, an input element wrapped in a span.

 The reason I reverted it from the spec (and the blink), [1], is a technical 
 difficulty to implement, though I've not proved that it's impossible to 
 implement.

I'm not even arguing about the implementation difficulty. I'm saying that the 
semantics is inadequate for subclassing.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa
I've updated the gist to reflect the discussion so far:
https://gist.github.com/rniwa/2f14588926e1a11c65d3

Please leave a comment if I missed anything.

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa

 On Apr 28, 2015, at 1:04 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 A distribute callback means running script any time we update distribution, 
 which is inside the style update phase (or event path computation phase, ...) 
 which is not a location we can run script.

That's not what Anne and the rest of us are proposing. That idea only came up 
in Steve's proposal [1] that kept the current timing of distribution.

 I also don't believe we should support distributing any arbitrary descendant, 
 that has a large complexity cost and doesn't feel like simplification. It 
 makes computing style and generating boxes much more complicated.

That certainly is a trade off. See a use case I outlined in [2].

 A synchronous childrenChanged callback has similar issues with when it's safe 
 to run script, we'd have to defer it's execution in a number of situations, 
 and it feels like a duplication of MutationObservers which specifically were 
 designed to operate in batch for better performance and fewer footguns (ex. a 
 naive childrenChanged based distributor will be n^2).

Since the current proposal is to add it as a custom element's lifecycle 
callback (i.e. we invoke it when we cross the UA code / user code boundary), this 
shouldn't be an issue. If it is indeed an issue, then we have a problem with the 
lifecycle callback that gets triggered when an attribute value is modified.

In general, I don't think we can address Steve's need to make the consistency 
guarantee [3] without running some script either synchronously or as a 
lifecycle callback in the world of an imperative API.

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0342.html
[2] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0344.html
[3] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0357.html




Re: Inheritance Model for Shadow DOM Revisited

2015-04-28 Thread Ryosuke Niwa

 On Wed, Apr 29, 2015 at 2:09 AM Ryosuke Niwa rn...@apple.com wrote:
 
  On Apr 27, 2015, at 9:50 PM, Hayato Ito hay...@chromium.org wrote:
 
  The feature of shadow as function supports *subclassing*. That's 
  exactly the motivation I've introduced it once in the spec (and 
  implemented it in blink). I think Jan Miksovsky, co-author of Apple's 
  proposal, knows well that.
 
 We're (and consequently I'm) fully aware of that feature/proposal, and we 
 still don't think it adequately addresses the needs of subclassing.
 
 The problem with shadow as function is that the superclass implicitly 
 selects nodes based on a CSS selector, so unless the nodes a subclass wants 
 to insert match exactly what the author of the superclass considered, the 
 subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select="input.foo", then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped in 
 a span.
 
  The reason I reverted it from the spec (and the blink), [1], is a 
  technical difficulty to implement, though I've not proved that it's 
  impossible to implement.
 
 I'm not even arguing about the implementation difficulty. I'm saying that 
 the semantics is inadequate for subclassing.

 On Apr 28, 2015, at 10:34 AM, Hayato Ito hay...@chromium.org wrote:
 
 Could you help me to understand what implicitly means here?

I mean that the superclass’ insertion points use a CSS selector to select nodes 
to distribute. As a result, unless the subclass can supply the exact kinds of 
nodes that match the CSS selector, it won’t be able to override the content that 
goes into the insertion point.

 In this particular case, you might want to blame the super class's author and 
 tell the author, "Please use <content select=".input-foo"> so that a subclass 
 can override it with an arbitrary element with class="input-foo"."

The problem is that it may not be possible to coordinate across the class hierarchy 
like that if the superclass was defined in a third-party library. With the 
named slot approach, the superclass only specifies the name of a slot, so the 
subclass will be able to override it with whatever element it supplies as needed.
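A sketch of the contrast being drawn here: the "shadow as function" style bakes a CSS selector into the superclass's insertion point, while a named slot only names a hole. (The markup below uses the `<slot>` syntax that was standardized later; the proposals in this thread spelled it differently, so treat it as illustrative only.)

```html
<!-- "Shadow as function" style: the selector over-specifies what may fill
     the insertion point; only an input.foo element matches. -->
<shadow-root>
  <content select="input.foo"></content>
</shadow-root>

<!-- Named-slot style: the superclass names a hole, and a subclass may fill
     it with whatever it likes, e.g. an input wrapped in a span. -->
<shadow-root>
  <slot name="foo"></slot>
</shadow-root>
<span slot="foo"><input></span>
```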

 Could you give me a concrete example which content slot can support, but 
 shadow as function can't support?

The problem isn’t so much that "content slot" can do something "shadow as function" 
can’t support. It’s that "shadow as function" promotes over-specification of 
what elements can get into its insertion points by virtue of using a CSS 
selector.

Now, it's possible that we can encourage authors to always use a class name in 
the select attribute to support this use case. But then why are we adding a 
capability that we then discourage authors from using?

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 26, 2015, at 6:11 PM, Hayato Ito hay...@chromium.org wrote:
 
 I think Polymer folks will answer the use case of re-distribution.
 
 So let me just show a good analogy so that every one can understand 
 intuitively what re-distribution *means*.
 Let me use a pseudo language and define XComponent's constructor as follows:
 
 XComponents::XComponents(Title text, Icon icon) {
   this.text = text;
   this.button = new XButton(icon);
   ...
 }
 
 Here, |icon| is *re-distributed*.
 
 In the HTML world, this corresponds to the following:
 
 The usage of the x-component element:
   <x-component>
     <x-text>Hello World</x-text>
     <x-icon>My Icon</x-icon>
   </x-component>
 
 XComponent's shadow tree is:
 
   <shadow-root>
     <h1><content select="x-text"></content></h1><!-- (1) -->
     <x-button><content select="x-icon"></content></x-button><!-- (2) -->
   </shadow-root>

I have a question as to whether x-button then has to select which nodes to use 
or not.  In this particular example at least, x-button will put every node 
distributed into (2) into a single insertion point in its shadow DOM.

If we don't have to support filtering of nodes at re-distribution time, then 
the whole discussion of re-distribution is almost moot because we can just 
treat a content element like any other element that gets distributed along with 
its distributed nodes.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 1:45 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
 On Apr 27, 2015, at 11:47 AM, Steve Orvell sorv...@google.com wrote:
 
 Here's a minimal and hopefully simple proposal that we can flesh out if this 
 seems like an interesting api direction:
 
 https://gist.github.com/sorvell/e201c25ec39480be66aa
 
 It seems like with this API, we’d have to make O(n^k)

I meant to say O(nk).  Sorry, I'm still waking up :(

 calls where n is the number of distribution candidates and k is the number of 
 insertion points, and that’s bad.  Or am I misunderstanding your design?
 
 
 We keep the currently spec'd distribution algorithm/timing but remove 
 `select` in favor of an explicit selection callback.
 
 What do you mean by keeping the currently spec’ed timing?  We certainly can’t 
 do it at “style resolution time” because style resolution is an 
 implementation detail that we shouldn’t expose to the Web, just like GC and 
 its timing is an implementation detail in JS.  Besides that, avoiding style 
 resolution is a very important optimization, and spec’ing when it happens 
 will prevent us from optimizing it away in the future.
 
 Do you mean instead that we synchronously invoke this algorithm when a child 
 node is inserted or removed from the host?  If so, that’ll impose 
 unacceptable runtime cost for DOM mutations.
 
 I think the only timing the UA can support by default will be at the end of a 
 micro task or at the UA-code / user-code boundary, as done for custom element 
 lifecycle callbacks at the moment.
 
 The user simply returns true if the node should be distributed to the given 
 insertion point.
 
 Advantages:
  * the callback can be synchronous-ish because it acts only on a specific 
 node when possible. Distribution then won't break existing expectations 
 since `offsetHeight` is always correct.
 
 “always correct” is a somewhat stronger statement than I would make here since, 
 while the UA calls these shouldDistributeToInsertionPoint callbacks, we'll 
 certainly see transient offsetHeight values.
 
 - R. Niwa
 



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 26, 2015, at 11:05 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa rn...@apple.com wrote:
 One major drawback of this API is computing insertionList is expensive
 because we'd have to either (where n is the number of nodes in the shadow
 DOM):
 
 1. Maintain an ordered list of insertion points, which results in an O(n)
 algorithm to run whenever a content element is inserted or removed.
 2. Lazily compute the ordered list of insertion points when the `distribute`
 callback is about to get called, in O(n).
 
 The alternative is not exposing it and letting developers get hold of
 the slots. The rationale for letting the browser do it is because you
 need the slots either way and the browser should be able to optimize
 better.

I don’t think that’s true.  If you’re creating a custom element, you’re pretty 
much in control of what goes into your shadow DOM.  If I’m writing any kind of 
component that creates a shadow DOM, I’d just keep references to all my 
insertion points instead of querying them each time I need to distribute nodes.

Another important use case to consider is adding insertion points given the 
list of nodes to distribute.  For example, you may want to “wrap” each node you 
distribute by an element.  That requires the component author to know the 
number of nodes to distribute upfront and then dynamically create as many 
insertion points as needed.
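A toy sketch of this "wrap each node" use case (plain objects standing in for DOM nodes; `wrapAndDistribute` is a made-up name, not a proposed API): the component has to see the full list of nodes first, then mint one wrapper and insertion point per node.

```js
// Given the nodes to distribute, create one wrapper + insertion point per node.
function wrapAndDistribute(nodesToDistribute) {
  return nodesToDistribute.map(node => ({
    wrapper: 'li',                                  // e.g. wrap each node in a list item
    insertionPoint: { distributedNodes: [node] },   // each point receives exactly one node
  }));
}

const result = wrapAndDistribute(['a', 'b', 'c']);
console.log(result.length);                             // 3 insertion points
console.log(result[0].insertionPoint.distributedNodes); // ['a']
```

Note that the number of insertion points equals the number of nodes, which is what pushes the per-pair callback cost discussed elsewhere in this thread to O(n^2).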

 If we wanted to allow a non-direct-child descendant (e.g. a grandchild node) of
 the host to be distributed, then we'd also need an O(m) algorithm where m is
 the number of nodes under the host element.  It might be okay to carry over the
 current restriction that only direct children of the shadow host can be distributed
 into insertion points, but I can't think of a good reason as to why such a
 restriction is desirable.
 
 So you mean that we'd turn distributionList into a subtree? I.e. you
 can pass all descendants of a host element to add()? I remember Yehuda
 making the point that this was desirable to him.

Consider a table-chart component which converts a table element into a chart with 
each column represented as a line graph in the chart. The user of this 
component will wrap a regular table element with a table-chart element to 
construct a shadow DOM:

```html
<table-chart>
  <table>
    ...
    <td data-value="253" data-delta="5">253 ± 5</td>
    ...
  </table>
</table-chart>
```

For people who like is attribute on custom elements, pretend it's
```html
<table is="table-chart">
  ...
  <td data-value="253" data-delta="5">253 ± 5</td>
  ...
</table>
```

Now, suppose I wanted to show a tooltip with the value in the chart. One 
obvious way to accomplish this would be distributing the td corresponding to 
the currently selected point into the tooltip. But this requires allowing 
non-direct-child nodes to be distributed.


 The other thing I would like to explore is what an API would look like
 that does the subclassing as well. Even though we deferred that to v2
 I got the impression talking to some folks after the meeting that
 there might be more common ground than I thought.

For the slot approach, we can model the act of filling a slot as if we were 
attaching a shadow root to the slot, with the slot content going into that shadow 
DOM, for both content distribution and the filling of slots by subclasses.

Now we can do this in either of the following two strategies:
1. Superclass wants to see a list of slot contents from subclasses.
2. Each subclass overrides previous distribution done by superclass by 
inspecting insertion points in the shadow DOM and modifying them as needed.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 11:47 AM, Steve Orvell sorv...@google.com wrote:
 
 Here's a minimal and hopefully simple proposal that we can flesh out if this 
 seems like an interesting api direction:
 
 https://gist.github.com/sorvell/e201c25ec39480be66aa

It seems like with this API, we’d have to make O(n^k) calls where n is the 
number of distribution candidates and k is the number of insertion points, and 
that’s bad.  Or am I misunderstanding your design?

 
 We keep the currently spec'd distribution algorithm/timing but remove 
 `select` in favor of an explicit selection callback.

What do you mean by keeping the currently spec’ed timing?  We certainly can’t 
do it at “style resolution time” because style resolution is an implementation 
detail that we shouldn’t expose to the Web, just like GC and its timing is an 
implementation detail in JS.  Besides that, avoiding style resolution is a very 
important optimization, and spec’ing when it happens will prevent us from 
optimizing it away in the future.

Do you mean instead that we synchronously invoke this algorithm when a child 
node is inserted or removed from the host?  If so, that’ll impose unacceptable 
runtime cost for DOM mutations.

I think the only timing the UA can support by default will be at the end of a 
micro task or at the UA-code / user-code boundary, as done for custom element 
lifecycle callbacks at the moment.

 The user simply returns true if the node should be distributed to the given 
 insertion point.
 
 Advantages:
  * the callback can be synchronous-ish because it acts only on a specific 
 node when possible. Distribution then won't break existing expectations since 
 `offsetHeight` is always correct.

“always correct” is a somewhat stronger statement than I would make here since, 
while the UA calls these shouldDistributeToInsertionPoint callbacks, we'll 
certainly see transient offsetHeight values.

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 2:38 PM, Hayato Ito hay...@chromium.org wrote:
 
 On Tue, Apr 28, 2015 at 6:18 AM Ryosuke Niwa rn...@apple.com wrote:
 
  On Apr 26, 2015, at 6:11 PM, Hayato Ito hay...@chromium.org wrote:
 
  I think Polymer folks will answer the use case of re-distribution.
 
  So let me just show a good analogy so that every one can understand 
  intuitively what re-distribution *means*.
  Let me use a pseudo language and define XComponent's constructor as follows:
 
  XComponent::XComponent(Title text, Icon icon) {
    this.text = text;
    this.button = new XButton(icon);
    ...
  }
 
  Here, |icon| is *re-distributed*.
 
  In HTML world, this corresponds the followings:
 
  The usage of the x-component element:
    <x-component>
      <x-text>Hello World</x-text>
      <x-icon>My Icon</x-icon>
    </x-component>
 
  XComponent's shadow tree is:

    <shadow-root>
      <h1><content select="x-text"></content></h1> <!-- (1) -->
      <x-button><content select="x-icon"></content></x-button> <!-- (2) -->
    </shadow-root>
 
 I have a question as to whether x-button then has to select which nodes to 
 use or not.  In this particular example at least, x-button will put every 
 node distributed into (2) into a single insertion point in its shadow DOM.
 
 If we don't have to support filtering of nodes at re-distribution time, then 
 the whole discussion of re-distribution is almost moot because we can just 
 treat a content element like any other element that gets distributed along 
 with its distributed nodes.
 
 
 x-button can select.
 You might want to take a look at the distribution algorithm [1], where the 
 behavior is well defined.

I know we can in the current spec, but should we support it?  What are the 
concrete use cases in which x-button or other components need to select nodes 
in the nested shadow DOM case?

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote:
 
 IMO, the appeal of this proposal is that it's a small change to the current 
 spec, avoids changing user expectations about the state of the DOM, and can 
 explain the two declarative proposals for distribution.
 
 It seems like with this API, we’d have to make O(n^k) calls where n is the 
 number of distribution candidates and k is the number of insertion points, 
 and that’s bad.  Or am I misunderstanding your design?
 
 I think you've understood the proposed design. As you noted, the cost is 
 actually O(n*k). In our use cases, k is generally very small.

I don't think we want to introduce an O(nk) algorithm. Pretty much every browser 
optimization we implement these days involves removing O(n^2) algorithms in 
favor of O(n) algorithms. Hard-baking O(nk) behavior into the platform is bad 
because we can't even theoretically optimize it away.
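To make that cost concrete, here is a minimal sketch (names such as `shouldDistribute` are illustrative, not taken from the proposal) of a distribution pass that consults a per-pair predicate; with n candidate nodes and k insertion points it performs up to n * k predicate calls:

```javascript
// Hypothetical sketch: distribute candidate nodes to insertion points
// by asking a user-supplied predicate for each (node, insertionPoint)
// pair. With n candidates and k insertion points this makes up to
// n * k predicate calls, which is the cost being objected to above.
function distribute(candidates, insertionPoints, shouldDistribute) {
  const assignment = new Map(); // insertionPoint -> nodes assigned to it
  for (const point of insertionPoints) assignment.set(point, []);
  for (const node of candidates) {
    for (const point of insertionPoints) {
      if (shouldDistribute(node, point)) {
        assignment.get(point).push(node);
        break; // a node is distributed to at most one insertion point
      }
    }
  }
  return assignment;
}
```

Even with the early `break`, the worst case, where no candidate matches any insertion point, still makes all n * k calls, which is exactly the behavior that cannot be optimized away.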

 Do you mean instead that we synchronously invoke this algorithm when a child 
 node is inserted or removed from the host?  If so, that’ll impose 
 unacceptable runtime cost for DOM mutations.
 I think the only timing the UA can support by default will be at the end of a 
 micro task or at the UA-code / user-code boundary, as done for custom element 
 lifecycle callbacks at the moment.
 Running this callback at the UA-code/user-code boundary seems like it would 
 be fine. Running the more complicated "distribute all the nodes" proposals at 
 this time would obviously not be feasible. The notion here is that since 
 we're processing only a single node at a time, this can be done after an 
 atomic DOM action.

Indeed, running such an algorithm each time a node is inserted or removed would 
be quite expensive.

 "Always correct" is a somewhat stronger statement than I would make here, 
 since while the UA calls these shouldDistributeToInsertionPoint callbacks, 
 we'll certainly see transient offsetHeight values.
 
 Yes, you're right about that. Specifically, it would be bad to try to read 
 `offsetHeight` in this callback; this would be an anti-pattern. If that's 
 not good enough, perhaps we can explore not working directly with 
 the node but instead with the subset of information necessary to decide 
 on distribution.

I'm not necessarily saying that it's not good enough.  I'm just saying that it 
is possible to observe such a state even with this API.

 Can you explain, under the initial proposal, how a user can ask an element's 
 dimensions and get the post-distribution answer? With current dom api's I can 
 be sure that if I do parent.appendChild(child) and then parent.offsetWidth, 
 the answer takes child into account. I'm looking to understand how we don't 
 violate this expectation when parent distributes. Or if we violate this 
 expectation, what is the proposed right way to ask this question?

You don't get that guarantee in the design we discussed on Friday [1] [2]. In 
fact, we basically deferred the timing issue to other APIs that observe DOM 
changes, namely mutation observers and custom element lifecycle callbacks. 
Each component uses those APIs to call distribute().

 In addition to rendering information about a node, distribution also affects 
 the flow of events. So a similar question: when is it safe to call 
 child.dispatchEvent such that if parent distributes elements to its 
 shadowRoot, elements in the shadowRoot will see the event?

Again, the timing was deferred in [1] and [2] so it really depends on when each 
component decides to distribute.

- R. Niwa

[1] https://gist.github.com/rniwa/2f14588926e1a11c65d3
[2] https://gist.github.com/annevk/e9e61801fcfb251389ef




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 4:41 PM, Steve Orvell sorv...@google.com wrote:
 
 Again, the timing was deferred in [1] and [2] so it really depends on when 
 each component decides to distribute.
 
 I want to be able to create an element x-foo that acts like other dom 
 elements. This element uses Shadow DOM and distribution to encapsulate its 
 details.
 
 Let's imagine a 3rd party user named Bob that uses div and x-foo. Bob 
 knows he can call div.appendChild(element) and then immediately ask 
 div.offsetHeight and know that this height includes whatever the added 
 element should contribute to the div's height. Bob expects to be able to do 
 this with the x-foo element also since it is just another element from his 
 perspective.
 
 How can I, the author of x-foo, craft my element such that I don't violate 
 Bob's expectations? Does your proposal support this?

In order to support this use case, the author of x-foo must use some mechanism 
to observe changes to x-foo's child nodes and invoke `distribute` 
synchronously.  This will become possible, for example, if we added a 
childrenChanged lifecycle callback to custom elements.

That might be an acceptable mode of operation. If you want to synchronously 
update your insertion points, rely on custom elements' lifecycle callbacks, and 
you can only support direct children for distribution. Alternatively, if you 
want to support distributing a non-direct-child descendant, use 
mutation observers to do it at the end of a micro task.
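As a rough sketch of that mutation-observer approach (everything here is illustrative; `DistributionScheduler` is not part of any proposal), the key point is that many mutations coalesce into a single `distribute()` call per batch:

```javascript
// Hypothetical sketch: coalesce mutation notifications so a component's
// distribute() runs once per batch, approximating the "end of micro
// task" timing discussed above. In a real component the flush would be
// queued with queueMicrotask() from a MutationObserver callback; it is
// exposed synchronously here to keep the sketch easy to follow.
class DistributionScheduler {
  constructor() {
    this.pending = new Set(); // components whose distribution is dirty
  }
  invalidate(component) {    // called once per observed mutation
    this.pending.add(component);
  }
  flush() {                  // stands in for the microtask checkpoint
    for (const component of this.pending) component.distribute();
    this.pending.clear();
  }
}
```

The design choice being illustrated is that the number of `distribute()` calls is bounded by the number of dirty components, not by the number of mutations.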

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 3:31 PM, Hayato Ito hay...@chromium.org wrote:
 
 I think there are a lot of user operations where distribution must be updated 
 before returning a meaningful result synchronously. Unless the distribution 
 result is correctly updated, users would observe stale results.

Indeed.

 For example:
 - element.offsetWidth: style resolution requires distribution. We must 
 update distribution, if it's dirty, before calculating offsetWidth 
 synchronously.
 - event dispatching: the event path requires distribution because it needs 
 the composed tree.
 
 Are the current HTML/DOM specs rich enough to explain the timing at which the 
 imperative APIs should run in these cases?

It certainly doesn't tell us when style resolution happens. In the case of 
event dispatching, such a guarantee is impossible even in theory unless we 
somehow disallow event dispatching within `distribute` callbacks, since the 
callbacks that decide where a given node gets distributed can themselves 
dispatch new events. Given that, I don't think we should even try to make such 
a guarantee.

We could, however, make a slightly weaker guarantee: some level of consistency 
for user code outside of `distribute` callbacks. For example, I can think of 
three levels (weakest to strongest) of self-consistent invariants:
1. Every node is distributed to at most one insertion point.
2. All first-order distributions are up-to-date (redistribution may happen 
later).
3. All distributions are up-to-date.

 For me, the imperative APIs for distribution sounds very similar to the 
 imperative APIs for style resolution. The difficulties of both problems might 
 be similar.

We certainly don't want to (in fact, we'll object to) spec the timing of style 
resolution, or even what style resolution means.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

 On Apr 27, 2015, at 5:43 PM, Steve Orvell sorv...@google.com wrote:
 
 That might be an acceptable mode of operation. If you want to synchronously 
 update your insertion points, rely on custom elements' lifecycle callbacks, 
 and you can only support direct children for distribution.
 
 That's interesting, thanks for working through it. Given a `childrenChanged` 
 callback, I think your first proposal `content.insertAt` and 
 `content.remove` best supports a synchronous mental model. As you note, 
 re-distribution is then the element author's responsibility. This would be 
 done by listening to the synchronous `distributionChanged` event. That seems 
 straightforward.
 
 Mutations that are not captured by childrenChanged but that can affect 
 distribution would still be a problem, however. Given:
 
 <div id="host">
   <div id="child"></div>
 </div>
 
 child.setAttribute('slot', 'a');
 host.offsetHeight;
 
 Again, we are guaranteed that the parent's offsetHeight includes any 
 contribution that adding the slot attribute caused (e.g. via a #child[slot=a] 
 rule).
 
 If the `host` is a custom element that uses distribution, would it be 
 possible to have this same guarantee?
 
 <x-foo id="host">
   <div id="child"></div>
 </x-foo>
 
 child.setAttribute('slot', 'a');
 host.offsetHeight;

That's a good point. Perhaps we need to make childrenChanged optionally get 
called when attributes of child nodes change, just like the way you can 
configure mutation observers to optionally monitor attribute changes.
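A sketch of how a component might consume such notifications (the record shape mimics MutationObserver's MutationRecord; the helper name is made up, and none of this is specced):

```javascript
// Hypothetical helper for a childrenChanged-style callback that treats
// both child-list changes and `slot` attribute changes on children as
// invalidating distribution, mirroring the opt-in suggested above.
function needsRedistribution(records) {
  return records.some(record =>
    record.type === 'childList' ||
    (record.type === 'attributes' && record.attributeName === 'slot'));
}
```

With this shape, `child.setAttribute('slot', 'a')` would produce an attribute record and trigger redistribution, while unrelated attribute changes would not.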

- R. Niwa




Inheritance Model for Shadow DOM Revisited

2015-04-27 Thread Ryosuke Niwa
Note: Our current consensus is to defer this until v2.

 On Apr 27, 2015, at 9:09 PM, Hayato Ito hay...@chromium.org wrote:
 
 For the record, I, as a spec editor, still think "a Shadow Root hosts yet 
 another Shadow Root" is the best idea among all the ideas I've ever seen, with 
 "shadow as function", because it can explain everything in a unified way 
 using a single tree of trees, without bringing in yet more complexity such as 
 multiple templates.
 
 Please see 
 https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22

That's a great mental model for multiple generations of shadow DOM, but it 
doesn't solve any of the problems with the API itself.  As I've repeatedly stated 
in the past, the problem is the order of transclusion.  Quoting from [1]:

The `shadow` element is optimized for wrapping a base class, not filling it 
in. In practice, no subclass ever wants to wrap their base class with 
additional user interface elements. A subclass is a specialization of a base 
class, and specialization of UI generally means adding specialized elements in 
the middle of a component, not wrapping new elements outside some inherited 
core.

In the three component libraries [2] described above, the only case in which a 
subclass uses `shadow` is when the subclass wants to add additional styling. 
That is, a subclass wants to override base class styling, and can do so via:

  ```
  <template>
    <style>subclass styles go here</style>
    <shadow></shadow>
  </template>
  ```

One rare exception is `core-menu` [3], which does add some components in a 
wrapper around a `shadow`. However, even in that case, the components in 
question are instances of `core-a11y-keys`, a component which defines 
keyboard shortcuts. That is, the component is not using this wrapper ability to 
add visible user interface elements, so the general point stands.

As with the above point, the fact that no practical component has need for this 
ability to wrap an older shadow tree suggests the design is solving a problem 
that does not, in fact, exist in practice.


[1] 
https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
[2] Polymer’s core- elements, Polymer’s paper- elements, and the Basic Web 
Components’ collection of basic- elements
[3] 
https://github.com/Polymer/core-menu/blob/master/core-menu.html

- R. Niwa



