Re: App-to-App interaction APIs - one more time, with feeling

2015-10-21 Thread Daniel Buchner
I believe you may be conflating two rather distinct code activities. What this 
API (or one like it) would provide is the ability for an app to form any sort 
of request, whether it originates from user input or from the app's own code/needs, 
and have the user's preferred handler deal with the request.

You keep talking about copy/paste of a math formula as if it's a gotcha - but 
at this point, I have no idea how it affects this API. To aid understanding, 
let me address your hypothetical by walking through how the proposed API would 
deal with it:

Let's say an app had a text input for math formulas, and the user entered one 
via typing, copy/paste, speech, whatever, it doesn't matter. The app itself 
doesn't know how to handle math formulas, so it creates a protocol worker to 
open a connection to the user's web+math provider. Let's imagine the user has 
preselected Wolfram Alpha. A connection would be opened between the app and 
Wolfram, the app then sends a request/payload to Wolfram, and Wolfram does 
whatever the generally agreed upon action is for handling this type of web+math 
request. Once Wolfram is finished, it sends the response data back to the app 
over the protocol handler connection.
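
A rough sketch of the requesting app's side of that flow, assuming a hypothetical 
connection-style API layered on web+ protocol handlers (none of the names below 
are spec'd; this is purely illustrative):

    // Hypothetical sketch - 'connectToHandler', the message shape, and renderResult()
    // are illustrative placeholders, not existing APIs.
    var request = { action: 'evaluate', formula: 'x^2 + 2x + 1' };

    navigator.connectToHandler('web+math').then(function (connection) {
      connection.postMessage(request);          // send the payload to the user's chosen provider
      connection.onmessage = function (event) {
        renderResult(event.data);               // the provider (e.g. Wolfram) sends its response back
      };
    });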

I'm *really* trying to understand what you believe this API lacks for handling 
the flow listed above. To end the status quo, where every site on the Web hard-codes 
its activities to a few lucky providers who outspend the others to win the 
dev marketing Olympics, you must have a mechanism in place that allows users to 
add, select, and manage providers, and that connects to the user's providers for 
handling of the associated activities - full stop.

Please consider carefully the above detailed explanation and let me know if 
there's anything left that's unclear.

- Daniel



On Wed, Oct 21, 2015 at 7:55 AM -0700, "Paul Libbrecht" 
<p...@hoplahup.net> wrote:

Hello Daniel,

Maybe it can be put like this: copy and paste lets you choose where you 
paste and what you paste; protocol handlers don't. Here's a more detailed 
answer.

With a mathematical formula at hand, you can do a zillion things; assuming 
there is a single thing to do with it is not reasonable, even temporarily. For 
example, a very normal workflow could be the following: copy from a web page, 
paste into a computation engine, adjust, derive, paste into a dynamic geometry 
tool, then paste one of the outputs into a mail.
Providing configurable protocol handlers, even at the finest granularity, is not a 
solution to this workflow, I feel.

Providing dialogs to ask the user where he wants the information at hand to be 
opened gets closer, but there is still the notion of selection and cursor, which 
protocol handlers do not seem ready to support.

However, I believe that copy-and-paste (and drag-and-drop) is part of any 
app-to-app interaction API.

Paul


Daniel Buchner <dabuc...@microsoft.com>
20 October 2015 18:36
I’m trying to understand exactly why you see your example (“there was a person 
who invented a "math" protocol handler. For him it meant that formulæ be read 
out loud (because his mission is making the web accessible to people with 
disabilities including eyes) but clearly there was no way to bring a different 
target.”) as something this initiative is blocked by or cannot serve.

If you were to create a custom, community-led protocol definition for math 
equation handling, like web+math, apps would send a standard payload of 
semantic data, as defined here: 
http://schema.org/Code, and it would be handled by whatever app the user had installed to handle it. 
Given the handler at the other end is sending back data that would be displayed 
in the page, there’s no reason JAWS or any other accessibility app would be 
blocked from reading its output – on either side of the connection.
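
For the provider side, the existing navigator.registerProtocolHandler API already 
covers the "user picks a handler" step; a minimal sketch (the handler URL is 
illustrative, and a provider would call this from its own origin):

    // A math provider advertising itself as a web+math handler; '%s' is
    // replaced by the UA with the escaped request payload.
    navigator.registerProtocolHandler(
      'web+math',
      'https://www.wolframalpha.com/handle?payload=%s',
      'Wolfram Alpha'
    );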

I can’t really make sense of this part of your email, can you clarify? --> 
“Somehow, I can't really be convinced by such a post except asking the user 
what is the sense of a given flavour or even protocol handler which, as we 
know, is kind of error-prone. Agree?” Asking the user what sense of a given 
protocol? Are you saying we can’t ask users what apps they want to have handle 
various actions? If so, we do this all the time, in every OS on the planet, and 
I wouldn’t say that simple process is error prone. Maybe I am misunderstanding 
you?

- Daniel

From: Paul Libbrecht [mailto:p...@hoplahup.net]
Sent: Sunday, October 18, 2015 9:38 AM
To: Daniel Buchner <dabuc...@microsoft.com>
Cc: public-webapps@w3.org
Subject: Re: App-to-App interaction APIs - one more time, with feeling

RE: App-to-App interaction APIs - one more time, with feeling

2015-10-20 Thread Daniel Buchner
I'm trying to understand exactly why you see your example ("there was a person 
who invented a "math" protocol handler. For him it meant that formulæ be read 
out loud (because his mission is making the web accessible to people with 
disabilities including eyes) but clearly there was no way to bring a different 
target.") as something this initiative is blocked by or cannot serve.

If you were to create a custom, community-led protocol definition for math 
equation handling, like web+math, apps would send a standard payload of 
semantic data, as defined here: http://schema.org/Code, and it would be handled 
by whatever app the user had installed to handle it. Given the handler at the 
other end is sending back data that would be displayed in the page, there's no 
reason JAWS or any other accessibility app would be blocked from reading its 
output - on either side of the connection.

I can't really make sense of this part of your email, can you clarify? --> 
"Somehow, I can't really be convinced by such a post except asking the user 
what is the sense of a given flavour or even protocol handler which, as we 
know, is kind of error-prone. Agree?" Asking the user what sense of a given 
protocol? Are you saying we can't ask users what apps they want to have handle 
various actions? If so, we do this all the time, in every OS on the planet, and 
I wouldn't say that simple process is error prone. Maybe I am misunderstanding 
you?

- Daniel

From: Paul Libbrecht [mailto:p...@hoplahup.net]
Sent: Sunday, October 18, 2015 9:38 AM
To: Daniel Buchner <dabuc...@microsoft.com>
Cc: public-webapps@w3.org
Subject: Re: App-to-App interaction APIs - one more time, with feeling

Daniel,

as far as I can read the post, copy-and-paste-interoperability would be a 
"sub-task" of this.
It's not a very small task though.
In my world, e.g., there was a person who invented a "math" protocol handler. 
For him it meant that formulæ be read out loud (because his mission is making 
the web accessible to people with disabilities including eyes) but clearly 
there was no way to bring a different target.

Somehow, I can't really be convinced by such a post except asking the user what 
is the sense of a given flavour or even protocol handler which, as we know, is 
kind of error-prone. Agree?

paul

PS: I'm still struggling to get the geo URL scheme properly handled, but it 
works for me in a very, very tiny spectrum of apps (GMaps > 
Hand-edited-HTML-in-Mails-through-Postbox > Blackberry Hub > Osmand). This is 
certainly a good example of a difficult sequence of choices.


Daniel Buchner <dabuc...@microsoft.com>
14 October 2015 18:33
Hey WebAppers,

Just ran into this dragon for the 1,326th time, so I thought I would do a 
write-up to rekindle discussion on this important area of developer need that the 
platform currently fails to address: 
http://www.backalleycoder.com/2015/10/13/app-to-app-interaction-apis/. 
We have existing APIs/specs that get relatively close, and my first instinct 
would be to leverage those and extend their capabilities to cover the broader 
family of use-cases highlighted in the post.

I welcome your ideas, feedback, and commentary,

- Daniel



RE: App-to-App interaction APIs - one more time, with feeling

2015-10-15 Thread Daniel Buchner
After publishing the post, Google reached out and we've been discussing 
options for solving this – would you like those discussions to be on the ML, or 
back-channeled?

- Daniel

From: Samsung account [mailto:bnw6...@gmail.com]
Sent: Thursday, October 15, 2015 9:26 AM
To: Arthur Barstow <art.bars...@gmail.com>
Cc: Daniel Buchner <dabuc...@microsoft.com>; public-webapps@w3.org
Subject: Re: App-to-App interaction APIs - one more time, with feeling


On 2015/10/15 at 11:58 PM, "Arthur Barstow" <art.bars...@gmail.com> wrote:
>
> On 10/14/15 12:33 PM, Daniel Buchner wrote:
>>
>>
>> Hey WebAppers,
>>
>> Just ran into this dragon for the 1,326th time, so thought I would do a 
>> write-up to rekindle discussion on this important area of developer need the 
>> platform currently fails to address: 
>> http://www.backalleycoder.com/2015/10/13/app-to-app-interaction-apis/.
>>  We have existing APIs/specs that get relatively close, and my first 
>> instinct would be to leverage those and extend their capabilities to cover 
>> the broader family of use-cases highlighted in the post.
>>
>> I welcome your ideas, feedback, and commentary,
>>
>
> Hi Daniel,
>
> In case you haven't done so already, perhaps the Web Platform Incubation 
> Group's Discourse service would be a "better" place to discuss your proposal 
> <http://discourse.wicg.io/>?
>
> --
> AB
>
>


Re: [HTML Imports]: Sync, async, -ish?

2013-11-27 Thread Daniel Buchner
Right on Dimitri, I couldn't agree more. It seems like an involved (but
highly beneficial) pursuit - but heck, maybe we'll find an answer quickly,
let's give it a shot!

Alex, I completely agree that declarative features should play a huge role
in the solution, and I love the power/granularity you're alluding to in
your proposal. WARNING: the following may be completely lol-batshit-crazy,
so be nice! (remember, I'm not *really* a CS person...I occasionally play
one on TV). What if we created something like this:

<head>
  <paint policy="blocking">  <!-- non-blocking would be the default policy -->
    <link rel="import" href="first-load-components.html" />
    <script>
      // Some script here that is required for initial setup of or interaction
      // with the custom elements imported from first-load-components.html
    </script>
  </paint>
</head>

<body>

  <section>
    <!-- content here is subject to default browser paint flow -->
  </section>

  <aside>
    <paint framerate="5">
      <!-- this content is essentially designated as low-priority,
           but framerate="5" could also be treated as a lower-bound target. -->
    </paint>
  </aside>

</body>


Here's what I intended in the example above:

   - A <paint> element would allow devs to easily, and explicitly, wrap
     multiple elements with their own paint settings. *(You could also use
     attributes, I suppose, but this way it is easy for someone new to the code
     to Jump Right In™.)*
   - If there were a <paint> element, we could build in a ton of tunable,
     high-precision features that are easy to manipulate from all contexts

I'm going to duck now - I anticipate things will soon be thrown at me.

- Daniel


On Wed, Nov 27, 2013 at 11:03 AM, Alex Russell slightly...@google.com wrote:

 On Wed, Nov 27, 2013 at 9:46 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Stepping back a bit, I think we're struggling to ignore the elephant in
 the room. This elephant is the fact that there's no specification (or API)
 that defines (or provides facilities to control) when rendering happens.
 And for that matter, what rendering means.

 The original reason why script blocks execution until imports are
 loaded was not even related to rendering. It was a simple solution to an
 ordering problem -- if I am inside a script block, I am assured that any
 script before it had also run (whether it came from imports or not). It's
 the same reason why ES modules need a new HTML element (or script type at
  the very least).

  Blocking rendering was a side effect, since we simply took the
 plumbing from stylesheets.

 Then, events took a bewildering turn. Suddenly, this side effect turned
 into a feature/bug and now we're knee-deep in the sync-vs-async argument.
  And that's why all solutions look bad.

 With elements attribute, we're letting the user of the import pick the
 poison they prefer (would you like your page to be slow or would you rather
 it flash spastically?)

 With sync or async attribute, we're faced with an enormous
 responsibility of predicting the right default for a new feature. Might
 as well flip a coin there.

 I say we call out the elephant.


 Agree entirely. Most any time we get into a situation where the UA can't
 do the right thing it's because we're trying to have a debate without all
 the information. There's a big role for us to play in setting defaults one
 way or the other, particularly when they have knock-on optimization
 effects, but that's something we know how to do.


 We need an API to control when things appear on screen. Especially, when
 things _first_ appear on screen.


 +1000!!!

 I'll take a stab at it. To prevent running afoul of existing heuristics in
 runtimes regarding paint, I suggest this be declarative. That keeps us from
 blocking anything based on a script element. To get the engine into the
 right mode as early as possible, I also suggest it be an attribute on an
 early element (<html>, <link>, or <meta>). Using <meta http-equiv="...">
 gives us a hook into possibly exposing the switch as an HTTP header,
 although it makes any API less natural as we don't then have a place in the
 DOM to hang it from.
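
 (Purely as an illustration of that declarative shape - the meta name and value
 below are made up, nothing here is spec'd:)

   <!-- hypothetical: an early, declarative opt-in to manual first-paint control -->
   <meta http-equiv="x-paint-control" content="hold-first-paint">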

 In terms of API capabilities, we can cut this a couple of ways (not
 entirely exclusive):


1. Explicit paint control, all the time, every time. This is very
unlike the current model and, on pages that opt into it, would make them
entirely dependent on JS for getting things on screens.
   1. This opens up a question of scoping: should all paints be
    blocked? Only for some elements? Should layouts be delayed until paints are
    requested? Since layouts are difficult to scope, what does paint scoping
   mean for them?
   2. An alternative might be a flag that's a one-time edge trigger:
   something that delays the *first* paint and, via an API, perhaps other
   upcoming paints, but which does not block the course of regular
   painting/layout.
   3. We would want to ensure that any API doesn't 

Re: [HTML Imports]: Sync, async, -ish?

2013-11-27 Thread Daniel Buchner
JJB, this is precisely why the <paint> concept seemed like a good idea to
me:

   - Easy to use in just one or two places if needed, without a steep cliff
   - The choice shouldn't be: either put up with the browser's default
     render flow, or become a low-level, imperative, perf hacker
   - Enables load/render/paint tuning of both graphical and non-visible,
     purely-functional elements
   - Flexible enough to allow for complex cases, while being (relatively)
     easy to grok for beginners
   - Doesn't require devs to juggle a mix of declarative, top-level
     settings, and imperative, per-element settings

- Daniel

On Wed, Nov 27, 2013 at 12:19 PM, John J Barton johnjbar...@johnjbarton.com
 wrote:

 I just can't help thinking this whole line of reasoning is all too
 complicated to achieve wide adoption and thus impact.

 The supposed power of declarative languages is the ability to reason from top
 to bottom. Creating all of these exceptions causes the very problems being
 discussed: FOUC occurs because HTML Import runs async even though it looks
 like it is sync. Then we patch that up with e.g. elements and paint.

 On the other hand, JS has allowed very sophisticated application loading
 to be implemented. If the async HTML Import were done with JS and if we
 added (if needed) rendering control support to JS, then we allow high
 function sites complete control of the loading sequence.

 I think we should be asking: what can we do to have the best chance that
 most sites will show reasonable default content while loading on mobile
 networks? A complex solution with confusing order of operations is fine
 for some sites, let them do it in JS. A declarative solution where default
 content appears before high-function content seems more likely to succeed
 for the rest. A complex declarative solution seems like the worst of both.
 HTH,
 jjb


 On Wed, Nov 27, 2013 at 11:50 AM, Daniel Buchner dan...@mozilla.com wrote:

 Right on Dimitri, I couldn't agree more. It seems like an involved (but
 highly beneficial) pursuit - but heck, maybe we'll find an answer quickly,
 let's give it a shot!

 Alex, I completely agree that declarative features should play a huge
 role in the solution, and I love the power/granularity you're alluding to
 in your proposal. WARNING: the following may be completely
 lol-batshit-crazy, so be nice! (remember, I'm not *really* a CS
 person...I occasionally play one on TV). What if we created something like
 this:

 <head>
   <paint policy="blocking">  <!-- non-blocking would be the default policy -->
     <link rel="import" href="first-load-components.html" />
     <script>
       // Some script here that is required for initial setup of or interaction
       // with the custom elements imported from first-load-components.html
     </script>
   </paint>
 </head>

 <body>

   <section>
     <!-- content here is subject to default browser paint flow -->
   </section>

   <aside>
     <paint framerate="5">
       <!-- this content is essentially designated as low-priority,
            but framerate="5" could also be treated as a lower-bound target. -->
     </paint>
   </aside>

 </body>


 Here's what I intended in the example above:

 - A <paint> element would allow devs to easily, and explicitly, wrap
   multiple elements with their own paint settings. *(You could also
   use attributes, I suppose, but this way it is easy for someone new to the
   code to Jump Right In™.)*
 - If there were a <paint> element, we could build in a ton of tunable,
   high-precision features that are easy to manipulate from all contexts

 I'm going to duck now - I anticipate things will soon be thrown at me.

 - Daniel


 On Wed, Nov 27, 2013 at 11:03 AM, Alex Russell slightly...@google.com wrote:

 On Wed, Nov 27, 2013 at 9:46 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Stepping back a bit, I think we're struggling to ignore the elephant in
 the room. This elephant is the fact that there's no specification (or API)
 that defines (or provides facilities to control) when rendering happens.
 And for that matter, what rendering means.

 The original reason why script blocks execution until imports are
 loaded was not even related to rendering. It was a simple solution to an
 ordering problem -- if I am inside a script block, I am assured that any
 script before it had also run (whether it came from imports or not). It's
 the same reason why ES modules need a new HTML element (or script type at
  the very least).

  Blocking rendering was a side effect, since we simply took the
 plumbing from stylesheets.

 Then, events took a bewildering turn. Suddenly, this side effect turned
 into a feature/bug and now we're knee-deep in the sync-vs-async argument.
  And that's why all solutions look bad.

 With elements attribute, we're letting the user of the import pick
 the poison they prefer (would you like your page to be slow or would you
 rather it flash spastically?)

 With sync or async attribute, we're faced with an enormous

Re: [HTML Imports]: Sync, async, -ish?

2013-11-22 Thread Daniel Buchner
Personally I don't have any issues with this solution; it provides for the
use-cases we face. Also, it isn't without precedent - you can opt for a
sync XMLHttpRequest (not much different).

The best part of an explicit 'sync' attribute is that we can now remove
the "block if a script comes after an import" condition, right Dimitri?
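
For concreteness, a minimal sketch of the opt-in being discussed, assuming the
proposed (not shipped) 'sync' attribute:

    <link rel="import" href="critical-components.html" sync>  <!-- blocks rendering until loaded -->
    <link rel="import" href="extras.html">                     <!-- default: loads without blocking -->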

- Daniel
 On Nov 22, 2013 8:05 AM, John J Barton johnjbar...@johnjbarton.com
wrote:

 I agree that we should allow developers to set 'sync' attribute on link
 tags to block rendering until load. That will allow them to create sites
 that appear to load slowly rather than render their standard HTML/CSS.

 I think that the default should be the current solution and 'sync' should
 be opt-in. Developers may choose:
1. Do nothing. The site looks fine when it renders before the
 components arrive.
2. Add small static content fixes. The site looks fine after a few
 simple HTML / CSS adjustments.
3. Add 'sync', the site flashes too much, let it block.
 This progression is the best for users.

 jjb


 On Thu, Nov 21, 2013 at 5:04 PM, Steve Souders soud...@google.com wrote:

 DanielF: You would only list the custom tags that should be treated as
 blocking. If *every* tag in Brick and Polymer should be blocking, then we
 have a really big issue because right now they're NOT-blocking and there's
 nothing in Web Components per se to specify a blocking behavior.

 JJB: Website owners aren't going to be happy with either situation:
   - If custom tags are async (backfilled) by default and the custom tag
 is a critical part of the page, subjecting users to a page that suddenly
 changes layout isn't good.
   - If custom tags (really HTML imports) are sync (block rendering) by
 default, then users stare at a blank screen during slow downloads.

 I believe we need to pick the best default while also giving developers
 the ability to choose what's best for them. Right now I don't see a way for
 a developer to choose to have a custom element block rendering, as opposed
 to be backfilled later. Do we think this is important? (I think so.) If so,
 what's a good way to let web devs make custom elements block?

 -Steve



 On Thu, Nov 21, 2013 at 3:07 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Ok, so my 2 cents: it's ok but it gives a very Web 1.0 solution. We had
 to invent AJAX so developers could control the user experience in the face
 of significant network delay. As I said earlier, most apps will turn this
 problem over to the design team rather than cause users to leave while the
 browser spins waiting for the page to render.


  On Thu, Nov 21, 2013 at 3:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 Yes, that's the primary motivation. Getting FUC'd is going to be a
 non-starter for serious app developers. We were just thinking of ways to
 satisfy the use-case without undue burden.







Re: [HTML Imports]: Sync, async, -ish?

2013-11-22 Thread Daniel Buchner
I'm not talking about the script blocking as usual - I'm referencing the
presence of a script causing the import to block until completed, when the
script follows it.


On Fri, Nov 22, 2013 at 8:57 AM, John J Barton
johnjbar...@johnjbarton.com wrote:




  On Fri, Nov 22, 2013 at 8:22 AM, Daniel Buchner dan...@mozilla.com wrote:

 Personally I don't have any issues with this solution, it provides for
 the use-cases we face. Also, it isn't without precedent - you can opt for a
 sync XMLHttpRequest (not much different).

 The best part of an explicit 'sync' attribute, is that we can now remove
 the block if a script comes after an import condition, right Dimitri?

 As far as I know, script already blocks rendering and I don't think even
 Dimitri can change that ;-)  Blocking script until HTML Import succeeds is
 not needed as we discussed earlier: scripts that want to run after Import
 already have an effective and well known mechanism to delay execution,
 listening for load events.
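
  (For reference, the load-event mechanism referred to here looks roughly like
  this; the import file name and handler body are illustrative:)

    <link rel="import" href="components.html" id="comps">
    <script>
      // run only once the import has finished loading
      document.querySelector('#comps').addEventListener('load', function () {
        // safe to touch the imported content / registered elements here
      });
    </script>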


 - Daniel
  On Nov 22, 2013 8:05 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:

 I agree that we should allow developers to set 'sync' attribute on
 link tags to block rendering until load. That will allow them to create
 sites that appear to load slowly rather than render their standard
 HTML/CSS.

 I think that the default should be the current solution and 'sync'
 should be opt-in. Developers may choose:
1. Do nothing. The site looks fine when it renders before the
 components arrive.
2. Add small static content fixes. The site looks fine after a few
 simple HTML / CSS adjustments.
3. Add 'sync', the site flashes too much, let it block.
 This progression is the best for users.

 jjb


  On Thu, Nov 21, 2013 at 5:04 PM, Steve Souders soud...@google.com wrote:

 DanielF: You would only list the custom tags that should be treated as
 blocking. If *every* tag in Brick and Polymer should be blocking, then we
 have a really big issue because right now they're NOT-blocking and there's
 nothing in Web Components per se to specify a blocking behavior.

 JJB: Website owners aren't going to be happy with either situation:
   - If custom tags are async (backfilled) by default and the custom tag
 is a critical part of the page, subjecting users to a page that suddenly
 changes layout isn't good.
   - If custom tags (really HTML imports) are sync (block rendering) by
 default, then users stare at a blank screen during slow downloads.

 I believe we need to pick the best default while also giving developers
 the ability to choose what's best for them. Right now I don't see a way for
 a developer to choose to have a custom element block rendering, as opposed
 to be backfilled later. Do we think this is important? (I think so.) If so,
 what's a good way to let web devs make custom elements block?

 -Steve



 On Thu, Nov 21, 2013 at 3:07 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Ok, so my 2 cents: it's ok but it gives a very Web 1.0 solution. We
 had to invent AJAX so developers could control the user experience in the
 face of significant network delay. As I said earlier, most apps will turn
 this problem over to the design team rather than cause users to leave 
 while
 the browser spins waiting for the page to render.


  On Thu, Nov 21, 2013 at 3:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 Yes, that's the primary motivation. Getting FUC'd is going to be a
 non-starter for serious app developers. We were just thinking of ways to
 satisfy the use-case without undue burden.








Re: [HTML Imports]: Sync, async, -ish?

2013-11-22 Thread Daniel Buchner
Of course I realize this Jonas, but I assure you, if you burden the most
common use-cases with poor ergonomics, developers will find even more
ghastly ways to degrade perf. Can someone post an overview of the proposed
solutions, and how they apply to the use-cases stated a few posts back?


On Fri, Nov 22, 2013 at 9:05 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Nov 22, 2013 8:24 AM, Daniel Buchner dan...@mozilla.com wrote:
 
  Personally I don't have any issues with this solution, it provides for
 the use-cases we face. Also, it isn't without precedent - you can opt for a
 sync XMLHttpRequest (not much different).

 Except that sync XHR is considered one of the great mis-designs of the
 web and is causing huge UI issues for users and great pain for developers.
 If we had the opportunity we would remove it in a heartbeat.

 / Jonas



Re: [HTML Imports]: Sync, async, -ish?

2013-11-21 Thread Daniel Buchner
Steve and I talked at the Chrome Dev Summit today and generated an idea
that may align the stars for our async/sync needs:

<link rel="import" elements="x-foo, x-bar" />

The idea is that imports are always treated as async, unless the developer
opts in to blocking based on the presence of specific tags. If the parser
finds custom elements in the page that match the tag names listed in the
elements attribute, it would block rendering until the associated link import
has finished loading and registered the custom elements it contains.

Thoughts?

- Daniel


On Wed, Nov 20, 2013 at 11:19 AM, Daniel Buchner dan...@mozilla.com wrote:


 On Nov 20, 2013 11:07 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:
 
 
 
 
  On Wed, Nov 20, 2013 at 10:41 AM, Daniel Buchner dan...@mozilla.com
 wrote:
 
  Dimitri: right on.
 
  The use of script-after-import is the forcing function in the blocking
 scenario, not imports.
 
  Yes.
 
  Let's not complicate the new APIs and burden the overwhelming use-case
 to service folks who intend to use the technology in alternate ways.
 
  I  disagree, but happily the current API seems to handle the alternative
  just fine. The case Steve raised is covered, and IMO correctly, now that you
  have pointed out that link supports a load event. His original example must
 block and if he wants it not to block it's on him to hook the load event.
 
  For my bit, as long as the size of the components I include are not
 overly large, I want them to load before the first render and avoid getting
 FUCd or having to write a plethora of special CSS for the not-yet-upgraded
 custom element case.
 
  According to my understanding, you are likely to be disappointed: the
 components are loaded asynchronously and on a slow network with a fast
 processor we will render page HTML before the component arrives.  We should
  expect this to be the common case for the foreseeable future.
 

 There is, of course, the case of direct document.register() invocation
 from a script tag, which will/should block to ensure all elements in
  original source are upgraded. My only point is that we need to be
 realistic - both cases are valid and there are good reasons for each.

  Might we be able to let imports load async, even when a script precedes
 them, if we added a *per component type* upgrade event? (note: I'm not
 talking about a perf-destroying per component instance event)

  jjb
 
  Make the intended/majority case easy, and put the onus on the less
 common cases to think about more complex asset arrangement.
 
  - Daniel
 
  On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
  John's commentary just triggered a thought in my head. We should stop
 saying that HTML Imports block rendering. Because in reality, they don't.
 It's the scripts that block rendering.
 
  Steve's argument is not about HTML Imports needing to be async. It's
 about supporting legacy content with HTML Imports. And I have a bit less
 sympathy for that argument.
 
  You can totally build fully asynchronous HTML Imports-based documents,
 if you follow these two simple rules:
  1) Don't put scripts after imports in main document
  2) Use custom elements
 
  As an example:
 
  index.html:
  <link rel="import" href="my-ad.html">
  ...
  <my-ad></my-ad>
  ...

  my-ad.html:
  <script>
  document.register("my-ad", ... );
  ...
 
  There won't be any rendering blocked here. The page will render, then
  when my-ad.html loads, it will upgrade the <my-ad> element to display the
 punch-the-monkey thing.
 
  :DG
 
 



Re: [HTML Imports]: Sync, async, -ish?

2013-11-21 Thread Daniel Buchner
Yes, that's the primary motivation. Getting FUC'd is going to be a
non-starter for serious app developers. We were just thinking of ways to
satisfy the use-case without undue burden.


Re: [HTML Imports]: Sync, async, -ish?

2013-11-21 Thread Daniel Buchner
 I don't see this solution scaling at all.

 Imports are a tree. If you have any import that includes any other import,
 now the information about what tags to wait for has to be duplicated
 along every node in that tree.

 If a library author chooses to make any sort of all-in-one import to
 reduce network requests, they will have an absurdly huge list.

 For example:

 Brick

 <link rel="import" href="brick.html" elements="x-tag-appbar x-tag-calendar
  x-tag-core x-tag-deck x-tag-flipbox x-tag-layout x-tag-slidebox
  x-tag-slider x-tag-tabbar x-tag-toggle x-tag-tooltip">

 or

 Polymer

 <link rel="import" href="components/polymer-elements.html"
  elements="polymer-ajax polymer-anchor-point polymer-animation
  polymer-collapse polymer-cookie polymer-dev polymer-elements
  polymer-expressions polymer-file polymer-flex-layout polymer-google-jsapi
  polymer-grid-layout polymer-jsonp polymer-key-helper polymer-layout
  polymer-list polymer-localstorage polymer-media-query polymer-meta
  polymer-mock-data polymer-overlay polymer-page polymer-scrub
  polymer-sectioned-list polymer-selection polymer-selector
  polymer-shared-lib polymer-signals polymer-stock polymer-ui-accordion
  polymer-ui-animated-pages polymer-ui-arrow polymer-ui-breadcrumbs
  polymer-ui-card polymer-ui-clock polymer-ui-collapsible polymer-ui-elements
  polymer-ui-field polymer-ui-icon polymer-ui-icon-button
  polymer-ui-line-chart polymer-ui-menu polymer-ui-menu-button
  polymer-ui-menu-item polymer-ui-nav-arrow polymer-ui-overlay
  polymer-ui-pages polymer-ui-ratings polymer-ui-scaffold polymer-ui-sidebar
  polymer-ui-sidebar-header polymer-ui-sidebar-menu polymer-ui-splitter
  polymer-ui-stock polymer-ui-submenu-item polymer-ui-tabs
  polymer-ui-theme-aware polymer-ui-toggle-button polymer-ui-toolbar
  polymer-ui-weather polymer-view-source-link">


That's a tad hyperbolic, Other Daniel (I kid, I kid); you could just as soon
use an "all" value to block until all custom elements were ready. The
blocking-tag-name-list may be a poor idea, I have no illusions otherwise -
I don't even like it! -- To be honest, I believe all these one-off
deviations from developer expectations are the wrong answer. I'd much
rather stick with a sync-loading import that offered an 'async' attribute
to force async loading, regardless of whether a script tag came after it or
not.
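
A minimal sketch of that shape, assuming the proposed 'async' attribute (imports
load sync by default in this model; nothing here is shipped behavior):

    <link rel="import" href="first-render-components.html">    <!-- vital to first render: blocks -->
    <link rel="import" href="later-components.html" async>     <!-- everything else: opts out of blocking -->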

How about we review the use-cases again to get our bearings:

   1. Loading a bundle of custom elements for the page
   2. Loading just the custom elements required for the first view, and loading
      the rest async
   3. Loading template elements for use in the page's lifecycle
   4. Loading a random HTML file for random uses

#1 is probably going to be the most common use-case. Regardless of what you
elect to do with load juggling, developers are going to do whatever they
need to to ensure they aren't FUC'd. Hell, if you shake that beehive
enough, they'll just start using a script to sync load
document.register() declarations. This is not optimal; the best
practice/solution should be to sync-load only the assets/declarations that
are vital to the first render.

#2 Yay, developers split their imports between two link tags, one
containing the tags required for first render, and another with everything
else they don't need until later. If we offered devs an easy way to choose
between sync/async imports, it would afford them good ergonomics and good
perf (for most use-cases in this sphere). How feasible is this? How often
will devs take our advice and do it? Not sure, I guess it would depend on
how hard we push import best practices.

#3 This will be a super common use-case - devs will do this even when they
aren't using Custom Elements. The issue here is much like the one present
in case #1 - how likely are devs to split their template imports into two
tags, one for first render and one for the rest?

#4 I really have no idea how often devs using imports will use it to load
random HTML files, but I'd venture this use-case generally will not depend
on synchronous import loading.

Those (imo) are the primary use-cases we're dealing with. Can we focus on
making things transparent and not burdening the most common case among them?

As for JJB's assertion that most apps will "turn this problem over to the
design team rather than cause users to leave while the browser spins
waiting for the page to render" - I couldn't disagree more; this is a
fantasy (no offense intended). Just survey many of the most popular apps on
any platform - they spin you until the assets and code are ready to show
the first screen.

- Daniel


Re: [HTML Imports]: Sync, async, -ish?

2013-11-20 Thread Daniel Buchner
Dimitri: right on.

The use of script-after-import is the forcing function in the blocking
scenario, not imports. Let's not complicate the new APIs and burden the
overwhelming use-case to service folks who intend to use the technology in
alternate ways.

For my bit, as long as the size of the components I include are not overly
large, I want them to load before the first render and avoid getting FUCd
or having to write a plethora of special CSS for the not-yet-upgraded
custom element case.

Make the intended/majority case easy, and put the onus on the less common
cases to think about more complex asset arrangement.

- Daniel
 On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com wrote:

 John's commentary just triggered a thought in my head. We should stop
 saying that HTML Imports block rendering. Because in reality, they don't.
 It's the scripts that block rendering.

 Steve's argument is not about HTML Imports needing to be async. It's about
 supporting legacy content with HTML Imports. And I have a bit less sympathy
 for that argument.

 You can totally build fully asynchronous HTML Imports-based documents, if
 you follow these two simple rules:
 1) Don't put scripts after imports in main document
 2) Use custom elements

 As an example:

 index.html:
 <link rel="import" href="my-ad.html">
 ...
 <my-ad></my-ad>
 ...

 my-ad.html:
 <script>
 document.register("my-ad", ... );
 ...

 There won't be any rendering blocked here. The page will render, then when
 my-ad.html loads, it will upgrade the <my-ad> element to display the
 punch-the-monkey thing.

 :DG



Re: [HTML Imports]: Sync, async, -ish?

2013-11-20 Thread Daniel Buchner
On Nov 20, 2013 11:07 AM, John J Barton johnjbar...@johnjbarton.com
wrote:




 On Wed, Nov 20, 2013 at 10:41 AM, Daniel Buchner dan...@mozilla.com
wrote:

 Dimitri: right on.

 The use of script-after-import is the forcing function in the blocking
scenario, not imports.

 Yes.

 Let's not complicate the new APIs and burden the overwhelming use-case
to service folks who intend to use the technology in alternate ways.

 I  disagree, but happily the current API seems to handle the alternative
just fine. The case Steve raised is covered, and IMO correctly, now that you
have pointed out that link supports a load event. His original example must
block and if he wants it not to block it's on him to hook the load event.

 For my bit, as long as the size of the components I include are not
overly large, I want them to load before the first render and avoid getting
FUCd or having to write a plethora of special CSS for the not-yet-upgraded
custom element case.

 According to my understanding, you are likely to be disappointed: the
components are loaded asynchronously and on a slow network with a fast
processor we will render page HTML before the component arrives.  We should
expect this to be the common case for the foreseeable future.


There is, of course, the case of direct document.register() invocation from
a script tag, which will/should block to ensure all elements in original
source are upgraded. My only point is that we need to be realistic - both
cases are valid and there are good reasons for each.

Might we be able to let imports load async, even when a script precedes
them, if we added a *per component type* upgrade event? (note: I'm not
talking about a perf-destroying per component instance event)

 jjb

 Make the intended/majority case easy, and put the onus on the less
common cases to think about more complex asset arrangement.

 - Daniel

 On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com wrote:

 John's commentary just triggered a thought in my head. We should stop
saying that HTML Imports block rendering. Because in reality, they don't.
It's the scripts that block rendering.

 Steve's argument is not about HTML Imports needing to be async. It's
about supporting legacy content with HTML Imports. And I have a bit less
sympathy for that argument.

 You can totally build fully asynchronous HTML Imports-based documents,
if you follow these two simple rules:
 1) Don't put scripts after imports in main document
 2) Use custom elements

 As an example:

 index.html:
 <link rel="import" href="my-ad.html">
 ...
 <my-ad></my-ad>
 ...

 my-ad.html:
 <script>
 document.register("my-ad", ... );
 ...

 There won't be any rendering blocked here. The page will render, then
when my-ad.html loads, it will upgrade the <my-ad> element to display the
punch-the-monkey thing.

 :DG




Re: [webcomponents] Per-type ready event for Custom Elements

2013-09-28 Thread Daniel Buchner
WebComponentsReady is an event in the Polymer stack that fires when all
known registered custom elements are ready for interaction - X-Tag has a
similar event (now just a hook into the polyfill's) called
DOMComponentsLoaded. Being notified that all elements in source are ready
for interaction is essential at any point in the page lifecycle when a new
element is defined.
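
Roughly how pages consume those hooks today (event names as exposed by the
respective polyfills; the listener bodies are illustrative):

    document.addEventListener('WebComponentsReady', function () {
      // Polymer: all currently registered custom elements have been upgraded
    });
    document.addEventListener('DOMComponentsLoaded', function () {
      // X-Tag: the polyfill's equivalent notification
    });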

This is not an X-Tag issue, it's a Web Components issue - one we need to
address.


Re: [webcomponents] Per-type ready event for Custom Elements

2013-09-27 Thread Daniel Buchner
I don't see any compelling reason not to provide both. Let's not mistake an
appeal for a simple, backwards-compatible allowance as a slight to
Promises ;)


On Fri, Sep 27, 2013 at 7:38 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, Sep 27, 2013 at 1:52 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  While Promises would address this concern, I'm reluctant to go with that
  solution because it imposes yet-another-polyfill-dependency on the web
  component polyfills/libs.

 That seems fine. Most new APIs require that polyfill. We're here to
 design the future of the platform and to do that we should take into
 account the lessons we've learned along the way.


 --
 http://annevankesteren.nl/



Re: [webcomponents] Per-type ready event for Custom Elements

2013-09-27 Thread Daniel Buchner
Surely then we should remove all events defined by the Web Component specs
in favor of Promises, right? We're talking about a single event here - this
seems like a bit of an overreaction. Though Promises are cool, and I am not
against providing a complementary solution, you haven't presented anything
of significant, material relevance that should dissuade us from providing
backwards compatibility in the style of code developers are familiar with
and able to use today.

I believe providing for today and tomorrow with this API addition is a
sensible move for the following reasons:

- Doesn't force reliance on another emerging API
- Provides a hook to developers through an existing code path they are
familiar with
- Doesn't force developers to treat this one event as the odd-one-out by
requiring different handling
- Has a relatively low touch implementation path (as far as I have
estimated)

Your mention of tiny, incremental maintenance cost is inconsequential - the
entire Web Components family of APIs will require far more maintenance
(this is a red herring). The arguments "a small thing will need to be
maintained" and "let's use newer, unreleased stuff because it is better"
are not altogether convincing points (which is of course my opinion; I
welcome yours).


On Fri, Sep 27, 2013 at 8:02 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, Sep 27, 2013 at 10:56 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I don't see any compelling reason not to provide both.

 Twice the maintenance cost, more to learn, etc. Promises will be in
 implementations long before web components are stable anyway. I don't
 really think there's much of a point to be made here.


 --
 http://annevankesteren.nl/



[webcomponents] Per-type ready event for Custom Elements

2013-09-26 Thread Daniel Buchner
We're seeing issues with custom element ready state awareness under various
common async load patterns like AMD, CommonJS, etc. Essentially, when a
developer brings in their definitions via one of these systems, the
DOMComponentsLoaded/WebComponentsReady event has already fired, leaving
them with race conditions. There was previously an event that fired for
_every_ element node when each one was ready, and it was pulled due to
various feasibility issues (which is understandable). The proposal here is
different: fire one event (customelementready?) when all known in-source
elements of a type/name are parsed and ready for interaction, *regardless
of when that occurs*.

The use case here is simple:

Let's say a dev defines a new custom element with document.register() 10
minutes after the page is loaded. Unfazed by the fashionably late arrival
of a new custom element definition, the parser crawls through the in-source
elements, augments any matching nodes, and fires a single event when
finished with the lot of them. There would be a property on the event
(tagName, customElementName?) to inform the developer as to what type of
custom element was ready for interaction.
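
A sketch of how the proposed event might be consumed (the event name and
properties are placeholders from this proposal, nothing spec'd):

    document.addEventListener('customelementready', function (e) {
      if (e.tagName === 'x-calendar') {   // or e.customElementName - naming TBD
        // every in-source <x-calendar> has now been upgraded; safe to interact
      }
    });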

Make sense? Thoughts? (do we already have this covered some other way?)

- Daniel


Re: [webcomponents] Per-type ready event for Custom Elements

2013-09-26 Thread Daniel Buchner
While Promises would address this concern, I'm reluctant to go with that
solution because it imposes yet-another-polyfill-dependency on the web
component polyfills/libs. If Promises had preceded Web Components by a few
years, and were presently at 80-90% penetration, it would be a different
story - but as it stands, I don't think it is the most sensible route.


On Thu, Sep 26, 2013 at 10:35 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

 I am not sure I entirely understand the problem, but generally ready
 events should be promises instead, since they allow you to subscribe even
 after the promise has been fulfilled and then still get called back. That
 sounds like it would be helpful for those users.

 I believe the new font load events spec is making good use of them for
 exactly this purpose.
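
  (A sketch of the promise-based shape being suggested - the method name below
  is purely illustrative, not an existing API:)

    document.whenElementReady('x-calendar').then(function () {
      // runs even if registration/upgrade finished before we subscribed
    });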

  On 27 Sep 2013, at 01:29, Daniel Buchner dan...@mozilla.com wrote:
 
 
  We're seeing issues with custom element ready state awareness under
 various common async load patterns like AMD, CommonJS, etc. Essentially,
 when a developer brings in their definitions via one of these systems, the
 DOMComponentsLoaded/WebComponentsReady event has already fired, leaving
 them with race conditions. There was previously an event that fired for
 _every_ element node when each one was ready, and it was pulled due to
 various feasibility issues (which is understandable). The proposal here is
 different: fire one event (customelementready?) when all known in-source
 elements of a type/name are parsed and ready for interaction, *regardless
 of when that occurs*.
 
  The use case here is simple:
 
  Let's say a dev defines a new custom element with document.register() 10
 minutes after the page is loaded. Unfazed by the fashionably late arrival
 of a new custom element definition, the parser crawls through the in-source
 elements, augments any matching nodes, and fires a single event when
 finished with the lot of them. There would be a property on the event
 (tagName, customElementName?) to inform the developer as to what type of
 custom element was ready for interaction.
 
  Make sense? Thoughts? (do we already have this covered some other way?)
 
  - Daniel




Re: <element> Needs A Beauty Nap

2013-08-13 Thread Daniel Buchner
I concur. On hold doesn't mean forever, and the imperative API affords us
nearly identical feature capability. Nailing the imperative and getting the
APIs to market is far more important to developers at this point.
On Aug 12, 2013 4:46 PM, Alex Russell slightly...@google.com wrote:

 As discussed face-to-face, I agree with this proposal. The declarative
 form isn't essential to the project of de-sugaring the platform and can be
 added later when we get agreement on what the right path forward is.
 Further, <polymer-element> is evidence that it's not even necessary so long
 as we continue to have the plumbing for loading content that is HTML
 Imports.

 +1


  On Mon, Aug 12, 2013 at 4:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

  tl;dr: I am proposing to temporarily remove declarative custom element
  syntax (aka <element>) from the spec. It's broken/dysfunctional as
  spec'd and I can't see how to fix it in the short term.

  We tried. We gave it a good old college try. In the end, we couldn't
  come up with an <element> syntax that's both functional and feasible.

  A functional <element> would:

 1) Provide a way to declare new or extend existing HTML/SVG elements
 using markup
 2) Allow registering prototype/lifecycle callbacks, both inline and out
 3) Be powerful enough for developers to prefer it over document.register

  A feasible <element> would:

 1) Be intuitive to use
 2) Have simple syntax and API surface
 3) Avoid complex over-the-wire dependency resolution machinery

 You've all watched the Great Quest unfold over in public-webapps over
 the last few months.

 The two key problems that still remain unsolved in this quest are:

 A. How do we integrate the process of creating a custom element
 declaration [1] with the process of creating a prototype registering
 lifecycle callbacks?

 B. With HTML Imports [2], how do we ensure that the declaration of a
 custom element is loaded after the declaration of the custom element
 it extends? At the very least, how do we enable developers to reason
 about dependency failures?

 We thought we solved problem A first with the incredible this [3],
 and then with the last completion value [4], but early experiments
 are showing that this last completion value technique produces brittle
 constructs, since it forces specific statement ordering. Further, the
 technique ties custom element declaration too strongly to script. Even
 at the earliest stages, the developers soundly demanded the ability to
 separate ALL the script into a single, separate file.

 The next solution was to invent another quantum of time, where

 1) declaration and
 2) prototype-building come together at
 3) some point of registration.

 Unfortunately, this further exacerbates problem B: since (3) occurs
 neither at (1) or (2), but rather at some point in the future, it
 becomes increasingly more difficult to reason about why a dependency
 failed.

 Goram! Don't even get me started on problem B. By far, the easiest
 solution here would have been to make HTML Imports block on loading,
 like scripts. Unlucky for us, the non-blocking behavior is one of the
 main benefits that HTML Imports bring to the table. From here, things
 de-escalate quickly. Spirits get broken and despair rules the land.

 As it stands, I have little choice but to make the following proposal:

 Let's let declarative custom element syntax rest for a while. Let's
 yank it out of the spec. Perhaps later, when it eats more cereal and
 gathers its strength, it shall rise again. But not today.

 :DG

 [1]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-create-custom-element-declaration
 [2]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/imports/index.html
 [3]:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0152.html
 [4]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-last-completion-value





Re: <element> Needs A Beauty Nap

2013-08-13 Thread Daniel Buchner
Quick question: Will we still be able to land HTML Imports sans any Custom
Element reliance/support? I'd like to retain the ability to import HTML
files unbound from the parsing of <element> declarations. A significant
use-case is easily importing a bunch of <template> elements for use in the
main document - let's not throw the baby out with the bath water (maybe
this is already the thinking?)
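
A minimal sketch of that use-case, assuming only the HTML Imports plumbing
(link.import exposes the imported document per the Imports spec; the file and
template names are illustrative):

    <link rel="import" href="templates.html" id="tpl">
    <script>
      var doc = document.querySelector('#tpl').import;    // the imported document
      var tpl = doc.querySelector('#list-item');           // a <template> defined in templates.html
      document.body.appendChild(document.importNode(tpl.content, true));
    </script>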

- Daniel


On Tue, Aug 13, 2013 at 8:06 AM, Brian Kardell bkard...@gmail.com wrote:




  On Tue, Aug 13, 2013 at 9:15 AM, Daniel Buchner dan...@mozilla.com wrote:

 I concur. On hold doesn't mean forever, and the imperative API affords us
 nearly identical feature capability. Nailing the imperative and getting the
 APIs to market is far more important to developers at this point.
  On Aug 12, 2013 4:46 PM, Alex Russell slightly...@google.com wrote:

 As discussed face-to-face, I agree with this proposal. The declarative
 form isn't essential to the project of de-sugaring the platform and can be
 added later when we get agreement on what the right path forward is.
 Further, polymer-element is evidence that it's not even necessary so long
 as we continue to have the plumbing for loading content that is HTML
 Imports.

 +1


  On Mon, Aug 12, 2013 at 4:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

 tl;dr: I am proposing to temporarily remove declarative custom element
 syntax (aka element) from the spec. It's broken/dysfunctional as
 spec'd and I can't see how to fix it in the short term.

 We tried. We gave it a good old college try. In the end, we couldn't
 come up with an element syntax that's both functional and feasible.

 A functional element would:

 1) Provide a way to declare new or extend existing HTML/SVG elements
 using markup
 2) Allow registering prototype/lifecycle callbacks, both inline and out
 3) Be powerful enough for developers to prefer it over document.register

 A feasible element would:

 1) Be intuitive to use
 2) Have simple syntax and API surface
 3) Avoid complex over-the-wire dependency resolution machinery

 You've all watched the Great Quest unfold over in public-webapps over
 the last few months.

 The two key problems that still remain unsolved in this quest are:

 A. How do we integrate the process of creating a custom element
 declaration [1] with the process of creating a prototype registering
 lifecycle callbacks?

 B. With HTML Imports [2], how do we ensure that the declaration of a
 custom element is loaded after the declaration of the custom element
 it extends? At the very least, how do we enable developers to reason
 about dependency failures?

 We thought we solved problem A first with the incredible this [3],
 and then with the last completion value [4], but early experiments
 are showing that this last completion value technique produces brittle
 constructs, since it forces specific statement ordering. Further, the
 technique ties custom element declaration too strongly to script. Even
 at the earliest stages, the developers soundly demanded the ability to
 separate ALL the script into a single, separate file.

 The next solution was to invent another quantum of time, where

 1) declaration and
 2) prototype-building come together at
 3) some point of registration.

 Unfortunately, this further exacerbates problem B: since (3) occurs
 neither at (1) or (2), but rather at some point in the future, it
 becomes increasingly more difficult to reason about why a dependency
 failed.

 Goram! Don't even get me started on problem B. By far, the easiest
 solution here would have been to make HTML Imports block on loading,
 like scripts. Unlucky for us, the non-blocking behavior is one of the
 main benefits that HTML Imports bring to the table. From here, things
 de-escalate quickly. Spirits get broken and despair rules the land.

 As it stands, I have little choice but to make the following proposal:

 Let's let declarative custom element syntax rest for a while. Let's
 yank it out of the spec. Perhaps later, when it eats more cereal and
 gathers its strength, it shall rise again. But not today.

 :DG

 [1]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-create-custom-element-declaration
 [2]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/imports/index.html
 [3]:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0152.html
 [4]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-last-completion-value




 +1 - this is my preferred route anyway.  Concepts like register and shadow
 dom are the core elements... Give projects like x-tags and polymer and even
 projects like Ember and Angular some room to help lead the charge on asking
 those questions and helping to offer potentially competing answers -- there
 need be no rush to standardize at the high level at this point IMO.

 --
 Brian Kardell :: @briankardell :: hitchjs.com



Re: <element> Needs A Beauty Nap

2013-08-13 Thread Daniel Buchner
Yep. HTML Imports are standing on their own.

Dimitri, did I ever tell you you're my hero? (#ProTip: to get the full
effect, sing it with that Bette Midler tone)


On Tue, Aug 13, 2013 at 10:02 AM, Dimitri Glazkov dglaz...@google.comwrote:

 On Tue, Aug 13, 2013 at 9:59 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  Quick question: Will we still be able to land HTML Imports sans any
 Custom
  Element reliance/support? I'd like to retain the ability to import HTML
  files unbound from the parsing of element declarations. A significant
  use-case is easily importing a bunch of template elements for use in
 the
  main document - let's not throw the baby out with the bath water (maybe
 this
  is already the thinking?)

 Yep. HTML Imports are standing on their own.

 :DG



Re: Web Widgets, Where Art Thou?

2013-07-29 Thread Daniel Buchner
FWIW, I ran a dev poll last week: ~95% of respondents preferred a simple,
separate HTML document specifically for their widget and use all the
existing DOM APIs and modules from across the web for things like storage,
localization, etc. In fact, of the only 2 respondents opposed to the idea,
one fundamentally misunderstood the widget concept and the other
accidentally selected the wrong option.  Here are some of their qualitative
responses:

 Why did you choose the option you did? Use a separate widget document
 declared via your app manifest?

   - "I like the separation of files for the high level conceptual mental
   model of my app. So, I tend to defer to the latter suggestion/other
   option in the interest of performance."
   - Yes: "separation of concern, smaller foot print when only the widget is
   open; we can use a shared background worker to make the full-site loading
   faster by reusing the same core js thread anyway."
   - Yes: "Separation of concerns. I would much rather have a common set of
   library code, and two html files that use that library code differently.
   Mixing two very different behaviors like a full app and a widget into one
   html file sounds gross."
   - Yes: "Inheritance of styles and integration into presentation layer,
   maintenance. Likely not going to work so easily via a shadow DOM, though
   styling has been available for some composite elements. The second method
   also gives greater portability to devices that don't yet support
   shadow-DOM implementations."
   - "For sufficiently small 'widgets' that are nothing but a small API-able
   event driven interface, option 1 is viable when legacy support isn't."
   *This user misunderstood a few things, and seemed to believe that widgets
   are somehow bound to Web Components*
   - No: "I like to have standalone chunks of functionality." *This user
   selected the wrong option, but I still counted it toward `No`*
   - No: "It's easier. And better for the user."
   - Yes: "Testability and maintenance"
   - Yes: "In my experience there is no universal cut and dry answer to this
   question (yet)."
   - Yes: "It's probably much more work to adapt the style of a full app to
   turn it into a widget. And it will be possible to modify the full-app
   without altering the widget."
   - Yes

Given the minuscule level of adoption/use of the current widget scheme, and
the fact that the proposed addition of a lighter declaration via the app
manifest wouldn't affect use of the old spec, I'm having trouble
understanding why this proposal is facing such stop energy.


On Tue, Jul 23, 2013 at 8:12 AM, Marcos Caceres w...@marcosc.com wrote:




 On Tuesday, July 23, 2013 at 3:06 PM, Daniel Buchner wrote:

 
  On Tue, Jul 23, 2013 at 4:44 AM, Marcos Caceres w...@marcosc.com(mailto:
 w...@marcosc.com) wrote:
  
   On Monday, July 22, 2013 at 11:54 PM, Daniel Buchner wrote:
  
To clarify, I am proposing that we make a simple app manifest entry
 the only requirement for widget declaration: widget: { launch_path: ... }.
 The issue with viewMode (https://github.com/w3c/manifest/issues/22), is
 that it hard-binds a widget's launch URL to the app's launch URL, forever
 tying the widget to the full app. I'd like to give devs the option to
 provide a completely different launch path for widgets, if they choose. As
 for the finer points of widget declaration, why force developers to
 generate two separate files that regurgitate much of the same information?
 -- this looks a lot more complex than adding a launch path to an existing
 file and a few media queries for styling:
 http://www.w3.org/TR/2009/WD-widgets-reqs-20090430/
  
  
   The thing here is that you are adding an explicit role or type for
 an application (by declaring widget:). We view the representation of the
 application as being more responsive than that through a view mode. What
 you propose explicitly encourages two different applications to be created,
 instead of a single one that adapts to the context in which it's being
 displayed. Again, take a look at how Opera's speed dial was making use of
 view modes.
  
   However, I can see how splitting the two application classes into
 different resources can be appealing at first… but it goes against the
 principles of responsive design: it would be like telling people to build a
 separate desktop site and mobile site, instead of responsibly adapting the
 content to fit the layout. Using view mode media queries works naturally
 with the way modern web applications are designed today - they fit
 naturally into just being another break point.
 
  In theory, responsive design seems like the clear answer for widgets - I
 love making my apps responsive so they work for phone/tablet/desktop etc.
 However, there are issues with the widget case. In phone/tablet/desktop
 views, apps still use most or all of an app's data/features, the major
 differences tend to be in DOM arrangement and style of the content they
 present. It's easy to think of widgets as a logical next level to that -
 but that can devolve into the impractical quickly.
 I guess here we would need

Re: Web Widgets, Where Art Thou?

2013-07-23 Thread Daniel Buchner
On Tue, Jul 23, 2013 at 4:44 AM, Marcos Caceres w...@marcosc.com wrote:


 On Monday, July 22, 2013 at 11:54 PM, Daniel Buchner wrote:

  To clarify, I am proposing that we make a simple app manifest entry the
 only requirement for widget declaration: widget: { launch_path: ... }. The
 issue with viewMode (https://github.com/w3c/manifest/issues/22), is that
 it hard-binds a widget's launch URL to the app's launch URL, forever tying
 the widget to the full app. I'd like to give devs the option to provide a
 completely different launch path for widgets, if they choose. As for the
 finer points of widget declaration, why force developers to generate two
 separate files that regurgitate much of the same information? -- this
 looks a lot more complex than adding a launch path to an existing file and
 a few media queries for styling:
 http://www.w3.org/TR/2009/WD-widgets-reqs-20090430/


 The thing here is that you are adding an explicit role or type for an
 application (by declaring widget:). We view the representation of the
 application as being more responsive than that through a view mode. What
 you propose explicitly encourages two different applications to be created,
 instead of a single one that adapts to the context in which it's being
 displayed. Again, take a look at how Opera's speed dial was making use of
 view modes.

 However, I can see how splitting the two application classes into
 different resources can be appealing at first… but it goes against the
 principles of responsive design: it would be like telling people to build a
 separate desktop site and mobile site, instead of responsibly adapting the
 content to fit the layout. Using view mode media queries works naturally
 with the way modern web applications are designed today - they fit
 naturally into just being another break point.


In theory, responsive design seems like the clear answer for widgets - I
love making my apps responsive so they work for phone/tablet/desktop etc.
However, there are issues with the widget case. In phone/tablet/desktop
views, apps still use most or all of an app's data/features, the major
differences tend to be in DOM arrangement and style of the content they
present. It's easy to think of widgets as a logical next level to that -
but that can devolve into the impractical quickly. Widgets are (generally)
by nature, small presentations of a sliver of app content displayed in a
minimalist fashion that update a user to the state of something
at-a-glance. Also, many widgets can be in active view at once - consider a
home screen with nine 1x1 widgets. If each widget was itself the full app,
with the addition of responsive styles, it would be a recipe for crippling
performance. Imagine nine Gmail tabs on a screen, nine Facebooks, etc.

The obvious retort will be: well devs are smart, they'll proactively
structure their app, have tiered checks for assets, and place conditions
throughout their code to ensure they are catching the *very different*
widget use-case everywhere in their app -- I'd assert that this is not
practical, and in many cases, devs would welcome a clean document they knew
was intended specifically for their widget.

For non-adaptive designs, maybe. But we start running into the No I don't
want to download your !@#$%^* app UI issue pretty fast, so this should be
done with a bit of thinking if the goal isn't to annoy users more than help
them (yes clippy, I am looking at you).

This comment is inaccurate for a few reasons:

   - We know consumers no longer distinguish (in general) between apps and
   widgets. Heck, on Android you even find them in the same category on the
   Play store - no one cares
   - If users see apps as widgets, widgets as apps, and some as both, why
   fight it? Other ecosystems proved that these two variants of content (at
   least) can be declared in one vehicle - this is not
   a groundbreaking concept.
   - This isn't the same as annoying sites that nag you about downloading
   their app - that's a poor comparison
   - Who says an app package needs to contain a base launch_path? Could we
   not modify the app spec to state that if the main app launch_path is
   omitted, but another type of launch_path is present, the UA installs
   only the specified functionality? (a widget in this case - see the
   sketch after this list)
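
As a sketch of that last point, a widget-only manifest could look something
like this (key names hypothetical, following the widget: { launch_path: ... }
shape discussed in this thread; the manifest's JSON shown as a JS object):

var manifest = {
  name: 'Tiny Weather',
  // no top-level launch_path: nothing to install as a full app
  widget: {
    launch_path: '/widget.html'  // the UA would install/launch only this widget
  }
};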



while taking advantage of natural synergies that result from reusing
a common base.
  
  
   I think I agree, but can we be explicit about the things we think
 we're getting?
 
  Many mobile apps include widgets, and users are beginning to think:
 Hey, I wonder if this app has a widget?; some users even seek out apps
 that provide a companion widget in addition to the app itself. Using an app
 manifest for packaging is easier on developers, and aligns with user
 perception of the output.
 Sure, that might be fine. In the case of W3C Widgets, one simply said:

 <widget viewmodes="floating"/>

 Which indicated to the UA that floating mode was supported (i.e., you
 can

Re: Web Widgets, Where Art Thou?

2013-07-22 Thread Daniel Buchner
In my opinion, the current spec's complexity in relation to its feature
goal, is high. This doesn't mean it was a bad spec or deficient, it could
be due to a number of factors: different assumptions about what widgets
should do, packaging choices that existed before web apps gained steam, or
a different focus on where and how widgets would be displayed.

I'd like to step back and formulate a strategy based on the progression and
growing prevalence of widgets on native mobile platforms, as well as how we
can align widgets with the packaging and distribution of web apps. The
paradigm of declaring/packaging widgets inside app packages is a beneficial
pairing that reduces the amount of code and management developers are
forced to think about, while taking advantage of natural synergies that
result from reusing a common base. I see widgets as a web page (perhaps the
same page as a full app, if the dev chooses) with near-zero additional,
cognitive overhead. Declaration via the app manifest is a huge piece - I'd
argue that this alone would realize a huge increase in developer
utilization.

Let's look at current consumer perception of widgets, and how the space is
evolving:

   - Widgets are commonplace on mobile platforms
   - In the view of consumers, search, discovery, and purchase/installation
   of widgets is now distinguishable from apps
   - Widgets remain a feature used primarily by savvy users, but new
   presentations will blur the lines between what a widget is - let's get
   ahead of this!

The last point here is important for the next generation of 'widgets'.
Widgets in the statically-present-on-a-home-screen incarnation are
currently the most recognizable form, but this is changing. Look at Google
Now - those cards, they're just widgets IMO.

Conclusions:

   - If users no longer distinguish between apps and widgets in practice, a
   common packaging vehicle makes sense.
   - We are seeing proactive, contextual presentation of data in
   widget-esque forms on Android and iOS (Moto X looks like it will introduce
   more of this). Under the proposed direction of widgets, developers need
   only know what context their widget is being evoked in, and how best to
   layout a UI for the intended display. (media queries for something like
   blocks - just an example, you get the idea)
   - I believe we should retool efforts in this area to focus on sharing
   the app packaging vehicle and reducing complexity to near-zero (besides
   things like widget-specific media queries)
   - If we want to make a splash, actively encourage implementers to add
   features based on widgets or widgets + apps (this is a feature we've
   discussed for future enhancement of the Firefox desktop experience)

I'd like to hear any additional feedback, and your thoughts on next steps.


Re: Web Widgets, Where Art Thou?

2013-07-22 Thread Daniel Buchner
On Mon, Jul 22, 2013 at 11:55 PM, Charles McCathie Nevile 
cha...@yandex-team.ru wrote:

 On Mon, 22 Jul 2013 09:59:33 -0700, Daniel Buchner dan...@mozilla.com
 wrote:

  In my opinion, the current spec's complexity in relation to its feature
 goal, is high.


 I think it is pretty important to this discussion to understand what parts
 of the widget framework you think bring complexity (or alternatively, bring
 little value).

 Different people interpret complexity very differently. For example there
 are many developers today who find JSON extremely comfortable and flexible.
 But others find it extremely limited (no common internationalisation
 mechanism, the strictness of the syntax is almost invisible being expressed
 only in tiny punctuation marks, no clear comment mechanism, ...)

 Without a clear idea of what you mean by complexity (or clarity) it is
 very hard to understand what your statement means, and therefore what
 should be changed...


To clarify, I am proposing that we make a simple app manifest entry the
only requirement for widget declaration:  *widget: { launch_path: ... }*.
The issue with *viewMode *(https://github.com/w3c/manifest/issues/22), is
that it hard-binds a widget's launch URL to the app's launch URL, forever
tying the widget to the full app. I'd like to give devs the option to
provide a completely different launch path for widgets, if they choose. As
for the finer points of widget declaration, why force developers to
generate two separate files that regurgitate much of the same information?
-- this looks a lot more complex than adding a launch path to an existing
file and a few media queries for styling:
http://www.w3.org/TR/2009/WD-widgets-reqs-20090430/


 while taking advantage of natural synergies that result from reusing
 a common base.


 I think I agree, but can we be explicit about the things we think we're
 getting?


Many mobile apps include widgets, and users are beginning to think: Hey, I
wonder if this app has a widget?; some users even seek out apps that
provide a companion widget in addition to the app itself. Using an app
manifest for packaging is easier on developers, and aligns with user
perception of the output.


 For example, many browsers now use some form of JSON to write a manifest
 that (other than the syntax) is almost identical to the XML packaging used
 for widgets. And as JC noted there are actually numerous systems using the
 XML Packaging and Configuration.

  I see widgets as a web page (perhaps the same page as a full app,if the
 dev chooses) with near-zero additional, cognitive overhead.


 I'm afraid I have no idea what this means in practice.


I was referring to the choice (detailed above) a developer has to use the
app launch path, or a different URL, as their widget path, and to do so
via a common declaration vehicle. That simplifies the declaration side. On
the code side, I would stay away from adding any mechanisms (other than
display helpers, like widget-specific media queries) to the widget context
- the message: just use the same APIs you would use for any other app/page.
With today's web, are these APIs really necessary? --
http://www.w3.org/TR/widgets-apis/


 Declaration via the app manifest is a huge piece - I'd argue that
 this alone would realize a huge increase in developer utilization.


I suspect this is a very common perception.

  Let's look at current consumer perception of widgets, and how the space
 is evolving:

- Widgets are commonplace on mobile platforms
- In the view of consumers, search, discovery, and purchase/

  installation of widgets is now distinguishable from apps

 Do you mean indistinguishable?


I did, sorry about that :)

- I believe we should retool efforts in this area to focus on sharing
 the app packaging vehicle and reducing complexity to near-zero  (besides
 things like widget-specific media queries)


 This is where it is critical to know what you think is near-zero
 complexity. Having seen a couple of systems deployed based on JSON
 packaging (Google and Mozilla), and a bunch of them based on the current
 XML Widget Packaging, I personally find the latter far less complex. But I
 realise not everyone thinks like I do.


If you asked the average client-side web developer, who lives primarily in
HTML, CSS, and JS, I would bet a trillion dollar coin that the *overwhelming
* majority prefer JSON for description, packaging, and data transmission -
consider: web app manifests, NPM, Bower, and nearly every client-consumed
web-service API launched in the last 4 years.


Web Widgets, Where Art Thou?

2013-07-19 Thread Daniel Buchner
As some of you are aware, a widget spec or two (
http://www.w3.org/TR/2012/PR-widgets-apis-20120522/) have been floating
around for a while. These were never widely adopted for various reasons -
not the least of which was their complexity.

Well, hold on to your shorts folks: I would like to rekindle the idea of
web widgets, but with an eye toward simplicity that builds on open web app
concepts and mechanism.

My proposal is simple:

Widgets are just an alternate (or even the same) app 'launch_path' a
developer would declare under a 'widget' key in their existing App
Manifest. The UA would launch this URL in whatever widget UI containers it
creates, for example: squares on a New Tab, a floating panel, etc., and add
a few things to the document context - namely: an imperative means for
detecting that the document is being displayed in a widget state, and a new
media query type 'widget' for styling (especially helpful if the developer
chooses to use a single origin for their app and widget).
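
To make that concrete, a rough sketch of both halves (the 'widget' manifest
key and the 'widget' media type are the proposal here, not an existing spec;
the manifest's JSON is shown as a JS object):

// App manifest entry (hypothetical shape)
var manifest = {
  name: 'My App',
  launch_path: '/index.html',
  widget: { launch_path: '/widget.html' }  // could just as well reuse '/index.html'
};

// Inside the widget document: imperative detection of the widget state
if (window.matchMedia && window.matchMedia('widget').matches) {
  document.documentElement.classList.add('is-widget');  // hook for widget-specific styles
}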

What this allows for:

- Lets us utilize the existing declaration and installation mechanisms for
web apps (which is the same place widgets are already declared in today's
common native app packages)

- Provides a great new variant of content for all UAs who already are
implementing apps

- Delivers huge user benefit at a relatively low cost

Stupid-Simple Web Widgets: great idea, or greatest idea?...I'm gonna put
you down for great.

-

*PS - If the word 'widget' makes you feel dirty and sad-faced (which it
shouldn't, as Android proved and iOS concurred), let's just imagine we're
talking about the W3 Web Dingus spec for now and focus on the user value
proposition ;) *


Re: [webcomponents]: <element> Wars: A New Hope

2013-04-17 Thread Daniel Buchner
*This is just a repackaging of Object.defineProperties( target,
PropertyDescriptors ) thats slightly less obvious because the target
appears to be a string.*
Is another difference that the 'x-foo' doesn't have to be 'known' yet? It
seems to be a bit more than a repack of Object.defineProperties to me.
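
The difference I mean, roughly (HTMLElementElement.define is the name being
floated in this thread, not a shipping API, so it's shown commented out;
other names illustrative):

// Object.defineProperties needs a target object that already exists:
var existingProto = Object.create(HTMLElement.prototype);
Object.defineProperties(existingProto, { erhmahgerd: { value: 'BOOKS!' } });

// The proposed define() is keyed by a tag name that may not be registered
// yet, so the descriptors have to be held somewhere until registration:
//   HTMLElementElement.define('x-foo', { erhmahgerd: { value: 'BOOKS!' } });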


On Wed, Apr 17, 2013 at 3:53 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:16 PM, Dimitri Glazkov dglaz...@google.comwrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more on that in a moment), which means that both element and
 script can contribute to defining the same custom element.

 With that in mind, we take Scott's/Allen's excellent idea and twist it
 up a bit. We invent a HTMLElementElement.define method (name TBD),
 which takes two arguments: a custom element name, and an object. I
 know folks will cringe, but I am thinking of an Object.create
 properties object:


 They are called Property Descriptors.




 HTMLElementElement.define('x-foo', {
   erhmahgerd: { writable: false, value: 'BOOKS!' }
 });


 This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.


 Rick





 When the registration line comes, the browser-thing matches element
 instances and supplied property objects by custom element names, uses
 them to create prototypes, and then calls document.register with
 respective custom element name and prototype as arguments.

 We now have a working declarative syntax that doesn't hack script,
 is ES6-module-friendly, and still lets Scott build his tacos. Sounds
 like a win to me. I wonder how Object.create properties object and
 Class syntax could mesh better. I am sure ES6 Classes peeps will have
 ideas here.

 So... When is the registration line? Clearly, by the time the parser
 finishes with the document, we're too late.

 We have several choices. We could draw the line for an element when
 its corresponding /element is seen in document. This is not going to
 work for deferred scripts, but maybe that is ok.

 For elements that are imported, we have a nice delineation, since we
 explicitly process each import in order, so no problems there.

 What do you think?

 :DG





Re: [webcomponents]: <element> Wars: A New Hope

2013-04-17 Thread Daniel Buchner
So let me be *crystal clear*:

If define() internally does this -- When the registration line comes, the
browser-thing matches element instances and supplied property objects by
custom element names, uses them to create prototypes, and then calls
document.register with respective custom element name and prototype as
arguments. - it's doing a hell-of-a-lot more than simply redirecting to
Object.create - in fact, I was thinking it would need to do this:

   - Retain all tagName-keyed property descriptors passed to it on a common
   look-up object
   - Interact with the portion of the system that handles assessment of the
   registration line, and whether it has been crossed
   - and if called sometime after the registration line has been crossed,
   immediately invokes code that upgrades all in-DOM elements matching the
   tagName provided

I could be mistaken - but my interest is valid, because if true I would
need to polyfill the above detailed items, vs writing something as simple
and derpish as: HTMLElementElement.prototype.define = ...alias to
Object.create...
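
For the record, a minimal sketch of what I think the polyfill would have to
do instead (assumes document.register as then-spec'd; every other name here
is made up for illustration):

window.HTMLElementElement = window.HTMLElementElement || {};  // stub the interface object if absent

var pendingDefinitions = {};         // tagName -> property descriptors
var registrationLineCrossed = false;

HTMLElementElement.define = function (tagName, descriptors) {
  pendingDefinitions[tagName] = descriptors;
  if (registrationLineCrossed) registerOne(tagName);  // late definitions upgrade immediately
};

function registerOne(tagName) {
  var proto = Object.create(HTMLElement.prototype, pendingDefinitions[tagName]);
  document.register(tagName, { prototype: proto });   // upgrades matching in-DOM elements
}

// called by whatever machinery decides the registration line has been crossed
function crossRegistrationLine() {
  registrationLineCrossed = true;
  Object.keys(pendingDefinitions).forEach(registerOne);
}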

Dimitri, Scott can you let me know if that sounds right, for polyfill sake?

On Wed, Apr 17, 2013 at 4:11 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 *This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.
 *
 Is another difference that the 'x-foo' doesn't have to be 'known' yet? It
 seems to be a bit more than a repack of Object.defineProperties to me.


 I'm sorry if I was unclear, but my comments weren't subjective, nor was I
 looking for feedback.

 Looks like Dimitri agrees:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0306.html

 Rick





 On Wed, Apr 17, 2013 at 3:53 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:16 PM, Dimitri Glazkov dglaz...@google.comwrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more on that in a moment), which means that both element and
 script can contribute to defining the same custom element.

 With that in mind, we take Scott's/Allen's excellent idea and twist it
 up a bit. We invent a HTMLElementElement.define method (name TBD),
 which takes two arguments: a custom element name, and an object. I
 know folks will cringe, but I am thinking of an Object.create
 properties object:


 They are called Property Descriptors.




 HTMLElementElement.define('x-foo', {
   erhmahgerd: { writable: false, value: 'BOOKS!' }
 });


 This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.


 Rick





 When the registration line comes, the browser-thing matches element
 instances and supplied property objects by custom element names, uses
 them to create prototypes, and then calls document.register with
 respective custom element name and prototype as arguments.

 We now have a working declarative syntax that doesn't hack script,
 is ES6-module-friendly, and still lets Scott build his tacos. Sounds
 like a win to me. I wonder how Object.create properties object and
 Class syntax could mesh better. I am sure ES6 Classes peeps will have
 ideas here.

 So... When is the registration line? Clearly, by the time the parser
 finishes with the document, we're too late.

 We have several choices. We could draw the line for an element when
 its corresponding /element is seen in document. This is not going to
 work for deferred scripts, but maybe that is ok.

 For elements that are imported, we have a nice delineation, since we
 explicitly process each import in order, so no problems there.

 What do you think?

 :DG







Re: [webcomponents]: <element> Wars: A New Hope

2013-04-17 Thread Daniel Buchner
Thanks for the confirmation Scott. This question was never about
nomenclature - it means folks looking to polyfill define(), need to 'save'
the tagName/object associations internally for the third quantum of time,
at which point 'upgrade' code leverages it to make shiny 'x-foo' elements.

Snark-free confirmation/collaboration FTW!


On Wed, Apr 17, 2013 at 4:49 PM, Scott Miles sjmi...@google.com wrote:

 The key concept is that, to avoid timing issues, neither processing
 element nor evaluating script[function-to-be-named-later]/script are
 the terminal point for defining an element.

 Rather, at some third quantum of time a combination of those things is
 constructed, keyed on 'element name'.

 Most of the rest is syntax, subject to bikeshedding when and if the main
 idea has taken root.


 On Wed, Apr 17, 2013 at 4:33 PM, Daniel Buchner dan...@mozilla.comwrote:

 So let me be *crystal clear*:

 If define() internally does this -- When the registration line comes,
 the browser-thing matches element instances and supplied property objects
 by custom element names, uses them to create prototypes, and then calls
 document.register with respective custom element name and prototype as
 arguments. - it's doing a hell-of-a-lot more than simply redirecting to
 Object.create - in fact, I was thinking it would need to do this:

- Retain all tagName-keyed property descriptors passed to it on a
common look-up object
- Interact with the portion of the system that handles assessment of
the registration line, and whether it has been crossed
- and if called sometime after the registration line has been
crossed, immediately invokes code that upgrades all in-DOM elements
matching the tagName provided

 I could be mistaken - but my interest is valid, because if true I would
 need to polyfill the above detailed items, vs writing something as simple
 and derpish as: HTMLElementElement.prototype.define = ...alias to
 Object.create...

 Dimitri, Scott can you let me know if that sounds right, for polyfill
 sake?

 On Wed, Apr 17, 2013 at 4:11 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 *This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.
 *
 Is another difference that the 'x-foo' doesn't have to be 'known' yet?
 It seems to be a bit more than a repack of Object.defineProperties to me.


 I'm sorry if I was unclear, but my comments weren't subjective, nor was
 I looking for feedback.

 Looks like Dimitri agrees:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0306.html

 Rick





 On Wed, Apr 17, 2013 at 3:53 PM, Rick Waldron 
 waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:16 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more on that in a moment), which means that both element and
 script can contribute to defining the same custom element.

 With that in mind, we take Scott's/Allen's excellent idea and twist it
 up a bit. We invent a HTMLElementElement.define method (name TBD),
 which takes two arguments: a custom element name, and an object. I
 know folks will cringe, but I am thinking of an Object.create
 properties object:


 They are called Property Descriptors.




 HTMLElementElement.define('x-foo', {
   erhmahgerd: { writable: false, value: 'BOOKS!' }
 });


 This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.


 Rick





 When the registration line comes, the browser-thing matches element
 instances and supplied property objects by custom element names, uses
 them to create prototypes, and then calls document.register with
 respective custom element name and prototype as arguments.

 We now have a working declarative syntax that doesn't hack script,
 is ES6-module-friendly, and still lets Scott build his tacos. Sounds
 like a win to me. I wonder how Object.create properties object and
 Class syntax could mesh better. I am sure ES6 Classes peeps will have
 ideas here.

 So... When is the registration line? Clearly, by the time the parser
 finishes with the document, we're too late.

 We have several choices. We could

Re: [webcomponents]: <element> Wars: A New Hope

2013-04-17 Thread Daniel Buchner
@Rick *I didn't say Object.create, I said Object.defineProperties* -
Object.create
was a typo I didn't intend, thank you for catching that, *you're very
helpful*. You misinterpreted what I was alluding to with the
HTMLElementElement stuff, but that's not important now.

*I think you should get your Object meta APIs straight before having
discussions about them* - as it happens, I'm well acquainted with the
now-popular native methods and abstractions making their way through TC39
and the JS community. When we developed many of them for MooTools more than
half a decade ago, we always knew the rest of the web would *jump on the
bandwagon* with more advanced JS methods, abstractions, and native
extensions, *eventually*.

I must apologize, it wasn't immediately obvious to me that a nondescript,
passing mention of the word repackaging was actually a brilliant inference
to an extremely specific list of things implementers and polyfillers would
need to do beyond simply defining a property object - *my bad*. I really
just wanted to make sure I had that list of things right. Alas, I can't
read one word like repackaging, and use the Force to decrypt such
strikingly specific detail from it - but I guess that's what separates us
less-intelligent simpletons from Jedi like yourself.

Have a wonderful evening Rick.


On Wed, Apr 17, 2013 at 8:46 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 7:33 PM, Daniel Buchner dan...@mozilla.comwrote:

 So let me be *crystal clear*:

 If define() internally does this -- When the registration line comes,
 the browser-thing matches element instances and supplied property objects
 by custom element names, uses them to create prototypes, and then calls
 document.register with respective custom element name and prototype as
 arguments. - it's doing a hell-of-a-lot more than simply redirecting to
 Object.create - in fact, I was thinking it would need to do this:


 I didn't say Object.create, I said Object.defineProperties. I think you
 should get your Object meta APIs straight before having discussions about
 them.



- Retain all tagName-keyed property descriptors passed to it on a
common look-up object
- Interact with the portion of the system that handles assessment of
the registration line, and whether it has been crossed
- and if called sometime after the registration line has been
crossed, immediately invokes code that upgrades all in-DOM elements
matching the tagName provided

 I could be mistaken - but my interest is valid, because if true I would
 need to polyfill the above detailed items, vs writing something as simple
 and derpish as: HTMLElementElement.prototype.define = ...alias to
 Object.create...

 This appears to move the previously static HTMLFooElement.define( tag,
 descriptor ) method and now pollutes the prototype with a method of the
 same name? Why? Do you actually want every instance of HTMLFooElement to be
 able to call a define() method over and over again? I assumed this was a
 once-per-element operation...

 At any rate, of course it's not as simple as an alias assignment and no
 one suggested that—perhaps you're not familiar with the definition of the
 word repackage? It means to alter or remake, usually to make something
 more appealing. Sounds about right, doesn't it? I think you should be
 careful of what you refer to as derpish.



 Rick



  Dimitri, Scott can you let me know if that sounds right, for polyfill
 sake?

 On Wed, Apr 17, 2013 at 4:11 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 *This is just a repackaging of Object.defineProperties( target,
 PropertyDescriptors ) thats slightly less obvious because the target
 appears to be a string.
 *
 Is another difference that the 'x-foo' doesn't have to be 'known' yet?
 It seems to be a bit more than a repack of Object.defineProperties to me.


 I'm sorry if I was unclear, but my comments weren't subjective, nor was
 I looking for feedback.

 Looks like Dimitri agrees:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0306.html

 Rick





 On Wed, Apr 17, 2013 at 3:53 PM, Rick Waldron 
 waldron.r...@gmail.comwrote:




 On Wed, Apr 17, 2013 at 6:16 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-16 Thread Daniel Buchner
*I am going to offer a cop-out option: maybe we simply don't offer
imperative syntax as part of the spec?*
Why would we do this if the imperative syntax is solid, nicely
compatible, and relatively uncontentious? Did you mean to say declarative?

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Tue, Apr 16, 2013 at 2:56 PM, Dimitri Glazkov dglaz...@google.comwrote:

 Wow. What a thread. I look away for a day, and this magic beanstalk is all
 the way to the clouds.

 I am happy to see that all newcomers are now up to speed. I am heartened
 to recognize the same WTFs and grumbling that we went through along the
 path. I feel your pain -- I've been there myself. As Hixie once told me
 (paraphrasing, can't remember exact words), All the good choices have been
 made. We're only left with terrible ones.

 I could be wrong (please correct me), but we didn't birth any new ideas so
 far, now that everyone has caught up with the constraints.

 The good news is that the imperative syntax is solid. It's nicely
 compatible with ES6, ES3/5, and can be even used to make built-in HTML
 elements (modulo security/isolation problem, which we shouldn't tackle
 here).

 I am going to offer a cop-out option: maybe we simply don't offer
 imperative syntax as part of the spec? Should we let libraries/frameworks
 build their own custom elements (with opinion and flair) to implement
 declarative syntax systems?

 :DG



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-16 Thread Daniel Buchner
One thing I've heard from many of our in-house developers, is that they
prefer the imperative syntax, with one caveat: we provide an easy way to
allow components import/require/rely-upon other components. This could
obviously be done using ES6 Modules, but is there anything we can do to
address that use case for the web of today?
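
For the ES6 Modules route, a sketch of what component-to-component
dependencies could look like (module paths and element names invented for
illustration; document.register per the then-current draft):

// x-tab-panel.js - depends on x-tab being defined first
import './x-tab.js';  // registering <x-tab> is a side effect of loading that module

var proto = Object.create(HTMLElement.prototype);
proto.readyCallback = function () {
  // per-instance setup; <x-tab> is guaranteed to be registered by now
};
document.register('x-tab-panel', { prototype: proto });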


On Tue, Apr 16, 2013 at 3:02 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Tue, Apr 16, 2013 at 3:00 PM, Daniel Buchner dan...@mozilla.com
 wrote:
  I am going to offer a cop-out option: maybe we simply don't offer
  imperative syntax as part of the spec?
 
  Why would we do this if the imperative syntax is solid, nicely
  compatible, and relatively uncontentious? Did you mean to say
 declarative?

 DERP. Yes, thank you Daniel. I mean to say:

 I am going to offer a cop-out option: maybe we simply don't offer
 DECLARATIVE syntax as part of the spec? Should we let
 libraries/frameworks build their own custom elements (with opinion and
 flair) to implement declarative syntax systems?


 :DG



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-16 Thread Daniel Buchner
*Deferring just the script features of element would help with the
timing and probably allow a better long term solution to be designed.*

If the callbacks are not mutable or become inert after registration (as I
believe was the case), how would a developer do this -- *Imperative code
could presumably make that association, if it needed to.*

On Tue, Apr 16, 2013 at 3:47 PM, Allen Wirfs-Brock al...@wirfs-brock.comwrote:


 On Apr 16, 2013, at 3:13 PM, Dimitri Glazkov wrote:

  On Tue, Apr 16, 2013 at 3:07 PM, Daniel Buchner dan...@mozilla.com
 wrote:
  One thing I've heard from many of our in-house developers, is that they
  prefer the imperative syntax, with one caveat: we provide an easy way to
  allow components import/require/rely-upon other components. This could
  obviously be done using ES6 Modules, but is there anything we can do to
  address that use case for the web of today?
 
  Yes, one key ability we lose here is the declarative quality -- with
  the declarative syntax, you don't have to run script in order to
  comprehend what custom elements could be used by a document.


 My sense is that the issues of concern (at least on this thread) with
 declaratively defining custom elements all related to how custom behavior
 (ie, script stuff) is declaratively associated. I'm not aware (but also not
 very familiar) with similar issues relating to <template> and other
 possible <element> sub-elements.  I also imagine that there is probably a set
 of use cases that don't actually need any custom behavior.

 That suggests to me, that a possible middle ground, for now,  is to  still
 have declarative custom element definitions but don't provide any
 declarative mechanism for associating script with them.  Imperative code
 could presumably make that association, if it needed to.

 I've been primarily concerned about approaches that would be future
 hostile toward the use of applicable ES features that are emerging.  I
 think we'll be see those features in browsers within the next 12 months.
 Deferring just the script features of element would help with the timing
 and probably allow a better long term solution to be designed.

 Allen


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Daniel Buchner
*At least somebody explain why this is conceptually wrong.*
Nothing is conceptually wrong with what you've stated Scott. I could live
with a <prototype> element that is scoped to the <element> as long as there
was some sort of 'registeredCallback' (as you previously alluded to many
posts back) that would be executed with access to the global scope just
once before the 'DOMElementsUpgraded' event fired. This preserves the
ability to set up, with minimal effort and hackery, delegate listeners and
other useful top-level scaffolding a component may require. Is this pretty
or optimal in terms of developer ergonomics? No - but it meets the
requirements.
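
A sketch of the kind of run-once-per-component scaffolding I mean
(registeredCallback and DOMElementsUpgraded are the names floated in this
thread, not settled spec; the toggle() method is illustrative):

// Runs once per component definition, in global scope, before a
// 'DOMElementsUpgraded'-style event fires:
function registeredCallback() {
  // one delegate listener covers every current and future <x-foo> instance
  document.addEventListener('click', function (event) {
    if (event.target.localName === 'x-foo') {
      event.target.toggle();  // toggle() would be defined on the x-foo prototype
    }
  });
}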

For what it's worth: I talked to Brendan about this quandary and he favored
the creation of a special tag specifically for defining a component's
interface.


On Mon, Apr 15, 2013 at 12:46 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Mon, Apr 15, 2013 at 11:59 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/15/13 10:45 AM, Rick Waldron wrote:

 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement, it throws with an illegal
 constructor in Chrome and HTMLButtonElement is not a constructor in
 Firefox (I'm sure this is the same across other browsers)


 Oh, I see.  That's not anything inherent, for what it's worth; making
 this particular case work would be 10 lines of code.  Less on a
 per-element basis if we want to do this for most elements.


  function Smile() {
    HTMLButtonElement.call(this);
    this.textContent = ":)";
  }

  Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Ah, so... This would not work even if new HTMLButtonElement worked,
 right?


 I guess I assumed this would work if new HTMLButtonElement() could
 construct things


 In particular, if HTMLButtonElement were actually something that could
 construct things in Gecko, it would still ignore its argument when called
 and always creates a new object.  You can see the behavior with something
 like XMLHttpRequest if you want.


 What I was expecting the above to produce is a constructor capable of
 something like this:

   var smile = new Smile();
   smile.nodeType === 1;
   smile.outerHTML === "<button>:)</button>"; // true
   // (and so forth)
   document.body.appendChild(smile);

 results in something like this: http://gul.ly/de0


 Rick




  Hopefully that clarifies?


 Somewhat.  Trying to understand what things we really need to support
 here and in what ways, long-term...

 -Boris





Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Daniel Buchner
*How, as component author, do I ensure that my imperative set up code runs
and modifies my element DOM content before the user sees the un-modified
custom element declared in mark-up? (I'm cheating, since this issue isn't
specific to your prototype) *

When you say ...runs before the user sees the unmodified..., what do you
mean? The 'readyCallback' is the *run-once-per-instance* imperative code
associated with a component, and would always run *after *something like
'registeredCallback*', *the *run-once-per-component* imperative
code...unless we are talking about different things? As far as what the
user visually sees, there is an understanding that a render may happen
wherein components are not yet inflated/upgraded - in this instance, the
developer should create some styles to account for the non-upgraded state.
This has been the thinking all along btw.


On Mon, Apr 15, 2013 at 1:46 PM, John J Barton
johnjbar...@johnjbarton.comwrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code runs
 and modifies my element DOM content before the user sees the un-modified
 custom element declared in mark-up? (I'm cheating, since this issue isn't
 specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.com wrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 anymore than anybody else, but I can't help thinking if there was a way to
 express a prototype without script 98% of this goes away.

 The parser can generate an object with the correct prototype, we can run
 init code directly after parsing, there are no 'this' issues or problems
 associating element with script.

 At least somebody explain why this is conceptually wrong.


 On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.com wrote:

   1) call 'init' when component instance tag is encountered, blocking
 parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.comwrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered, blocking
 parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as far
 as I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on
 their own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an <x-foo> instance in its markup, and <element> tags or <link
 rel=import> just like the main document.


 Indeed, however the relative order of the component's script tag
 processing and the component's tag element is all I was talking about.




 On Mon, Apr 15, 2013 at 11:23 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.comwrote:

 Dimitri is trying to avoid 'block[ing] instance construction'
 because instances can be in the main document markup.


 Yes we sure hope so!



 The main document can have a bunch of markup for custom elements. If
 the user has made element definitions a-priori to parsing that markup
 (including inside link rel='import'), he expects those nodes to be 
 'born'
 correctly.


 Sure.




 Sidebar: running user's instance code while the parser is
 constructing the tree is Bad(tm) so we already have deferred init code
 until immediately after the parsing step. This is why I keep saying
 'ready-time' is different from 'construct-time'.


 ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?



 Today, I don't see how we can construct a custom element with the
 right prototype at parse-time without blocking on imported scripts 
 (which
 is another side-effect of using script execution for defining prototype,
 btw.)


 You must block creating instances of components until component
 documents are parsed and 

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Daniel Buchner
*Gee, that's not very encouraging: this is the most important kind of issue
for a developer, more so than whether the API is inheritance-like or not.*

IMO, the not-yet-upgraded case is nothing new, and developers will hardly
be surprised. This nit is no different than if devs include a jQuery plugin
script at the bottom of the body that 'upgrades' various elements on the
page after render - basically, it's an unfortunate case of That's Just Life™
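
(By analogy, something like the familiar pattern below - assumes jQuery and
a datepicker-style plugin are loaded; the selector and plugin name are
illustrative:)

// at the bottom of the body, after the page has already rendered:
$(function () {
  $('input.datepicker').datepicker();  // elements sit un-enhanced until this runs
});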


Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Mon, Apr 15, 2013 at 2:23 PM, John J Barton
johnjbar...@johnjbarton.comwrote:




 On Mon, Apr 15, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

  What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 IIRC it's not possible to override methods that will be called from
 inside of builtins, so I don't believe this is an issue (unless we change
 the playfield).


 Ugh. So we can override some methods but not others, depending on the
 implementation?

 So really these methods are more like callbacks with a funky kind of
 registration. It's not like inheriting and overriding, it's like onLoad
 implemented with an inheritance-like wording.  An API users doesn't think
 like an object, rather they ask the Internet some HowTo questions and get
 a recipe for a particular function override.

 Ok, I'm exaggerating, but I still think the emphasis on inheritance in the
 face of so me is a high tax on this problem.




  How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)

 This is another can of worms. Right now we blanket solve this by waiting
 for an 'all clear' event (also being discussed, 'DOMComponentsReady' or
 something) and handling this appropriately for our application.


 Gee, that's not very encouraging: this is the most important kind of issue
 for a developer, more so than whether the API is inheritance-like or not.





 On Mon, Apr 15, 2013 at 1:46 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.comwrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 anymore than anybody else, but I can't help thinking if there was a way to
 express a prototype without script 98% of this goes away.

 The parser can generate an object with the correct prototype, we can
 run init code directly after parsing, there are no 'this' issues or
 problems associating element with script.

 At least somebody explain why this is conceptually wrong.


 On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.comwrote:

   1) call 'init' when component instance tag is encountered, blocking
 parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.comwrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 
 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered,
 blocking parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as
 far as I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on
 their own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in it's markup, and element tags or link
 rel=import just like the main document.


 Indeed, however the relative order of the component's script tag

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Daniel Buchner
Would this be possible using Scott and Brendan's idea of using a
specific element for prototyping? --
https://gist.github.com/csuwldcat/5392291


On Mon, Apr 15, 2013 at 3:33 PM, John J Barton
johnjbar...@johnjbarton.comwrote:

 I think that rendering a placeholder (eg blank image) then filling it in
 rather than blocking is good if done well (eg images with pre-allocated
 space). Otherwise it's bad but less bad than blocking ;-).

 But if you allow this implementation, then this whole discussion confuses
 me even more. I'm thinking: If you don't need the custom constructors
 during parsing, just wait for them to arrive, then call them. Something
 else is going on I suppose, so I'm just wasting your time.


 On Mon, Apr 15, 2013 at 2:42 PM, Daniel Buchner dan...@mozilla.comwrote:

 *Gee, that's not very encouraging: this is the most important kind of
 issue for a developer, more so than whether the API is inheritance-like or
 not.*

 IMO, the not-yet-upgraded case is nothing new, and developers will hardly
 be surprised. This nit is no different than if devs include a jQuery plugin
 script at the bottom of the body that 'upgrades' various elements on the
 page after render - basically, it's an unfortunate case of That's Just Life™


 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Apr 15, 2013 at 2:23 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

  What happens if the construction/initialization of the custom
 element calls one of the element's member functions overridden by code in a
 prototype?

 IIRC it's not possible to override methods that will be called from
 inside of builtins, so I don't believe this is an issue (unless we change
 the playfield).


 Ugh. So we can override some methods but not others, depending on the
 implementation?

 So really these methods are more like callbacks with a funky kind of
 registration. It's not like inheriting and overriding, it's like onLoad
 implemented with an inheritance-like wording.  An API user doesn't think
 like an object; rather they ask the Internet some HowTo questions and get
 a recipe for a particular function override.

 Ok, I'm exaggerating, but I still think the emphasis on inheritance in
 the face of so me is a high tax on this problem.




  How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)

 This is another can of worms. Right now we blanket solve this by
 waiting for an 'all clear' event (also being discussed,
 'DOMComponentsReady' or something) and handling this appropriately for our
 application.
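
 A minimal sketch of that "all clear" approach; the event name
 'DOMComponentsReady' was still under discussion, and the 'unresolved'
 attribute here is just a hypothetical way of hiding not-yet-upgraded content:

   document.addEventListener('DOMComponentsReady', function () {
     // all custom elements are assumed to be upgraded at this point
     document.body.removeAttribute('unresolved'); // reveal the UI
   });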


 Gee, that's not very encouraging: this is the most important kind of
 issue for a developer, more so than whether the API is inheritance-like or
 not.





 On Mon, Apr 15, 2013 at 1:46 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.comwrote:

  Sorry for beating this horse, because I don't like a 'prototype'
  element any more than anybody else, but I can't help thinking that if there
  was a way to express a prototype without script, 98% of this goes away.

 The parser can generate an object with the correct prototype, we can
 run init code directly after parsing, there are no 'this' issues or
 problems associating element with script.

 At least somebody explain why this is conceptually wrong.


 On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.comwrote:

   1) call 'init' when component instance tag is encountered,
 blocking parsing,

 Fwiw, it was said that calling user code from inside the Parser
 could cause Armageddon, not just block the parser. I don't recall the
 details, unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles 
 sjmi...@google.comwrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

  My 'x-foo' has an 'init' method that I wrote that has to execute
  before the instance is fully 'constructed'. Parser encounters an
  <x-foo></x-foo> and constructs it. My understanding is that calling 'init'
  from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Daniel Buchner
* Here are four ways to avoid the subclassing problem for custom elements
*
* 1)  Only allow instances of custom DOM elements to be instantiated
using document.createElement('x-foo').
*
*
*
*Wearing web developer hat, I never make elements any other way than
createElement (or HTML), so this would be standard operating procedure, so
that's all good if we can get buy in.*

As long as the above supports all other DOM element creation vectors
(innerHTML, outerHTML, etc.), then this is fine. Practically speaking, if it
so happened that custom elements could *never* be instantiated with
constructors, developers on the web today wouldn't shed a tear - they use
doc.createElement(), not constructors --
https://docs.google.com/forms/d/16cNqHRe-7CFRHRVcFo94U6tIYnohEpj7NZhY02ejiXQ/viewanalytics
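
For illustration, the creation vectors listed above might look like this
('x-foo' is a hypothetical, already-registered custom element):

  var a = document.createElement('x-foo');   // imperative creation

  var host = document.createElement('div');
  host.innerHTML = '<x-foo></x-foo>';        // parser-driven creation
  var b = host.firstChild;

  // new XFoo() is the vector that may never exist; per the survey above,
  // few developers would miss it.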

-

* Alex Russell has been advocating that WebIDL should allow
constructor-like interfaces*
*
*
*Absolutely agree. But these are horns of this dilemma.*
*
*
* #4 has been accepted for ES6 by all TC39 participants*
*
*
*Yes, I believe this is a timing issue. I am told it will be a long time
before #4 is practical.*

Yes, it will be a long time, especially for IE9 and 10 (read: never), which
are support targets for custom element polyfills. Reliance on anything that
is optional or future should be avoided for the custom element base case.
Right now the polyfills for document.register(), and a few of the
declarative proposals, can give developers these awesome APIs today -
please, do not imperil this.


On Sun, Apr 14, 2013 at 12:22 PM, Scott Miles sjmi...@google.com wrote:

 errata: XFooPrototype = Object.create(HTMLElement.prototype, {


 On Sun, Apr 14, 2013 at 12:21 PM, Scott Miles sjmi...@google.com wrote:

  Alex Russell have been advocating that WebIDL should be allow
 constructor-like interfaces

 Absolutely agree. But these are horns of this dilemma.

  #4 has been accepted for ES6 by all TC39 participants

 Yes, I believe this is a timing issue. I am told it will be a long time
 before #4 is practical.

 Gecko and Blink have already landed forms of 'document.register', to wit:

 Grail-shaped version of document.register:

 // use whatever 'class-like' thing you want

  class XFoo extends HTMLElement {
    constructor() {
      super();
      this.textContent = 'XFoo Ftw';
    }
  }
  document.register('x-foo', XFoo);


 But since (today!) we cannot extend HTMLElement et al this way, the
 landed implementations use:

    // prototype only

  XFooPrototype = Object.create(HTMLElement, {
    readyCallback: {
      value: function() { // we invented this for constructor-like semantics
        super(); // some way of doing this
      }
    }
  });

  // capture the constructor if you care

  [XFoo =] document.register('x-foo', {prototype: XFooPrototype});


  Which for convenience, I like to write this way (but there are footguns):

  class XFooThunk extends HTMLElement {
    // any constructor here will be ignored
    readyCallback() { // we invented this for constructor-like semantics
      super();
    }
  }

  // capture the real constructor if you care

  [XFoo =] document.register('x-foo', XFooThunk);



 On Sun, Apr 14, 2013 at 12:07 PM, Allen Wirfs-Brock 
 al...@wirfs-brock.com wrote:


 On Apr 14, 2013, at 11:40 AM, Scott Miles wrote:

  Here are four ways to avoid the subclassing problem for custom
 elements
  1)  Only allow instances of custom DOM elements to be instantiated
 using document.createElement('x-foo').

 Wearing web developer hat, I never make elements any other way than
 createElement (or HTML), so this would be standard operating procedure, so
 that's all good if we can get buy in.


 However, I believe that some people such as Alex Russell have been
 advocating that WebIDL should allow constructor-like interfaces to
 support expressions such as:
    new HTMLWhateverElement()

 It would be future-hostile to make that impossible, but support could
 reasonably wait for ES6 support.


  2, 3, 4

 I believe these have been suggested in one form or another but, as I
 mentioned, were determined to be non-starters for Gecko. I don't think
 we've heard anything from the IE team.


 Well, #4 has been accepted for ES6 by all TC39 participants, including
 Mozilla and Microsoft, and is going to happen.  The basic scheme was
 actually suggested by a member of the SpiderMonkey team, so I'm sure we'll
 get it worked out for Gecko.

 Allen







Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-12 Thread Daniel Buchner
@Scott - interesting, but there are valid reasons for having access to the
global scope/document:

   - Components that benefit from top-level delegation
   - Components that need to analyze their destination environment before
   codifying their definition

Know what I mean?
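
Rough sketches of the two cases above, just to make the use of global
scope/document concrete (the element names and checks are hypothetical):

  // 1) top-level delegation: one document-level listener serves every instance
  document.addEventListener('click', function (event) {
    if (event.target.tagName === 'X-MENU-ITEM') {
      // handle activation for any x-menu-item on the page
    }
  });

  // 2) analyzing the destination environment before codifying the definition
  var hasTemplates = 'content' in document.createElement('template');
  // ...the element definition could branch on capability checks like this one.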

On Fri, Apr 12, 2013 at 2:25 PM, Scott Miles sjmi...@google.com wrote:

 I realize this doesn't fit any existing conceptual model (that I know of)
 but I think it's worth pointing out that all we really want to do is define
 a prototype for the element (as opposed to running arbitrary script).

 Invented pseudo-code (not a proposal, just trying to identify a mental
 model):

  <element name="x-foo" extends="something">
    <!-- prototype for markup -->
    <template>
    </template>
    <!-- prototype for instances -->
    <prototype>
      readyCallback: function() {
      },
      someApi: function() {
      },
      someProperty: null
    </prototype>
  </element>


 On Fri, Apr 12, 2013 at 2:13 PM, Erik Arvidsson a...@chromium.org wrote:

 Daniel, what happens in this case?

  <element name="x-foo">
    <script>
      class XFoo extends SVGElement {
      }
    </script>
  </element>

  This points out 2 shortcomings with your proposal.

  1. This would just replace the global constructor. That might be OK, and
  we can detect this override to register the class as needed.
  2. Prefixing with HTML seems like an anti-pattern.


 On Fri, Apr 12, 2013 at 5:08 PM, Daniel Buchner dan...@mozilla.comwrote:

 @John - what about what I just sent through? It hops over the magical
 rebinding issue (or so I think), your thoughts?


 On Fri, Apr 12, 2013 at 2:06 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:


 Some suggestions:

 On Fri, Apr 12, 2013 at 12:30 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 ... or How the heck do we initialize custom elements in declarative
 syntax?

 There were good questions raised about the nature of script element
 in the platonic form thread. Consider this syntax:

  <element name="foo-bar">


  <element constructor="FooBar">


  <script>...</script>
  <template> ... </template>
  </element>

  The way <element> should work is like this:
  a) when </element> is seen
  b) generate a constructor for this element


 b) call the nominated ctor, new FooBar(elt).

  I'm unclear on the practical advantages of the instance inheriting from
  HTMLElement (new FooBar(), prototype inherits) vs manipulating the element
  from the outside (new FooBar(elt), prototype Object).
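
  To make the contrast concrete, a sketch of the two shapes (all names
  hypothetical):

    // (a) the instance *is* the element; the prototype chain includes HTMLElement
    function FooBarElement() {}                 // generated constructor
    FooBarElement.prototype = Object.create(HTMLElement.prototype);
    FooBarElement.prototype.refresh = function () { this.textContent = 'hi'; };

    // (b) the element is manipulated from the outside; FooBar is a plain wrapper
    function FooBar(elt) { this.elt = elt; }
    FooBar.prototype.refresh = function () { this.elt.textContent = 'hi'; };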



 b) run document.register
 c) run initialization code

  As I see it, the problem is twofold:

  1) The script element timing is weird. Since script is
  initialization code, it has to run after the </element> is seen. This
  is already contrary to typical script element expectations.


 Why? new FooBar() has to be called, but the outer init is anytime
 before that.



 2) The script element needs a way to refer to the custom element
 prototype it is initializing. Enclosing it in a function and calling
 it with element as |this| seemed like a simplest thing to do, but
 Rick and John had allergic reactions and had to be hospitalized.


  For me it's the implicit declarative binding + wiring into 'this' that
  makes the original solution very magical and inflexible. I can understand
  ensuring that a component can be self-contained, but cannot understand why
  it needs an ordered and hierarchical structure when script isn't rendered.



 So far, I haven't seen any other workable alternatives. TC39 peeps and
 others, help me find them.


 Thanks for asking ;-)



 :DG






 --
 erik






Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-12 Thread Daniel Buchner
On Fri, Apr 12, 2013 at 3:20 PM, Dimitri Glazkov dglaz...@google.comwrote:

 2) since we do have to live with generated constructors -- at least
 for a little while, we have two decoupled operations:

 a) create prototype
 b) generate constructor.


Is this order required? Can we not generate the constructor, then let the
developer modify it using the same means they have had since the beginning of
time - or, if *they choose*, use slick ES6 syntax?


Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-11 Thread Daniel Buchner
*Shouldn't we prevent such a thing? I can't redefine a button's
template.  There should be some guarantee I'm getting the same x-foo
(API, look and feel) after it's been registered. What's the use case for
swapping in a new template?*

We've come across various occasions where we have a custom element, let's
say it's an x-todo, that has multiple content/view states (not something
achievable by a simple class/style change). There may be 200 of these
elements in a summary view in the list panel of the page in question.
When the user performs an action that requires an x-todo in summary view
to display a more complex, detail view, we've found that the easiest way
to reuse elements, make the change in the least amount of code, and avoid
the cumbersome and repetitive paradigm of creating specific
x-todo-summary and x-todo-detail elements for this purpose, is to
simply have two different template elements for each view type. The views
can be radically different in structure and visual style - yet thanks to
our ability to simply swap out the template association on the specific
x-todo in question, the whole thing is a snap: myTodoElement.template =
'detail-template';
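
A minimal sketch of that swap; 'summary-template' and 'detail-template' are
hypothetical template IDs, and the 'template' property is the proposed
association mechanism, not a shipped API:

  var todo = document.querySelector('x-todo[data-id="42"]');
  todo.template = 'detail-template'; // re-stamp just this instance as the detail view
  // every other x-todo on the page keeps using 'summary-template'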

Does this help clarify a bit?



On Wed, Apr 10, 2013 at 10:05 PM, Eric Bidelman ericbidel...@google.comwrote:

  Have to lean towards Raf and Daniel on this one. Making element
  registration a concern of template doesn't feel right. In this case, explicit
  structure and a few more characters is worth it.

 On Wed, Apr 10, 2013 at 9:00 PM, Daniel Buchner dan...@mozilla.comwrote:

 It's incredibly important that we agree that association of a template
 with element happens on the element side, something like: element
 template=foo-template (or by placing the template inside element, if
 that is the API we want). I don't think this part is opinion, but because
 doing the reverse - marking on the template which element it refers to
 - hinders a few valid use-cases:

- one template from being used by many different elements
- changing a template association on a single instance of an element
type
   - say you have an x-foo on the page that you want to switch
   template associations on, but not for every other x-foo in the 
 document.
   Wouldn't this case be far more clear cut if you could just query for 
 the
   element and change some property? For instance: fooElement.template =
   'foo-template-2'; Boom! This particular foo element just switched 
 templates.

 Shouldn't we prevent such a thing? I can't redefine a button's
 template.  There should be some guarantee I'm getting the same x-foo
 (API, look and feel) after it's been registered. What's the use case for
 swapping in a new template?


-



 On Wed, Apr 10, 2013 at 8:19 PM, Daniel Buchner dan...@mozilla.comwrote:

 Here are a few (compelling?) answers/arguments:

1. Style elements had never done this before, yet it rocks socks:
style scoped
2. It would be new for script elements, but hardly new for other
elements. There are plenty of elements that have various behaviors or
visual representations only when placed inside specific elements. Given
this is already an advanced web API, I'm not sure a little upfront 
 learning
is a huge concern. We could even allow for this, given the paradigm is
already established: script scoped  *// could scope 'this' ref to
the parentNode*
3. Are you referring to template attachment here? If so, I agree,
thus the proposal I submitted allows for both (
https://gist.github.com/csuwldcat/5360471). If you want your
template automatically associated with your element, put it inside, if
not, you can specify which template a custom element should use by
reference to its ID.


 On Wed, Apr 10, 2013 at 8:00 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Wed, Apr 10, 2013 at 6:51 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 On Wed, Apr 10, 2013 at 6:38 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
  Everyone's answer to this should be no; changing the expected
 value of the
  top level this, in some magical way, simply won't work.

 Can you explain why you feel this way?


 1) Because script has never done this before, so it better be
 compelling.
 2) Because causing |this| to change by moving the script tag in the
 HTML or adding a layer of elements etc seems likely to cause hard to
 understand bugs.
 3) Forcing the binding based on position is inflexible.

 To be sure this is implicit-declarative vs explicit-imperative bias,
 not evidence.

 Oh, sorry you were asking Rick.
  jjb







Re: [webcomponents]: Blocking custom elements on ES6, was: Platonic form of custom elements declarative syntax

2013-04-11 Thread Daniel Buchner
Err...polyfill doesn't mean you can do something via a different route that
produces a similar outcome; it means you backfill the API so that you can
use 1:1 syntax that never needs to know whether the environment supports the
native underlying API or not.
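
In other words, a minimal sketch of the 1:1 back-fill idea (polyfillRegister
is a hypothetical internal helper standing in for whatever the polyfill
actually does):

  if (!('register' in document)) {
    document.register = function (name, options) {
      // ...recreate the native behavior here...
      return polyfillRegister(name, options);
    };
  }

  // calling code is identical whether the native API exists or not
  var XFoo = document.register('x-foo', {
    prototype: Object.create(HTMLElement.prototype)
  });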

I don't mean to disparage ES6 - as a former developer of the MooTools
framework (the most well recognized Class-y JS lib), I have a special
affinity for seeing a great Class abstraction land in JS. However, there's
no reason to hitch our wagon to ES6 and a pseudo-not-quite-fill. If people
want to use ES6 for defining Custom Elements, they can start doing so when
it is available in their target environment.




Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Apr 11, 2013 at 11:10 AM, Allen Wirfs-Brock
al...@wirfs-brock.comwrote:


 On Apr 11, 2013, at 10:59 AM, Dimitri Glazkov wrote:

  Hello, TC39 peeps! I am happy to have you and your expertise here.
 
  On Wed, Apr 10, 2013 at 11:14 PM, Allen Wirfs-Brock
  al...@wirfs-brock.com wrote:
 
  This can all be expressed, but less clearly and concisely, using ES3/5
 syntax.  But since we are talking about a new HTML feature, I'd recommend
 being the first major HTML feature to embrace ES6 class syntax.  The class
 extensions in ES6 are quite stable and quite easy to implement.  I'm pretty
 sure they will begin appearing in browsers sometime in the next 6 months.
 If webcomponents takes a dependency upon them, it would probably further
 speed up their implementation.
 
  We simply can't do this :-\ I see the advantages, but the drawbacks of
  tangled timelines and just plain not being able to polyfill custom
  elements are overwhelming. Right now, there are at least two thriving
  polyfills for custom elements
  (https://github.com/toolkitchen/CustomElements and
  https://github.com/mozilla/web-components), and they contribute
  greatly by both informing the spec development and evangelizing the
  concepts with web developers.
 
  To state it simply: we must support both ES3/5 and ES6 for custom
 elements.
 
  :DG
 

 ES6 classes can be pollyfilled:

    class Sub extends Super {
      constructor() { /* constructor body */ }
      method1() {}
      static method2() {}
    }

 is:

  function Sub() {/*constructor body */ }
  Sub.__proto__ = Super;
  Sub.prototype = Object.create(Super.prototype);
  Sub.prototype.method1 = function method1() {};
  Sub.method2 = function method2 () {};

 Allen




Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Daniel Buchner
I have a counter-proposal that takes into account both the easy-to-declare,
1-to-1 case, as well as the 1-template-to-many-elements case:
https://gist.github.com/csuwldcat/5358039

I can explain the advantages a bit more in an hour or so, I just got pulled
into a meeting...le sigh.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Apr 10, 2013 at 12:40 PM, Scott Miles sjmi...@google.com wrote:

  No, strictly ergonomic. Less nesting and fewer characters (less nesting is
  more important IMO).

  I would also argue that there is less cognitive load on the author than
  the more explicit factoring, but I believe this is subjective.

 Scott


 On Wed, Apr 10, 2013 at 12:36 PM, Rafael Weinstein rafa...@google.comwrote:

 On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  Dear Webappsonites,
 
  There's been a ton of thinking on what the custom elements declarative
  syntax must look like. Here, I present something has near-ideal
  developer ergonomics at the expense of terrible sins in other areas.
  Consider it to be beacon, rather than a concrete proposal.
 
  First, let's cleanse your palate. Forget about the element element
  and what goes inside of it. Eat some parsley.
 
  == Templates Bound to Tags ==
 
  Instead, suppose you only have a template:
 
  template
  divYay!/div
  /template
 
  Templates are good for stamping things out, right? So let's invent a
  way to _bind_ a template to a _tag_. When the browser sees a tag to
  which the template is bound, it stamps the template out. Like so:
 
  1) Define a template and bind it to a tag name:
 
  template bindtotagname=my-yay
  divYay!/div
  /template
 
  2) Whenever my-yay is seen by the parser or
  createElement/NS(my-yay) is called, the template is stamped out to
  produce:
 
  my-yay
  divYay!/div
  /my-yay
 
  Cool! This is immediately useful for web developers. They can
  transform any markup into something they can use.
 
  Behind the scenes: the presence of boundtotagname triggers a call to
  document.register, and the argument is a browser-generated prototype
  object whose readyCallback takes the template and appends it to
  this.
 
  == Organic Shadow Trees  ==
 
  But what if they also wanted to employ encapsulation boundaries,
  leaving initial markup structure intact? No problem, much-maligned
  shadowroot to the rescue:
 
  1) Define a template with a shadow tree and bind it to a tag name:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  /template
 
  2) For each my-yay created, the template is stamped out to create a
  shadow root and populate it.
 
  Super-cool! Note, how the developer doesn't have to know anything
  about Shadow DOM to build custom elements (er, template-bound tags).
  Shadow trees are just an option.
 
  Behind the scenes: exactly the same as the first scenario.
 
  == Declarative Meets Imperative ==
 
  Now, the developer wants to add some APIs to my-yay. Sure, no problem:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  script runwhenbound
  // runs right after document.register is triggered
  this.register(ExactSyntaxTBD);
  script
  /template
 
  So-cool-it-hurts! We built a fully functional custom element, taking
  small steps from an extremely simple concept to the full-blown thing.
 
  In the process, we also saw a completely decoupled shadow DOM from
  custom elements in both imperative and declarative forms, achieving
  singularity. Well, or at least a high degree of consistence.
 
  == Problems ==
 
  There are severe issues.
 
  The shadowroot is turning out to be super-magical.
 
  The bindtotagname attribute will need to be also magical, to be
  consistent with how document.register could be used.
 
  The stamping out, after clearly specified, may raise eyebrows and
  turn out to be unintuitive.
 
  Templates are supposed to be inert, but the whole script
  runwhenbound thing is strongly negating this. There's probably more
  that I can't remember now.

 The following expresses the same semantics:

  <element tagname="my-yay">
    <template>
      <shadowroot>
        <div>Yay!</div>
      </shadowroot>
    </template>
    <script runwhenbound>
    </script>
  </element>

 I get that your proposal is fewer characters to type. Are there other
 advantages?

 
  == Plea ==
 
  However, I am hopeful that you smart folk will look at this, see the
  light, tweak the idea just a bit and hit the homerun. See the light,
  dammit!
 
  :DG





Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Daniel Buchner
*What about CSP that forbids inline
scripts?https://wiki.mozilla.org/Apps/Security#Default_CSP_policy
*

Is there any reason developers wouldn't just modify the script tag under
either method proposed to use src=link-to-non-inline-script to satisfy
CSP requirements? The proposal I submitted certainly doesn't exclude that
ability/use case (or so I thought - correct if wrong)


Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Apr 10, 2013 at 1:27 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 10, 2013 at 4:15 PM, Daniel Buchner dan...@mozilla.comwrote:

 I have a counter proposal that takes into a count both the
 easy-to-declare, 1-to-1 case, as well as the 1-template-to-many-elements
 case: https://gist.github.com/csuwldcat/5358039



 What about CSP that forbids inline scripts?

 https://wiki.mozilla.org/Apps/Security#Default_CSP_policy

 Rick




 I can explain the advantages a bit more in an hour or so, I just got
 pulled into a meeting...le sigh.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Apr 10, 2013 at 12:40 PM, Scott Miles sjmi...@google.com wrote:

 No, strictly ergonomic. Less nesting and less characters (less nesting
 is more important IMO).

 I would also argue that there is less cognitive load on the author then
 the more explicit factoring, but I believe this is subjective.

 Scott


 On Wed, Apr 10, 2013 at 12:36 PM, Rafael Weinstein 
 rafa...@google.comwrote:

 On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  Dear Webappsonites,
 
  There's been a ton of thinking on what the custom elements declarative
  syntax must look like. Here, I present something has near-ideal
  developer ergonomics at the expense of terrible sins in other areas.
  Consider it to be beacon, rather than a concrete proposal.
 
  First, let's cleanse your palate. Forget about the element element
  and what goes inside of it. Eat some parsley.
 
  == Templates Bound to Tags ==
 
  Instead, suppose you only have a template:
 
  template
  divYay!/div
  /template
 
  Templates are good for stamping things out, right? So let's invent a
  way to _bind_ a template to a _tag_. When the browser sees a tag to
  which the template is bound, it stamps the template out. Like so:
 
  1) Define a template and bind it to a tag name:
 
  template bindtotagname=my-yay
  divYay!/div
  /template
 
  2) Whenever my-yay is seen by the parser or
  createElement/NS(my-yay) is called, the template is stamped out to
  produce:
 
  my-yay
  divYay!/div
  /my-yay
 
  Cool! This is immediately useful for web developers. They can
  transform any markup into something they can use.
 
  Behind the scenes: the presence of boundtotagname triggers a call to
  document.register, and the argument is a browser-generated prototype
  object whose readyCallback takes the template and appends it to
  this.
 
  == Organic Shadow Trees  ==
 
  But what if they also wanted to employ encapsulation boundaries,
  leaving initial markup structure intact? No problem, much-maligned
  shadowroot to the rescue:
 
  1) Define a template with a shadow tree and bind it to a tag name:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  /template
 
  2) For each my-yay created, the template is stamped out to create a
  shadow root and populate it.
 
  Super-cool! Note, how the developer doesn't have to know anything
  about Shadow DOM to build custom elements (er, template-bound tags).
  Shadow trees are just an option.
 
  Behind the scenes: exactly the same as the first scenario.
 
  == Declarative Meets Imperative ==
 
  Now, the developer wants to add some APIs to my-yay. Sure, no
 problem:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  script runwhenbound
  // runs right after document.register is triggered
  this.register(ExactSyntaxTBD);
  script
  /template
 
  So-cool-it-hurts! We built a fully functional custom element, taking
  small steps from an extremely simple concept to the full-blown thing.
 
  In the process, we also saw a completely decoupled shadow DOM from
  custom elements in both imperative and declarative forms, achieving
  singularity. Well, or at least a high degree of consistence.
 
  == Problems ==
 
  There are severe issues.
 
  The shadowroot is turning out to be super-magical.
 
  The bindtotagname attribute will need to be also magical, to be
  consistent with how document.register could be used.
 
  The stamping out, after clearly specified, may raise eyebrows and
  turn out to be unintuitive.
 
  Templates are supposed to be inert, but the whole script
  runwhenbound thing is strongly negating this. There's probably more
  that I can't remember now.

 The following expresses the same semantics:

 element tagname=my-yay
   template
 shadowroot
   divYay!/div
 /shadowroot
   /template

Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Daniel Buchner
One thing I'm wondering, re: template elements and the association of a
specific script with them, is what it really does for me. From what I
see, not much. It seems the only thing it does is allow you to have the
generic, globally-scoped script run at a given time (via a new runwhen___
attribute), plus the implicit relationship created by inclusion within the
template element itself - which is essentially no different from just
setting a global delegate in any ol' script tag on the page.

Are there show-stopper issues with empowering the script tags inside
template elements to be a bit more powerful? (think local instance
scoping, auto template event unbinding on removal, or any other helpful
additions) If there are issues with making these script tags behave a bit
different, then what is the compelling value proposition vs something like
this: https://gist.github.com/csuwldcat/5358612 ?


On Wed, Apr 10, 2013 at 1:43 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 10, 2013 at 4:38 PM, Daniel Buchner dan...@mozilla.comwrote:

 *What about CSP that forbids inline 
 scripts?https://wiki.mozilla.org/Apps/Security#Default_CSP_policy
 *

 Is there any reason developers wouldn't just modify the script tag under
 either method proposed to use src=link-to-non-inline-script to satisfy
 CSP requirements? The proposal I submitted certainly doesn't exclude that
 ability/use case (or so I thought - correct if wrong)


 There is nothing stopping that at all.

  A bigger issue with the proposal is that the global object appears to be the
  element's instance object itself, which isn't going to work.

 Rick




 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Apr 10, 2013 at 1:27 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Apr 10, 2013 at 4:15 PM, Daniel Buchner dan...@mozilla.comwrote:

 I have a counter proposal that takes into a count both the
 easy-to-declare, 1-to-1 case, as well as the 1-template-to-many-elements
 case: https://gist.github.com/csuwldcat/5358039



 What about CSP that forbids inline scripts?

 https://wiki.mozilla.org/Apps/Security#Default_CSP_policy

 Rick




 I can explain the advantages a bit more in an hour or so, I just got
 pulled into a meeting...le sigh.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Apr 10, 2013 at 12:40 PM, Scott Miles sjmi...@google.comwrote:

 No, strictly ergonomic. Less nesting and less characters (less nesting
 is more important IMO).

 I would also argue that there is less cognitive load on the author
 then the more explicit factoring, but I believe this is subjective.

 Scott


 On Wed, Apr 10, 2013 at 12:36 PM, Rafael Weinstein rafa...@google.com
  wrote:

 On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov 
 dglaz...@google.com wrote:
  Dear Webappsonites,
 
  There's been a ton of thinking on what the custom elements
 declarative
  syntax must look like. Here, I present something has near-ideal
  developer ergonomics at the expense of terrible sins in other areas.
  Consider it to be beacon, rather than a concrete proposal.
 
  First, let's cleanse your palate. Forget about the element element
  and what goes inside of it. Eat some parsley.
 
  == Templates Bound to Tags ==
 
  Instead, suppose you only have a template:
 
  template
  divYay!/div
  /template
 
  Templates are good for stamping things out, right? So let's invent a
  way to _bind_ a template to a _tag_. When the browser sees a tag to
  which the template is bound, it stamps the template out. Like so:
 
  1) Define a template and bind it to a tag name:
 
  template bindtotagname=my-yay
  divYay!/div
  /template
 
  2) Whenever my-yay is seen by the parser or
  createElement/NS(my-yay) is called, the template is stamped out to
  produce:
 
  my-yay
  divYay!/div
  /my-yay
 
  Cool! This is immediately useful for web developers. They can
  transform any markup into something they can use.
 
  Behind the scenes: the presence of boundtotagname triggers a call
 to
  document.register, and the argument is a browser-generated prototype
  object whose readyCallback takes the template and appends it to
  this.
 
  == Organic Shadow Trees  ==
 
  But what if they also wanted to employ encapsulation boundaries,
  leaving initial markup structure intact? No problem, much-maligned
  shadowroot to the rescue:
 
  1) Define a template with a shadow tree and bind it to a tag name:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  /template
 
  2) For each my-yay created, the template is stamped out to create
 a
  shadow root and populate it.
 
  Super-cool! Note, how the developer doesn't have to know anything
  about Shadow DOM to build custom elements (er, template-bound tags).
  Shadow trees are just an option.
 
  Behind the scenes: exactly the same as the first scenario.
 
  == Declarative Meets Imperative ==
 
  Now, the developer wants to add

Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Daniel Buchner
It's incredibly important that we agree that association of a template
with an element happens on the element side, something like: <element
template="foo-template"> (or by placing the template inside the element, if
that is the API we want). I don't think this part is a matter of opinion,
because doing the reverse - marking on the template which element it refers
to - hinders a few valid use-cases:

   - one template from being used by many different elements
   - changing a template association on a single instance of an element type
  - say you have an x-foo on the page that you want to switch
  template associations on, but not for every other x-foo in the
document.
  Wouldn't this case be far more clear cut if you could just query for the
  element and change some property? For instance: fooElement.template =
  'foo-template-2'; Boom! This particular foo element just
switched templates.



On Wed, Apr 10, 2013 at 8:19 PM, Daniel Buchner dan...@mozilla.com wrote:

 Here are a few (compelling?) answers/arguments:

1. Style elements had never done this before, yet it rocks socks:
style scoped
2. It would be new for script elements, but hardly new for other
elements. There are plenty of elements that have various behaviors or
visual representations only when placed inside specific elements. Given
this is already an advanced web API, I'm not sure a little upfront learning
is a huge concern. We could even allow for this, given the paradigm is
already established: script scoped  *// could scope 'this' ref to
the parentNode*
3. Are you referring to template attachment here? If so, I agree,
thus the proposal I submitted allows for both (
https://gist.github.com/csuwldcat/5360471). If you want your template
automatically associated with your element, put it inside, if not, you
can specify which template a custom element should use by reference to
its ID.


 On Wed, Apr 10, 2013 at 8:00 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Wed, Apr 10, 2013 at 6:51 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Wed, Apr 10, 2013 at 6:38 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
  Everyone's answer to this should be no; changing the expected value
 of the
  top level this, in some magical way, simply won't work.

 Can you explain why you feel this way?


 1) Because script has never done this before, so it better be
 compelling.
 2) Because causing |this| to change by moving the script tag in the
 HTML or adding a layer of elements etc seems likely to cause hard to
 understand bugs.
 3) Forcing the binding based on position is inflexible.

 To be sure this is implicit-declarative vs explicit-imperative bias, not
 evidence.

 Oh, sorry you were asking Rick.
  jjb





Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Daniel Buchner
Here are a few (compelling?) answers/arguments:

   1. Style elements had never done this before, yet it rocks socks: style
   scoped
   2. It would be new for script elements, but hardly new for other
   elements. There are plenty of elements that have various behaviors or
   visual representations only when placed inside specific elements. Given
   this is already an advanced web API, I'm not sure a little upfront learning
   is a huge concern. We could even allow for this, given the paradigm is
   already established: script scoped  *// could scope 'this' ref to the
   parentNode*
   3. Are you referring to template attachment here? If so, I agree, thus
   the proposal I submitted allows for both (
   https://gist.github.com/csuwldcat/5360471). If you want your template
   automatically associated with your element, put it inside, if not, you
   can specify which template a custom element should use by reference to
   its ID.


On Wed, Apr 10, 2013 at 8:00 PM, John J Barton
johnjbar...@johnjbarton.comwrote:




 On Wed, Apr 10, 2013 at 6:51 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Wed, Apr 10, 2013 at 6:38 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
  Everyone's answer to this should be no; changing the expected value
 of the
  top level this, in some magical way, simply won't work.

 Can you explain why you feel this way?


 1) Because script has never done this before, so it better be compelling.
 2) Because causing |this| to change by moving the script tag in the HTML
 or adding a layer of elements etc seems likely to cause hard to understand
 bugs.
 3) Forcing the binding based on position is inflexible.

 To be sure this is implicit-declarative vs explicit-imperative bias, not
 evidence.

 Oh, sorry you were asking Rick.
  jjb



Re: [webcomponents]: What callbacks do custom elements need?

2013-03-12 Thread Daniel Buchner
*Daniel can confirm but in all of the stuff i have seen and played with so
far it is...you want a changing a component attribute to have some effect.
Internally you would use mutation observers i think.*

-

We have built quite a few custom elements now, and here's the
role/behaviors of attributes in doing so:


   - Attributes act as initial/dynamic settings parameters for the state or
   mode of a custom element, with the advantage they can be targeted with CSS
   (just like native elements)
   - Some attributes require linkage to mirrored getters and setters that
   trigger common actions when set either way - src, type, href, etc
   - Attributes linked to getters and setters must not set off attribute
   changed events when being set by code internal to the custom element
   definition. For instance, if you have someone change a property like src
   via setter, you don't want it to sync that value to the src= attribute
   and cause the linked action to occur twice - get the idea?

It does seem as though it will be quite difficult to come up with a
non-sync API for this that behaves in a sane manner devs can rely on. I'm
all ears for solutions though! :)
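
One possible shape of the guard described above, assuming the synchronous
callback timing being debated in this thread (all names are hypothetical):

  var proto = Object.create(HTMLElement.prototype);

  Object.defineProperty(proto, 'src', {
    get: function () { return this.getAttribute('src'); },
    set: function (value) {
      this._internalChange = true;      // flag the write as internal
      this.setAttribute('src', value);  // keep the attribute in sync for CSS
      this._internalChange = false;
      this.load(value);                 // run the linked action exactly once
    }
  });

  proto.attributeChangedCallback = function (name, oldValue, newValue) {
    if (name === 'src' && !this._internalChange) {
      this.load(newValue);              // only react to external attribute writes
    }
  };

  proto.load = function (url) { /* fetch/render for the new src */ };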

- Daniel


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-12 Thread Daniel Buchner
What about obscured, opaque, invisible, or restricted?


On Tue, Mar 12, 2013 at 3:34 PM, Alan Stearns stea...@adobe.com wrote:

 On 3/12/13 2:41 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/12/13 5:19 PM, Dimitri Glazkov wrote:
  However, to allow developers a degree of enforcing integrity of their
  shadow trees, we are going add a new mode, an equivalent of a KEEP OUT
  sign, if you will, which will makes a shadow tree non-traversable,
  effectively skipping over it in an element's shadow tree stack.
 
 To be clear, what this mode does is turn off the simple way of getting
 the shadow tree.  It does not promise that someone can't get at the
 shadow tree via various non-obvious methods, because in practice such
 promises are empty as long as script inside the component runs against
 the web page global.
 
 The question is how to name this.  Hidden seems to promise too much to
 me.  Perhaps obfuscated?  Veiled?
 
 -Boris
 
 P.S.  Tempting as it is, RedWithGreenPolkadots is probably not an OK
 name for this bikeshed.

 Apologies in advance for adding to the bikeshedding

 protected (mostly private, but you can get around it)
 shielded (the shield can be lowered)
 gated (the gate can be opened)
 fenced (most fences have an opening)

 Or bleenish-grue, if we're going with color names.

 Alan




Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Daniel Buchner
Ready/created, inserted, removed, and attributeChanged are the minimum
must-haves for developers - we heavily rely on each one of these callbacks
in the components we've developed thus far. The usefulness of this API is
neutered without these hooks - they're table stakes, plain and simple.
Jonas, how are non-bubbling callbacks so crushing? Are we honestly
designing for the dev who decides to ignore all the best practices,
tutorials, evangelist demos, etc and run a crushing loop every time an
attribute value changes despite the obvious idiocy of their actions? This
is not an API that will be widely used by every Bobby Tables and Samantha
Script Kitty on the block - let's not design API features and 99%-case
ergonomics for a phantom developer persona that is a fringe-at-best factor.
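
For reference, a registration sketch using the four callbacks named above;
the exact callback names and the document.register signature were still in
flux at this point, so treat this as a shape, not a final API:

  var proto = Object.create(HTMLElement.prototype);
  proto.readyCallback = function () { /* build initial DOM / state */ };
  proto.insertedCallback = function () { /* bind listeners, start timers */ };
  proto.removedCallback = function () { /* tear down, release resources */ };
  proto.attributeChangedCallback = function (name, oldValue, newValue) {
    /* react to settings changes, e.g. src, type, href */
  };
  document.register('x-widget', { prototype: proto });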

On Fri, Mar 8, 2013 at 10:49 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mar 6, 2013 2:07 PM, Dimitri Glazkov dglaz...@google.com wrote:
 
  Here are all the callbacks that we could think of:
 
  * readyCallback (artist formerly known as create) -- called when the
  element is instantiated with generated constructor, createElement/NS
  or shortly after it was instantiated and placed in a tree during
  parser tree construction
 
  * attributeChangedCallback -- synchronously called when an attribute
  of an element is added, removed, or modified

 This will have many of the same problems that mutation events had. I
 believe we want to really stay away from synchronous.

 So yes, this looks dangerous and crazy :-)

 / Jonas



Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Daniel Buchner
Just to be clear, these are callbacks (right?), meaning synchronous
executions on one specific node. That is a far cry from the old issues with
mutation events and nightmarish bubbling scenarios. Come on folks, let's be
pragmatic and honest, there are far worse events in use today, and the web
is still alive and kicking: mousemove, mouseover, mouseout, resize, etc.
Most/all of these kinds of existing, perf-intensive events bubble and are
fired with far more frequency than attribute changes on a node - further,
devs can always debounce the calls if they please.
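
A generic debounce of the kind mentioned above - nothing here is specific to
custom elements:

  function debounce(fn, wait) {
    var timer = null;
    return function () {
      var self = this, args = arguments;
      clearTimeout(timer);
      timer = setTimeout(function () { fn.apply(self, args); }, wait);
    };
  }

  // e.g. proto.attributeChangedCallback = debounce(function (name) { /* ... */ }, 16);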

Devs probably don't care if these things are fired sync, as long as when
they do fire, they fire once and the value/changes aren't stale. Honestly,
ready/created, inserted, and removed wouldn't be an issue if they map to
micro tasks, right? (as long as the final callback is not stale). If
attribute change callbacks are your worry, Brian Kardell mentioned
something sensible, give people a way to whitelist just the ones that they
want to watch for - assuming that is faster than simply telling devs to use
basic if/switch logic inside the callback.

There are many options here, let's not have an API-degrading, whopper
freakout over a nascent concern of a fringe perf issue that is, to a large
extent, self-mitigating/healing. (devs generally don't want to have slow
crappy apps - built-in incentives to not suck are usually the best kind)


Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Mon, Mar 11, 2013 at 11:55 AM, Daniel Buchner dan...@mozilla.com wrote:

 Ready/created, inserted, removed, and attributeChanged are the minimum
 must-haves for developers - we heavily rely on each one of these callbacks
 in the components we've developed thus far. The usefulness of this API is
 neutered without these hooks - they're table stakes, plain and simple.
 Jonas, how are non-bubbling callbacks so crushing? Are we honestly
 designing for the dev who decides to ignore all the best practices,
 tutorials, evangelist demos, etc and run a crushing loop every time an
 attribute value changes despite the obvious idiocy of their actions? This
 is not an API that will be widely used by every Bobby Tables and Samantha
 Script Kitty on the block - let's not design API features and 99%-case
 ergonomics for a phantom developer persona that is a fringe-at-best factor.


 On Fri, Mar 8, 2013 at 10:49 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mar 6, 2013 2:07 PM, Dimitri Glazkov dglaz...@google.com wrote:
 
  Here are all the callbacks that we could think of:
 
  * readyCallback (artist formerly known as create) -- called when the
  element is instantiated with generated constructor, createElement/NS
  or shortly after it was instantiated and placed in a tree during
  parser tree construction
 
  * attributeChangedCallback -- synchronously called when an attribute
  of an element is added, removed, or modified

 This will have many of the same problems that mutation events had. I
 believe we want to really stay away from synchronous.

 So yes, this looks dangerous and crazy :-)

 / Jonas





Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Daniel Buchner
 Just to be clear, these are callbacks (right?), meaning synchronous
 executions on one specific node. That is a far cry from the old issues
 with mutation events and nightmarish bubbling scenarios.

Where does bubbling come in?

- I thought the concern was over the same issues that plagued mutation
*events*, namely perf-crushing event bubbling sparked by frequent DOM
changes.


But the issues with synchronous callbacks are not about performance,
last I checked, so I'm not sure why you're setting up this strawman.  Or
are people arguing something here based on performance considerations
that I missed?

- I wasn't aware of setting up such a strawman, I honestly thought the
issue was perf. I thought this because Jonas said this: This will
have many of the same problems that mutation events had. I believe we
want to really stay away from synchronous. So yes, this looks
dangerous and crazy

-

 as long as when they do fire, they fire once and the value/changes aren't 
 stale.

Not sure what you mean here.

- I mean that it doesn't matter if you make all the callbacks fire at
the end of a microtask, as long as you're not reporting old,
irrelevant changes to the developer - for instance: if an attribute
foo changes 3 times before the microtask is finished, the browser
should resolve the mutation record set to only fire one callback for
the last fresh occurrence of the mutation/action in question.
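
A sketch of that coalescing (setTimeout stands in for end-of-microtask
timing, and the notify function is hypothetical):

  var pending = {};     // attribute name -> freshest value
  var scheduled = false;

  function notify(name, value) {
    console.log('attributeChanged', name, value);  // fires once per attribute
  }

  function queueAttributeChange(name, value) {
    pending[name] = value;              // later writes overwrite earlier ones
    if (scheduled) return;
    scheduled = true;
    setTimeout(function () {            // stand-in for end-of-microtask timing
      scheduled = false;
      for (var key in pending) notify(key, pending[key]);
      pending = {};
    }, 0);
  }

  // queueAttributeChange('foo', 1); queueAttributeChange('foo', 2);
  // queueAttributeChange('foo', 3);  -> notify('foo', 3) exactly once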

 Honestly, ready/created, inserted, and removed wouldn't be an issue if
 they map to micro tasks, right?

Running script at end of microtask is generally fine by me, since it
doesn't have the issues that running script sync does.

- Great, so what's the problem here? Are we officially in violent agreement? :)

-

 If attribute change callbacks are your worry, Brian Kardell
 mentioned something sensible, give people a way to whitelist just the
 ones that they want to watch for - assuming that is faster than simply
 telling devs to use basic if/switch logic inside the callback.

It's almost certainly faster but may not be worth the machinery,
depending on how much this is actually going to be used in practice.

- these four basic callbacks/mutations are essential to custom
element/component development - they will be used frequently, in our
experience thus far.

-

 There are many options here, let's not have an API-degrading, whopper
 freakout over a nascent concern of a fringe perf issue that is, to a
 large extent, self-mitigating/healing. (devs generally don't want to
 have slow crappy apps - built-in incentives to not suck are usually the
 best kind)

You and I seem to have different opinions on whether people care about
their code being a resource hog... Note that the typical culprits for
not caring are not apps, though.

- The target and actual audience is probably not going to be
noobs/beginners, this is a pretty complex API - it appears you agree:
the typical culprits for not caring are not apps.

- Daniel


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Daniel Buchner
inserted and removed can probably be end of micro task, but
attributeChanged definitely needs to be synchronous to model the behavior
of input type where changing it from X to Y has an immediate effect on
the APIs available (like stepUp).

Actually, I disagree. Attribute changes need not be assessed synchronously,
as long as they are evaluated before critical points, such as before paint
(think requestAnimationFrame timing). Can you provide a common, real-world
example of where queued timing would not work?
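
A sketch of that "before critical points" timing: queued changes are flushed
in a requestAnimationFrame callback, after the mutation but before the next
paint (all names hypothetical, and oldValue is omitted for brevity):

  var queue = [];

  function queueAttributeChange(el, name, value) {
    if (!queue.length) {
      requestAnimationFrame(function () {
        var batch = queue;
        queue = [];
        batch.forEach(function (change) {
          change.el.attributeChangedCallback(change.name, null, change.value);
        });
      });
    }
    queue.push({ el: el, name: name, value: value });
  }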

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Mon, Mar 11, 2013 at 2:18 PM, Elliott Sprehn espr...@gmail.com wrote:

 On Wed, Mar 6, 2013 at 5:36 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  On 3/6/13 5:05 PM, Dimitri Glazkov wrote:

 * attributeChangedCallback -- synchronously called when an attribute
 of an element is added, removed, or modified


 Synchronously in what sense?  Why are mutation observers not sufficient
 here?


 * insertedCallback -- asynchronously called when an element is added
 to document tree (TBD: is this called when parser is constructing the
 tree?)


 Again, why is this not doable with mutation observers?


 inserted and removed can probably be end of micro task, but
 attributeChanged definitely needs to be synchronous to model the behavior
 of input type where changing it from X to Y has an immediate effect on
 the APIs available (like stepUp).

 MutationObservers are not sufficient because childList mutations are about
 children, but you want to observe when *yourself* is added or removed from
 the Document tree. There's also no inserted into document and removed
 from document mutation records, and since ShadowRoot has no host
 property there's also no way to walk up to the root to find out if you're
 actually in the document. (Dimtiri should fix this... I hope).

 The ready callback should probably also be synchronous (but at least it
 happens in script invocation of the new operator, or after tree building),
 since you want your widget to be usable immediately.

 - E




Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Daniel Buchner
I am certainly aware of existing elements that have synchronous actions
triggered as a result of attribute changes. Are you specifically worried
about cases where you inherit from an existing element using the is=
syntax and require immediate notification of an attribute change because
you want to modify or prevent the way the native element behaves in
response to native attribute changes?

In retrospect, that is a valid point. Is this an instance of an important
use-case/requirement colliding with a hard constraint? Can someone with
more expertise on the platform side explain the options we have in
addressing the case Elliott presents?

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Mon, Mar 11, 2013 at 2:42 PM, Elliott Sprehn espr...@gmail.com wrote:

 On Mon, Mar 11, 2013 at 2:32 PM, Daniel Buchner dan...@mozilla.comwrote:

 inserted and removed can probably be end of micro task, but
 attributeChanged definitely needs to be synchronous to model the behavior
 of input type where changing it from X to Y has an immediate effect on
 the APIs available (like stepUp).

 Actually, I disagree. Attribute changes need not be assessed
 syncronously, as long as they are evaluated before critical points, such as
 before paint (think requestAnimationFrame timing). Can you provide a
 common, real-world example of where queued timing would not work?



 Yes, I already gave one. Where you go from input type=text to input
 type=range and then stepUp() suddenly starts working.

 I guess we could force people to use properties here, but that doesn't
 model how the platform itself works.

 An even more common example is iframe src. Setting a different @src
 value synchronously navigates the frame. Also inserting an iframe into
 the page synchronously loads an about:blank document.

 Neither of theses cases are explained by the end-of-microtask behavior
 you're describing.

 - E



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Daniel Buchner
What is the harm in returning the same constructor that is being input for
this form of invocation? The output constructor is simply a pass-through of
the input constructor, right?

FOO_CONSTRUCTOR = document.register('x-foo', {
  constructor: FOO_CONSTRUCTOR
});

I guess this isn't a big deal though, I'll certainly defer to you all on
the best course :)

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Tue, Feb 19, 2013 at 12:51 PM, Scott Miles sjmi...@google.com wrote:

  I'd be a much happier camper if I didn't have to think about handling
 different return values.

  I agree, and if it were up to me, there would be just one API for
  document.register.

 However, the argument given for dividing the API is that it is improper to
 have a function return a value that is only important on some platforms. If
 that's the winning argument, then isn't it pathological to make the 'non
 constructor-returning API' return a constructor?


 On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 I agree with your approach on staging the two specs for this, but the
 last part about returning a constructor in one circumstance and undefined
 in the other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going to
 wrap this function (or any function for that matter), I'd be a much happier
 camper if I didn't have to think about handling different return values. Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing
 does not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option that
 takes
  multiple input types is not an empirical negative, you may argue it is
 an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_ route is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
prototype: ELEMENT_PROTOTYPE,
lifecycle: {
   created: CALLBACK
}
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG






Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Daniel Buchner
Wait a sec, perhaps I've missed something, but in your example you never
extend the actual native header element, was that on purpose? I was under
the impression you still needed to inherit from it in the prototype
creation/registration phase, is that not true?
On Feb 19, 2013 8:26 PM, Scott Miles sjmi...@google.com wrote:

 Question: if I do

 FancyHeaderPrototype = Object.create(HTMLElement.prototype);
 document.register('fancy-header', {
   prototype: FancyHeaderPrototype
 ...

 In this case, I intend to extend header. I expect my custom elements to
 look like <header is="fancy-header">, but how does the system know what
 localName to use? I believe the notion was that the localName would be
 inferred from the prototype, but there are various semantic tags that share
 prototypes, so it seems ambiguous in these cases.

 S


 On Tue, Feb 19, 2013 at 1:01 PM, Daniel Buchner dan...@mozilla.comwrote:

 What is the harm in returning the same constructor that is being input
 for this form of invocation? The output constructor is simply a
 pass-through of the input constructor, right?

 FOO_CONSTRUCTOR = document.register(‘x-foo’, {
   constructor: FOO_CONSTRUCTOR
 });

 I guess this isn't a big deal though, I'll certainly defer to you all on
 the best course :)

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Tue, Feb 19, 2013 at 12:51 PM, Scott Miles sjmi...@google.com wrote:

  I'd be a much happier camper if I didn't have to think about handling
 different return values.

 I agree, and If it were up to me, there would be just one API for
 document.register.

 However, the argument given for dividing the API is that it is improper
 to have a function return a value that is only important on some platforms. 
 If
 that's the winning argument, then isn't it pathological to make the 'non
 constructor-returning API' return a constructor?


 On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 I agree with your approach on staging the two specs for this, but the
 last part about returning a constructor in one circumstance and undefined
 in the other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going to
 wrap this function (or any function for that matter), I'd be a much happier
 camper if I didn't have to think about handling different return values. Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing
 does not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option that
 takes
  multiple input types is not an empirical negative, you may argue it
 is an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_ route
 is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
prototype: ELEMENT_PROTOTYPE,
lifecycle: {
   created: CALLBACK
}
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG








Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Daniel Buchner
Nope, you're 100% right, I saw *header* and thought HTML*Heading*Element
for some reason - so this seems like a valid concern. What are the
mitigation/solution options we can present to developers for this case?
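
(One option that comes to mind - sketched loosely, with a made-up option
name - is to let the registration name the native tag explicitly instead of
inferring it from the prototype:)

document.register('fancy-header', {
  prototype: FancyHeaderPrototype,
  extends: 'header' // hypothetical: tells register which localName to use
});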


Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Tue, Feb 19, 2013 at 9:17 PM, Scott Miles sjmi...@google.com wrote:

 Perhaps I'm making a mistake, but there is no specific prototype for the
 native header element. 'header', 'footer', 'section', e.g., are all
 HTMLElement, so all I can do is

 FancyHeaderPrototype = Object.create(HTMLElement.prototype);

 Afaict, the 'headerness' cannot be expressed this way.


 On Tue, Feb 19, 2013 at 8:34 PM, Daniel Buchner dan...@mozilla.comwrote:

 Wait a sec, perhaps I've missed something, but in your example you never
 extend the actual native header element, was that on purpose? I was under
 the impression you still needed to inherit from it in the prototype
 creation/registration phase, is that not true?
  On Feb 19, 2013 8:26 PM, Scott Miles sjmi...@google.com wrote:

 Question: if I do

 FancyHeaderPrototype = Object.create(HTMLElement.prototype);
 document.register('fancy-header', {
   prototype: FancyHeaderPrototype
 ...

 In this case, I intend to extend header. I expect my custom elements
 to look like <header is="fancy-header">, but how does the system know what
 localName to use? I believe the notion was that the localName would be
 inferred from the prototype, but there are various semantic tags that share
 prototypes, so it seems ambiguous in these cases.

 S


 On Tue, Feb 19, 2013 at 1:01 PM, Daniel Buchner dan...@mozilla.comwrote:

 What is the harm in returning the same constructor that is being input
 for this form of invocation? The output constructor is simply a
 pass-through of the input constructor, right?

 FOO_CONSTRUCTOR = document.register(‘x-foo’, {
   constructor: FOO_CONSTRUCTOR
 });

 I guess this isn't a big deal though, I'll certainly defer to you all
 on the best course :)

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Tue, Feb 19, 2013 at 12:51 PM, Scott Miles sjmi...@google.comwrote:

  I'd be a much happier camper if I didn't have to think about
 handling different return values.

 I agree, and If it were up to me, there would be just one API for
 document.register.

 However, the argument given for dividing the API is that it is
 improper to have a function return a value that is only important on some
 platforms. If that's the winning argument, then isn't it pathological
 to make the 'non constructor-returning API' return a constructor?


 On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner 
 dan...@mozilla.comwrote:

 I agree with your approach on staging the two specs for this, but the
 last part about returning a constructor in one circumstance and undefined
 in the other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going 
 to
 wrap this function (or any function for that matter), I'd be a much 
 happier
 camper if I didn't have to think about handling different return values. 
 Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to 
 me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov 
 dglaz...@google.com wrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing
 does not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option
 that takes
  multiple input types is not an empirical negative, you may argue
 it is an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is
 a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_
 route is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
prototype: ELEMENT_PROTOTYPE,
lifecycle: {
   created: CALLBACK
}
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API
 variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG









Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-18 Thread Daniel Buchner
I agree with your approach on staging the two specs for this, but the last
part about returning a constructor in one circumstance and undefined in the
other is something developers would rather not deal with (in my
observation). If I'm a downstream consumer or library author who's going to
wrap this function (or any function for that matter), I'd be a much happier
camper if I didn't have to think about handling different return values. Is
there a clear harm in returning a constructor reliably that would make us
want to diverge from an expected and reliable return value? It seems to me
that the unexpected return value will be far more annoying than a little
less mental separation between the two invocation setups.
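
(For illustration only - the kind of normalizing shim a library author ends
up writing, assuming the constructor form returns undefined; the helper
name is made up:)

function registerElement(name, options) {
  var ctor = document.register(name, options);
  // If the constructor-taking form returned undefined, hand back the
  // constructor that was passed in, so callers always get a value.
  return ctor || options.constructor;
}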

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing does
 not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option that
 takes
  multiple input types is not an empirical negative, you may argue it is an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_ route is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
prototype: ELEMENT_PROTOTYPE,
lifecycle: {
   created: CALLBACK
}
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register(‘x-foo’, {
constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-15 Thread Daniel Buchner
I'm not sure I buy the idea that two ways of doing the same thing does not
seem like a good approach - the web platform's imperative and declarative
duality is, by nature, two-way. Having two methods or an option that takes
multiple input types is not an empirical negative, you may argue it is an
ugly pattern, but that is largely subjective.

Is this an accurate summary of what we're looking at for possible
solutions? If so, can we at least get a decision on whether or not _this_
route is acceptable?

FOO_CONSTRUCTOR = document.register(‘x-foo’, {
  prototype: ELEMENT_PROTOTYPE,
  lifecycle: {
 created: CALLBACK
  }
});

FOO_CONSTRUCTOR = document.register(‘x-foo’, {
  constructor: FOO_CONSTRUCTOR
});





Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Fri, Feb 15, 2013 at 6:19 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Feb 14, 2013 at 9:48 PM, Dimitri Glazkov dglaz...@google.com
 wrote:
  What do you think?

 It seems like this still requires magic for document.createElement()
 and document.createElementNS().

 Also, providing two ways of doing the same thing does not seem like a
 good approach to standardization and will come to haunt us in the
 future (in terms of maintenance, QA, new extensions to the platform,
 etc.).


 --
 http://annevankesteren.nl/



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
I love it, gives the developer control over the addition of sugar (just a
spoonful of...) and code preference, while at the same time addressing our
requirement set. Ship it!

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Feb 14, 2013 at 1:48 PM, Dimitri Glazkov dglaz...@google.comwrote:

 Folks,

 I propose just a bit of sugaring as a compromise, but I want to make
 sure this is really sugar and not acid, so please chime in.

 1) We give up on unified syntax for ES5 and ES6, and instead focus on
 unified plumbing
 2) document.register returns a custom element constructor as a result,
 just like currently specified:

 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-document-register
 3) There are two ways to register an element: with a constructor and
 with a prototype object.
 4) When registering with the constructor (aka the ES6 way), you must
 supply the constructor/class as the constructor member in the
 ElementRegistrationOptions dictionary
 (
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#api-element-registration-options
 )
 5) If the constructor is supplied, element registration overrides
 [[Construct]] internal function as described in
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0250.html
 6) Registering with a prototype object (aka the current way) uses the
 prototype member in ElementRegistrationOptions dictionary and works
 roughly as currently specified
 7) If the prototype object is supplied, the constructor is generated
 as two steps:
   a) Instantiate the platform object
   b) Call created callback from lifecycle callback interface bound to
 this
 8) We remove any sort of shadow tree creation and the corresponding
 template argument from the spec. Shadow tree management is left
 completely up to the author.

 Effectively, the created callback becomes the poor man's
 constructor. It's very easy to convert from old syntax to new syntax:

 The prototype way:

 function MyButton() {
   // do constructor stuff ...
 }
 MyButton.prototype = Object.create(HTMLButtonElement.prototype, {
  ...
 });
 MyButton = document.register(‘x-button’, {
   prototype: MyButton.prototype,
   lifecycle: {
  created: MyButton
   }
 });

 The constructor way:

 function MyButton() {
  // do constructor stuff ...
 }
 MyButton.prototype = Object.create(HTMLButtonElement.prototype, {
  ...
 });
 document.register(‘x-button’, {
  constructor: MyButton,
  ...
 });

 This is nearly the same approach as what  Scott sketched out here:
 http://jsfiddle.net/aNHZH/7/, so we already know it's shimmable :)

 What do you think?

 :DG



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
It seems to me (please correct me if this is inaccurate) that you can't
*really* polyfill ES6-style extension of existing element constructors,
because afaik, you cannot call the existing native constructors of elements
- it throws. So if you can only do a jankified 1/2 fill, why not just
provide an optional route that has no legacy issues for people who want to
use it?

I believe even Scott's polyfill doesn't do anything to enable
HTMLButtonElement.call(this);
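
(Just to make the limitation concrete - a tiny sketch of what I mean; the
exact error text varies by browser:)

function MyButton() {
  HTMLButtonElement.call(this); // the part we can't polyfill away today
}
new MyButton(); // throws, roughly "TypeError: Illegal constructor"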

Hopefully I'm in the ballpark here, but if what I said is wrong or not an
issue, what *is* the reasoning behind it?

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Feb 14, 2013 at 2:23 PM, Scott Miles sjmi...@google.com wrote:

 MyButton = document.register(‘x-button’, {
   prototype: MyButton.prototype,
   lifecycle: {
  created: MyButton
   }
 });

 What's the benefit of allowing this syntax? I don't immediately see why
 you couldn't just do it the other way.


 On Thu, Feb 14, 2013 at 2:21 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Thu, Feb 14, 2013 at 5:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Yeah, this post does not really talk about syntax. It comes after a
 discussion how we could use ES6 class syntax.

 The ES6 classes have the same semantics as provided in this thread using
 ES5.

 On Thu, Feb 14, 2013 at 5:10 PM, Rick Waldron waldron.r...@gmail.comwrote:


 On Thu, Feb 14, 2013 at 4:48 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:


 MyButton = document.register(‘x-button’, {
   prototype: MyButton.prototype,
   lifecycle: {
  created: MyButton
   }
 });



 Does this actually mean that the second argument has a property called
 prototype that itself has a special meaning?


 This is just a dictionary.



 Is the re-assignment MyButton intentional? I see the original
 MyButton reference as the value of the created property, but then
 document.register's return value is assigned to the same identifier? Maybe
 this was a typo?


 document.register(‘x-button’, {
  constructor: MyButton,
  ...
 });


 Same question as above, but re: constructor?


 Same answer here.

 I'm not happy with these names but I can't think of anything better.


 Fair enough, I trust your judgement here. Thanks for the follow up—always
 appreciated.

 Rick


 --
 erik






Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
Ok, I'll take your word that we get basically 1:1 and devs won't need to
recode or do any catch-casing inside constructors or protos for non-native
document.register polyfill use.

Regardless, if we are going to keep the property bag, which provides way
more than just the prototype property, it seems to me that...

document.register('x-super-button', {
  constructor: SuperButton,
  lifecycle: { ... }
});

...would still be the most concise, ergonomic syntax. Truth is, devs like
property bags. Major JS frameworks commonly use the property-object pattern
for describing new components and modules. Additionally, retaining the
property bag provides freedom to add other registration-centric
options/features at a later date - unlike the 20/20 localName-check
hindsight, we can *start* by retaining this flexibility now, so that
hindsight does not become 20/13 ;)

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Feb 14, 2013 at 2:41 PM, Erik Arvidsson a...@chromium.org wrote:


 On Thu, Feb 14, 2013 at 5:40 PM, Scott Miles sjmi...@google.com wrote:

 In all constructions the *actual* calling of HTMLButtonElement is done by
 the browser.

 All the user has to do is *not* call it, and only call super constructors
 if they are custom.

 For that reason, I don't see why this is an issue.


 Or if you want you can polyfill HTMLButtonElement.call.

 HTMLButtonElement.call = function() {};

 On Thu, Feb 14, 2013 at 2:36 PM, Daniel Buchner dan...@mozilla.comwrote:

 It seems to me (please correct me if this is inaccurate) that you can't
 *really* polyfill ES6 extension of existing element constructor
 inheritance, because afaik, you cannot call the existing native
 constructors of elements - it throws. So if you can only do a jankified 1/2
 fill, why not just provide an optional route that has no legacy issues for
 people who want to use it?

 I believe even Scott's polyfill doesn't do anything to enable
 HTMLButtonElement.call(this);

 Hopefully I'm in the ballpark here, but if what I said is wrong or not
 an issue, what *is* the reasoning behind it?

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Thu, Feb 14, 2013 at 2:23 PM, Scott Miles sjmi...@google.com wrote:

 MyButton = document.register(‘x-button’, {
   prototype: MyButton.prototype,
   lifecycle: {
  created: MyButton
   }
 });

 What's the benefit of allowing this syntax? I don't immediately see why
 you couldn't just do it the other way.


 On Thu, Feb 14, 2013 at 2:21 PM, Rick Waldron 
 waldron.r...@gmail.comwrote:




 On Thu, Feb 14, 2013 at 5:15 PM, Erik Arvidsson a...@chromium.orgwrote:

 Yeah, this post does not really talk about syntax. It comes after a
 discussion how we could use ES6 class syntax.

 The ES6 classes have the same semantics as provided in this thread
 using ES5.

 On Thu, Feb 14, 2013 at 5:10 PM, Rick Waldron waldron.r...@gmail.com
  wrote:


 On Thu, Feb 14, 2013 at 4:48 PM, Dimitri Glazkov 
 dglaz...@google.com wrote:


 MyButton = document.register(‘x-button’, {
   prototype: MyButton.prototype,
   lifecycle: {
  created: MyButton
   }
 });



 Does this actually mean that the second argument has a property
 called prototype that itself has a special meaning?


 This is just a dictionary.



 Is the re-assignment MyButton intentional? I see the original
 MyButton reference as the value of the created property, but then
 document.register's return value is assigned to the same identifier? 
 Maybe
 this was a typo?


 document.register(‘x-button’, {
  constructor: MyButton,
  ...
 });


 Same question as above, but re: constructor?


 Same answer here.

 I'm not happy with these names but I can't think of anything better.


 Fair enough, I trust your judgement here. Thanks for the follow
 up—always appreciated.

 Rick


 --
 erik








 --
 erik





Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
The polyfill rabbit hole of half-hearted, faux-ES6 polyfilling of
constructor inheritance seems to be far deeper, both conceptually and in
code-level effect, than our simple examples show. Further, what is so sexy
about forcing the pattern when we can't, hard stop, no-way, polyfill
*class* and *extends*?

In my mind, you only gain widespread adoption of this if the legacy case is
super streamlined. If instead you tell developers:

"Because we forced a constructor pattern, albeit without truly being able
to use class and extends, we hunted and pecked around the DOM and monkey
patched a bunch of things so you can construct one-off, weak sauce variants
of inherited constructors...just to use with this one method...oh, and
don't try to really use ES6 stuff, because we're just faking a small part
of it."

That sounds kinda gross IMO.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Feb 14, 2013 at 2:53 PM, Scott Miles sjmi...@google.com wrote:




 On Thu, Feb 14, 2013 at 2:48 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:23 PM, Scott Miles sjmi...@google.com wrote:
  MyButton = document.register(‘x-button’, {
prototype: MyButton.prototype,
lifecycle: {
   created: MyButton
}
  });
 
  What's the benefit of allowing this syntax? I don't immediately see why
 you
  couldn't just do it the other way.

 Daniel answered the direct question, I think,


 I must have missed that.


 but let me see if I
 understand the question hiding behind your question :)

 Why can't we just have one API, since these two are so close already?
 In other words, can we not just use constructor API and return a
 generated constructor?

 Do I get a cookie? :)

 :DG


 Well, yes, here ya go: (o). But I must be missing something. You wouldn't
 propose two APIs if they were equivalent, and I don't see how these are not
 (in any meaningful way).



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
No, I believe this is *precisely* the thing to worry about - these nits and
catch-case gotchas are the sort of things developers see in an emerging
API/polyfill and say "aw, that looks like a fractured, uncertain hassle,
I'll just wait until it is native in all browsers" -- we must avoid this
at all cost, the web needs this *now*.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Thu, Feb 14, 2013 at 3:16 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:53 PM, Scott Miles sjmi...@google.com wrote:

  Well, yes, here ya go: (o). But I must be missing something. You wouldn't
  propose two APIs if they were equivalent, and I don't see how these are
 not
  (in any meaningful way).

 The only difference is that one spits out a generated constructor, and
 the other just returns a constructor unmodified (well, not in a
 detectable way). My thinking was that if we have both be one and the
 same API, we would have:

 1) problems writing specification in an interoperable way (if you can
 override [[Construct]] function, then do this...)

 2) problems with authors seeing different effects of the API on each
 browser (in Webcko, I get the same object as I passed in, maybe I
 don't need the return value, oh wait, why does it fail in Gekit?)

 Am I worrying about this too much?

 :DG



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Daniel Buchner
What does it actually profit us to singularly tie document.register to an
ES6-esque syntax before ES6 even lands? No one is saying not to use it
*when it arrives*; we're offering a way to make sure the polyfill layer
isn't needlessly bound to inconsequential externalities.

Hell, if you wanted a single API, give the property a general name (like
"descriptor") and have it take both, distinguishing them by checking what
kind of object the value is... ***ducks***
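
(Loosely sketched, with a made-up helper name, just to show the type check
I mean:)

function normalizeDefinition(def) {
  if (typeof def === 'function') {
    // a constructor was passed - the ES6-ish path
    return { constructor: def, prototype: def.prototype };
  }
  // otherwise assume a plain prototype object - the current path -
  // and generate a constructor for it
  var GeneratedCtor = function () {};
  GeneratedCtor.prototype = def;
  return { constructor: GeneratedCtor, prototype: def };
}
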
On Feb 14, 2013 5:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/14/13 6:03 PM, Dimitri Glazkov wrote:

 Since these are two separate steps, I technically don't _need_ to put
 HTMLButtonElement.call(this) into my element's constructor -- it's a
 sure bet it will just be a useless dummy.


 For HTMLButtonElement, perhaps.  But for HTMLImageElement that's less
 clear.

 -Boris



Re: document.register and ES6

2013-02-07 Thread Daniel Buchner
Scott is right, there isn't a great polyfill answer for this part of the
spec, but fortunately it doesn't affect too many potential use-cases.
Developers will still go bananas for the functionality we can provide in
legacy UA versions.

- Daniel
 On Feb 7, 2013 8:51 PM, Scott Miles sjmi...@google.com wrote:

 Good reminder. On webkit/ff at least, we have made polyfills which can
 generate:

 <x-element> // for elements derived from HTMLUnknownElement
 <button is="x-element"> // for all other elements

 Since both syntaxes are now (to be) supported by spec, I think we are ok?

 Scott

 P.S. 100% of the custom elements in our current library are based on
 HTMLUnknownElement.

 P.P.S. Arv, do you have a preference from my three versions (or none of
 the above)?

 On Thu, Feb 7, 2013 at 7:16 PM, Erik Arvidsson a...@chromium.org wrote:

 Actually, that reminds me. How is a polyfill supposed to be able to
 create an element with the tag name my-button and still have the instance
 be an HTMLButtonElement? That does not seem possible. I guess a polyfill
 just need to limit the base to HTMLUnknownElement.
 On Feb 7, 2013 8:44 PM, Scott Miles sjmi...@google.com wrote:

 In my excitement for getting something that worked on the Big Three
 (webkit, IE, FF), I inadvertently cheated by adding an extra parameter to
 'document.register'.

 TL;DR version:

 Solutions to the extra parameter problem:

 1. go ahead and have an (optional) extra parameter to document.register

MyButton = function() { ... };
MyButton.prototype = { ... };
MyButton = document.register('x-button', MyButton, 'button');

 2. require user to chain prototypes himself and infer the tagName from
 the prototype

MyButton = function() { ... };
// making prototype requires icky DefineProperty syntax
// on IE the only purpose of chaining the prototype is for tagName
 inference
MyButton.prototype = Object.create(HTMLButtonElement.prototype, { });
MyButton = document.register('x-button', MyButton);

 3. introduce an intermediate method to build the 'class':

   MyButton = function() { ... };
   MyButton.prototype = { ... };
   MyButton = document.extendElement('button', MyButton);
   document.register('x-button', MyButton);

 Right now I'm preferring (3). WDTY?

 ===

 Long version:

 Recall that the goal is to support the simple notion of: make class,
 register to tag. So,

   class MyButton extends HTMLButtonElement...
   document.register('x-button', MyButton);

 My last proposed non-ES6 version had syntax like this:

MyButton = function() { ... };
MyButton.prototype = { ... };
MyButton = document.register('x-button', MyButton, 'button');

 The third parameter serves two purposes: (1) allows register to chain
 the correct prototype (HTMLButtonElement.prototype) to MyButton.prototype
 and (2) allows register to generate an appropriate constructor (i.e. one
 the instantiates a 'button' element).

 Trying to remove the third parameter created some new quandries.

 One avenue is to suggest the developer set up the inheritance chain
 outside of register. This is immediately appealing because it's very close
 to the 'goal syntax'. IOW,

MyButton = function() { ... };
MyButton.prototype = Object.create(HTMLButtonElement.prototype, { });
MyButton = document.register('x-button', MyButton);

 Btw: this form requires inferring the tag name 'button' from
 HTMLButtonElement.prototype, which AFAIK, is not as easy as it sounds.

 There are two potential downsides (1) adding properties to
 MyButton.prototype requires property descriptors (or some helper function
 the user must write), (2) on IE we aren't using the chained prototype
 anyway.

 An alternative would be to introduce another method, something like
 'document.extendElement'. Usage would be like so:

   MyButton = function() { ... };
   MyButton.prototype = { ... };
   MyButton = document.extendElement('button', MyButton);
   document.register('x-button', MyButton);

 Scott


 On Thu, Feb 7, 2013 at 3:15 PM, Dimitri Glazkov dglaz...@google.comwrote:




 On Wed, Feb 6, 2013 at 10:01 AM, Boris Zbarsky bzbar...@mit.eduwrote:

 On 2/6/13 5:07 PM, Erik Arvidsson wrote:

 This refactoring is needed for ES6 anyway so it might be worth looking
 into no matter what.


 Well, yes, but it's a matter of timeframes.  It's incredibly unlikely
 that a complete refactoring of how functions are implemented (which is 
 what
 I was given to understand would be required here) could be done in the 
 next
 several months to a year  I doubt we want to wait that long to do
 document.register.


 This is a valid and important concern. Boris, in your opinion, what is
 the most compatible way to proceed here? I see a couple of options, but
 don't know how difficult they will be implementation-wise:

 1) Expose the ability to override [[Construct]]. Arv tells me that he
 spoke with V8 peeps and they think they can do this fairly easily. How's
 the SpiderMonkey story looking here?

 2) 

Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
On Wed, Feb 6, 2013 at 9:03 AM, Erik Arvidsson a...@chromium.org wrote:

 On Tue, Feb 5, 2013 at 8:26 PM, Daniel Buchner dan...@mozilla.com wrote:
  I have two questions:
 
  Does this affect our ability to polyfill doc.register in current
 browsers?

 Good point. This is really important to us as well so we most likely
 need to tweak this to make sure it will work.

 Do we need to be able to do new MyButton or is
 document.createElement/innerHTML/parser sufficient? If we need to be
 able to do new in the polyfill I think we either need to tweak
 document.register or get the developer to cooperate (by writing
 different code). At this point I don't see how we can tweak the API
 and still fulfill all of the requirements.

  Are you saying we're going to nix the ability to easily register
 insertion,
  removal, and attribute change callbacks from the API?

 No. I don't think there is any change here. Instead of passing in
 functions to document.register we can call methods on the custom
 element. For example:

 class MyButton extends HTMLButtonElement {
   constructor() {
 super();
 this.moreInit();
   }
   handleAttributeChange(name, value) { ... }
   moreInit() { ... }
   ...
 }
 document.register('my-button', MyButton);


Above you say the developer would simply call methods on the custom
element - how do you figure? Are we going to imbue all elements with
magical handleRemoved, handleAttributeChange, etc., function handlers that,
when simply defined in the closure, implicitly hook into mutation
observations? I hope this isn't the case, as it would be incredibly obtuse
and extremely magical. If that's not what you mean, please provide a full
example that leaves out none of the boilerplate - I would like to see what
developers are really in for.

You start the code section below with "instead of", but nothing below
that point accurately represents how the spec currently behaves.


 instead of

 var myButtonPrototype = Object.create(HTMLButtonElement.prototype, {
   handleAttributeChange: {
 value: function(name, value) { ... },
 enumerable: true,
 configurable: true,
 writable: true
   },
   moreInit: {
 value: function() { ... },
 enumerable: true,
 configurable: true,
 writable: true
   },
   ...
 });

 var MyButton = document.register('my-button', {
   prototype: myButtonPrototype,
   created: function(element) {
 element.moreInit();
   },
   attributeChange: function(element, name, value) {
 element.handleAttibuteChange(name, value);
   }
 });


Currently it is:

var MyButton = document.register('my-button', {
  lifecycle: {
    attributeChanged: function(){ ... }
  }
});


Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
Scott: is this example not intended to work in IE9? It throws; the output
object is missing the 'oranginate' method.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Feb 6, 2013 at 12:32 PM, Scott Miles sjmi...@google.com wrote:

 There were several errors in my pseudo-code, here is a working version:

 http://jsfiddle.net/yNbnL/1/

 S


 On Wed, Feb 6, 2013 at 12:01 PM, Scott Miles sjmi...@google.com wrote:

 Errata:
  makePrototypeTwiddlingConstructorForDomNodes needs to know the extendee

  var ctor = makePrototypeTwiddlingConstructorForDomNodes(inExtends,
 inClass);


 On Wed, Feb 6, 2013 at 11:59 AM, Scott Miles sjmi...@google.com wrote:

 On Wed, Feb 6, 2013 at 11:18 AM, Erik Arvidsson a...@chromium.orgwrote:

 On Wed, Feb 6, 2013 at 1:38 PM, Scott Miles sjmi...@google.com wrote:
  Sorry, replace MyButton.super() with MyButton.super.call(this);
 
 
  On Wed, Feb 6, 2013 at 10:37 AM, Scott Miles sjmi...@google.com
 wrote:
 
  So, neglecting issues around the syntax of document.register and the
  privatization of callbacks, is it fair to say the following is the
 intended
  future:
 
  class MyButton extends HTMLButtonElement {
constructor() {
  super();
  // make root, etc.
}
  }
  document.register('x-button', MyButton);
 
  If so then can we do this in the present:
 
  MyButtonImpl = function() {

 What do you mean here?

MyButton.super();

 Did you get that backwards? I don't see how MyButtonImpl can be
 derived from MyButton.


 Its not. The 'super' means 'the super-class constructor for MyButton
 that does not include magic DOM object generation' (in this case,
 HTMLButtonElement). For MyDerivedButton, MyDerivedButton.super would point
 to MyButtonImpl.

 The existence of MyButtonImpl is an unfortunate side-effect of needing a
 generated constructor.

 The idea is to correspond as closely as possible with the ES6
 version. MyButtonImpl goes away in ES6, it's purpose in the meantime is
 just to provide something that looks like a proper class.

 I could write it this way:

 *MyButton = function() {

   MyButton.super();
   // make root, etc.
 };
 MyButton.prototype = Object.create(HTMLButtonElement, { ... });*
 *
 MyButton = document.register(‘x-button’, MyButton);
 *

 Written this way, MyButton no longer refers to the constructor you
 specified, but instead refers to the generated constructor. This is
 conceptually cleaner, but it's a bit tricky. For maximum clarity, I named
 the internal version MyButtonImpl in my example code, but there is no
 reason to have that symbol.



// make root, etc.
  };
  MyButtonImpl.prototype = Object.create(HTMLButtonElement, { ... });
 
  // the ‘real’ constructor comes from document.register
  // register injects ‘super’ into MyButton
  MyButton = document.register(‘x-button’, MyButtonImpl);

 What is the relationship between MyButton and MyButtonImpl?

 If MyButton.__proto__ === MyButtonImpl and
 MyButton.prototype.__proto__ === MyButtonImpl.prototype then this
 might work (but this cannot be polyfilled either).


 MyButton.prototype == MyButtonImpl.prototype or
 MyButton.prototype.__proto__ == MyButtonImpl.prototype, depending on needs.

 MyButton itself does magic DOM construction work that we cannot do with
 normal inheritance, then invokes MyButtonImpl. MyButtonImpl is never used
 as a constructor itself (not as an argument to 'new' anyway).

 From the user's perspective, he has made a single class which implements
 his element (the goal!). The unfortunate name shenanigan (I called my class
 MyButtonImpl, but after 'register' I refer to it as MyButton) is the
 simplest way I could conceive to overcome the 'generated constructor'
 problem.

 To be clear, everything I come up with is intended to polyfill (modulo
 my error), because I generally am writing those myself (at first anyway).
 One version might look like this:

 document.register = function(inExtends, inClass) {
   var ctor = makePrototypeTwiddlingConstructorForDomNodes(inClass);
   ctor.prototype = inClass.prototype;
   addToTagRegistry(inExtends, ctor, inClass);
   ctor.super = getClassForExtendee(inExtends);
   return ctor;
 };


 --
 erik







Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
So you're directly setting the user-added methods on matched elements in
browsers that don't support __proto__, but what about accessors?

If we modified the spec (as previously suggested) to take an *unbaked*
prototype object, we could polyfill all property types:

var myButton = document.register('x-mybutton', {
  prototype: {
    foo: {
      set: function(){ ... },
      get: function(){ ... }
    }
  }
});

Equipped with the unbaked prototype descriptor, in your upgrade phase, you
should be able to simply bake the node with:
Object.defineProperties(element, unbakedPrototypeDescriptor) - right?
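
(Rough sketch of that upgrade step, assuming the registry kept the unbaked
descriptor object around; names are made up:)

function upgrade(element, unbakedPrototypeDescriptor) {
  // Copies accessors and methods straight onto the instance,
  // so no __proto__ support is needed in the browser.
  Object.defineProperties(element, unbakedPrototypeDescriptor);
  return element;
}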

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Feb 6, 2013 at 1:07 PM, Scott Miles sjmi...@google.com wrote:

 Well, this (non-robust quicky test) works in IE:

 http://jsfiddle.net/zUzCx/1/


 On Wed, Feb 6, 2013 at 12:59 PM, Scott Miles sjmi...@google.com wrote:

 Afaik, the 'generated constructor' is technical debt we are stuck with
 until we can actually invoke DOM constructors from JS. If there is a better
 way around it, I'm all ears!

 polyfilling without __proto__: I don't know if it's possible, which is a
 good point. I was basically ignoring that problem, but I guess I should not
 do that: we may have to utterly change our target.

 Iow, perhaps we can decorate node instances with API scraped off of a
 separate prototype chain. Ironically, this is close to what my component
 sugaring layer does anyway, in order to support protected API.

 S


 On Wed, Feb 6, 2013 at 12:50 PM, Erik Arvidsson a...@chromium.org wrote:

 If we are willing to return a new constructor function I think we have
 no problems. I was concerned that it would lead to people using the
 wrong function but it does solve the issues.

 class MyButtonImpl extends HTMLButtonElement {
 }
 let MyButton = document.register('my-button', {
   class: MyButtonImpl // maybe call the property implementation if
 we don't want to use class.
 });

 I feel like this is getting close to my pain tolerance for boilerplate
 code but I'm willing to realize that currently this is the only
 working proposal (for polyfilling in __proto__ browser).

 I'm still curious how people are planning to do this in non __proto__
 browsers?

 On Wed, Feb 6, 2013 at 3:43 PM, Scott Miles sjmi...@google.com wrote:
  Yes, it's not intended to work in IE ... I used __proto__.
 
 
  On Wed, Feb 6, 2013 at 12:41 PM, Daniel Buchner dan...@mozilla.com
 wrote:
 
  Scott: is this example not intended to work in IE9? It throws, the
 output
  object is missing the 'oranginate' method.
 
  Daniel J. Buchner
  Product Manager, Developer Ecosystem
  Mozilla Corporation
 
 
  On Wed, Feb 6, 2013 at 12:32 PM, Scott Miles sjmi...@google.com
 wrote:
 
  There were several errors in my pseudo-code, here is a working
 version:
 
  http://jsfiddle.net/yNbnL/1/
 
  S
 
 
  On Wed, Feb 6, 2013 at 12:01 PM, Scott Miles sjmi...@google.com
 wrote:
 
  Errata:
   makePrototypeTwiddlingConstructorForDomNodes needs to know the
 extendee
 
  var ctor = makePrototypeTwiddlingConstructorForDomNodes(inExtends,
  inClass);
 
 
  On Wed, Feb 6, 2013 at 11:59 AM, Scott Miles sjmi...@google.com
 wrote:
 
  On Wed, Feb 6, 2013 at 11:18 AM, Erik Arvidsson a...@chromium.org
  wrote:
 
  On Wed, Feb 6, 2013 at 1:38 PM, Scott Miles sjmi...@google.com
  wrote:
   Sorry, replace MyButton.super() with MyButton.super.call(this);
  
  
   On Wed, Feb 6, 2013 at 10:37 AM, Scott Miles 
 sjmi...@google.com
   wrote:
  
   So, neglecting issues around the syntax of document.register
 and
   the
   privatization of callbacks, is it fair to say the following is
 the
   intended
   future:
  
   class MyButton extends HTMLButtonElement {
 constructor() {
   super();
   // make root, etc.
 }
   }
   document.register('x-button', MyButton);
  
   If so then can we do this in the present:
  
   MyButtonImpl = function() {
 
  What do you mean here?
 
 MyButton.super();
 
  Did you get that backwards? I don't see how MyButtonImpl can be
  derived from MyButton.
 
 
  Its not. The 'super' means 'the super-class constructor for
 MyButton
  that does not include magic DOM object generation' (in this case,
  HTMLButtonElement). For MyDerivedButton, MyDerivedButton.super
 would point
  to MyButtonImpl.
 
  The existence of MyButtonImpl is an unfortunate side-effect of
 needing
  a generated constructor.
 
  The idea is to correspond as closely as possible with the ES6
 version.
  MyButtonImpl goes away in ES6, it's purpose in the meantime is
 just to
  provide something that looks like a proper class.
 
  I could write it this way:
 
  MyButton = function() {
 
   MyButton.super();
   // make root, etc.
  };
  MyButton.prototype = Object.create(HTMLButtonElement, { ... });
  MyButton = document.register(‘x-button’, MyButton);
 
  Written this way, MyButton no longer refers to the constructor you
  specified, but instead refers to the generated constructor

Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
I just made sure it worked, and it does. As for developers freaking out, I
really don't believe they would. If that was the case,
Object.defineProperties should be causing a global pandemic of whopper
developer freakouts (http://www.youtube.com/watch?v=IhF6Kr4ITNQ).

This would give us easy IE compat for the whole range of property types,
and I'm willing to all but guarantee developers will have a bigger freakout
about not having IE9 support than the prototype property of
document.register taking both a baked and unbaked object.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Feb 6, 2013 at 1:34 PM, Scott Miles sjmi...@google.com wrote:

 On Wed, Feb 6, 2013 at 1:27 PM, Daniel Buchner dan...@mozilla.com wrote:

 So you're directly setting the user-added methods on matched elements in
 browsers that don't support proto, but what about accessors?


 I believe those can be forwarded too, I just didn't bother in my fiddle.


 Equipped with the unbaked prototype descriptor, in your upgrade phase,
 you should be able to simply bake the node with:
 Object.defineProperties(element, unbakedPrototypeDescriptor) - right?


 Yes, but I believe developers would freak out if we required them to
 provide that type of descriptor (I would).

  snip



Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
I guess it isn't a show stopper for poly-*ish*-fills - I would just wrap
the native document.register method where it is present, sniff the incoming
prototype property value to detect whether it was baked, cache the unbaked
prototype, and then pass a baked one to the native method.

Of course this means we'll (I'll) be evangelizing a polyfill with a
slightly augmented wrapper for taking unbaked objects, but for IE
compatibility devs will probably offer their first born, so I doubt they'll
bat an eye at such a benign incongruity.

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Feb 6, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

 Remember where we started: absurdly clean ES6 class syntax.

 Requiring class definition class using property descriptors is a radical
 march in the other direction.

 I'm hardcore about syntactical tidiness. The reason I'm not freaking out
 about defineProperties is IMO because I can avoid it when I don't need it
 (which is about 99% of the time).

 Scott


 On Wed, Feb 6, 2013 at 1:50 PM, Daniel Buchner dan...@mozilla.com wrote:

 I just made sure it worked, and it does. As for developers freaking out,
 I really don't believe they would. If that was the case,
 Object.defineProperties should be causing a global pandemic of 
 whopperdeveloper freakouts (
 http://www.youtube.com/watch?v=IhF6Kr4ITNQ).

 This would give us easy IE compat for the whole range of property types,
 and I'm willing to all but guarantee developers will have a bigger freakout
 about not having IE9 support than the prototype property of
 document.register taking both a baked and unbaked object.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Feb 6, 2013 at 1:34 PM, Scott Miles sjmi...@google.com wrote:

 On Wed, Feb 6, 2013 at 1:27 PM, Daniel Buchner dan...@mozilla.comwrote:

 So you're directly setting the user-added methods on matched elements
 in browsers that don't support proto, but what about accessors?


 I believe those can be forwarded too, I just didn't bother in my fiddle.


 Equipped with the unbaked prototype descriptor, in your upgrade phase,
 you should be able to simply bake the node with:
 Object.defineProperties(element, unbakedPrototypeDescriptor) - right?


 Yes, but I believe developers would freak out if we required them to
 provide that type of descriptor (I would).

  snip






Re: document.register and ES6

2013-02-06 Thread Daniel Buchner
Short of running Object.getOwnPropertyNames on the existing node, then
iterating over each to grab the property descriptor with
Object.getOwnPropertyDescriptor to rebuild an unbaked object, and finally
setting the properties with Object.defineProperties, I am unaware of how to
do so - is there an easier way? If so I would love to not do the above or
go the unbaked-object-allowance wrapper route :)
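
(i.e., roughly this - just a sketch:)

function rebuildDescriptorMap(obj) {
  var descriptors = {};
  Object.getOwnPropertyNames(obj).forEach(function (name) {
    descriptors[name] = Object.getOwnPropertyDescriptor(obj, name);
  });
  return descriptors;
}

// ...and then, during upgrade:
// Object.defineProperties(element, rebuildDescriptorMap(bakedPrototype));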

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Wed, Feb 6, 2013 at 2:28 PM, Scott Miles sjmi...@google.com wrote:

 Seems like you decided that descriptor syntax is *necessary* for IE
 compatibility. I'm 80% sure it is not.

 S


 On Wed, Feb 6, 2013 at 2:10 PM, Daniel Buchner dan...@mozilla.com wrote:

 I guess it isn't a show stopper for poly-*ish*-fills, I would just wrap
 the native document.register method where it is present  sniff the
 incoming prototype property value to detect whether it was baked  cache
 the unbaked prototype  then pass a baked one to the native method.

 Of course this means we'll (I'll) be evangelizing a polyfill with a
 slightly augmented wrapper for taking unbaked objects, but for IE
 compatibility devs will probably offer their first born, so I doubt they'll
 bat an eye at such a benign incongruity.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Feb 6, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

 Remember where we started: absurdly clean ES6 class syntax.

 Requiring class definition class using property descriptors is a radical
 march in the other direction.

 I'm hardcore about syntactical tidiness. The reason I'm not freaking out
 about defineProperties is IMO because I can avoid it when I don't need it
 (which is about 99% of the time).

 Scott


 On Wed, Feb 6, 2013 at 1:50 PM, Daniel Buchner dan...@mozilla.comwrote:

 I just made sure it worked, and it does. As for developers freaking
 out, I really don't believe they would. If that was the case,
 Object.defineProperties should be causing a global pandemic of 
 whopperdeveloper freakouts (
 http://www.youtube.com/watch?v=IhF6Kr4ITNQ).

 This would give us easy IE compat for the whole range of property
 types, and I'm willing to all but guarantee developers will have a bigger
 freakout about not having IE9 support than the prototype property of
 document.register taking both a baked and unbaked object.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Wed, Feb 6, 2013 at 1:34 PM, Scott Miles sjmi...@google.com wrote:

 On Wed, Feb 6, 2013 at 1:27 PM, Daniel Buchner dan...@mozilla.comwrote:

 So you're directly setting the user-added methods on matched elements
 in browsers that don't support proto, but what about accessors?


 I believe those can be forwarded too, I just didn't bother in my
 fiddle.


 Equipped with the unbaked prototype descriptor, in your upgrade
 phase, you should be able to simply bake the node with:
 Object.defineProperties(element, unbakedPrototypeDescriptor) - right?


 Yes, but I believe developers would freak out if we required them to
 provide that type of descriptor (I would).

  snip








Re: document.register and ES6

2013-02-05 Thread Daniel Buchner
I have two questions:

   1. Does this affect our ability to polyfill doc.register in current
   browsers?
   2. Are you saying we're going to nix the ability to easily register
   insertion, removal, and attribute change callbacks from the API?

I believe #2 is very important and should not be discarded, while #1 is
simply a deal breaker. Can you explain how your proposal accounts for these
concerns?

- Daniel

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Tue, Feb 5, 2013 at 2:12 PM, Erik Arvidsson a...@chromium.org wrote:

 The way document.register is currently proposed makes it
 future-hostile to ES6. I've heard several people from different
 organizations say that this is a blocking issue.

 Over the last couple of days we (me, Dimitri and others) have worked
 on some alterations to the current spec proposal. The discussion got
 pretty extensive so I'll try to summarize the main points.

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=20831

 With ES6 we really want to be able to write code like this:

 class MyButton extends HTMLButtonElement {
   ...
 }
 document.register('x-button', MyButton);

 In ES6 speak, we have split the new Foo(...args) expression into
 Foo.call(Foo[@@create](), ...args) which means that creating the
 instance has been separated from the call to the function. This allows
 us to subclass Array etc. It also opens up possibilities to subclass
 Element. All Element need is a @@create method that creates the
 instance (and sets the internal pointer to the underlying C++ object
 like we do today). For custom elements we can therefore generate the
 @@create method for the function passed to document.register. This
 function would create an instance in the same way as previously speced
 at
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-custom-element-instantiation

 == What about ES5/3? ==

 In ES5/3 speak we don't have the @@create refactoring but we do have
 an internal [[Construct]] method. In the reformed version of
 document.create we can override [[Construct]] of the passed in
 function to create the instance and then do the [[Call]].

 This also means API change from what is currently specified. Instead of

 document.register(‘x-button’, { prototype:
 Object.create(HTMLButtonElement.prototype, {
   ...
 });

 We will have:

 function MyButton() {
   HTMLButtonElement.call(this);
   // yay, I can have a real constructor!
   // do constructor stuff ...
 }
 MyButton.prototype = Object.create(HTMLButtonElement.prototype, {
   ...
 });
 document.register(‘x-button’, MyButton);

 We think it’s much better, because:
 a) we no longer have to spit out some magic generated constructor
 b) we can let developers have a constructor in their custom element
 c) there will be no API changes when ES6 classes arrive
 d) there is no longer a need for crazy callbacks “created” and
 “shadowRootCreated”, because they can just be code in the constructor

 == Does this mean that the user code now runs while parsing? ==

 We’ve heard in the past that allowing user code execution while the
 parser is building a tree is undesirable due to performance and
 specific design issues. However, now that the custom element
 constructor is no longer generated, it may appear as if the
 user-specified constructor would run when each element is
 instantiated.

 We intend to address this as follows:
 * When the parser builds a tree, it only creates underlying C++ objects
 * Just before entering script,
 * we first instantiate all custom elements (think Object.create, but
 with all the baggage of being a wrapper around a C++ object), so that
 they all have the right prototype chains, in tree order
 * then, we invoke the respective internal [[Call]] methods of all
 custom elements, in tree order

 How does template fit into this?

 The dependencies on template and shadow DOM are now removed from
 document.register API. If people want to use a template and shadow DOM
 they can easily do this in code:

 class MyButton extends HTMLButtonElement {
   constructor() {
 super();
 var template = ...
 var shadowRoot = this.createShadowRoot();
 shadowRoot.appendChild(template.content.cloneNode(true));
   }
 }
 document.register('x-button', MyButton);


 --
 erik



Re: document.register and ES6

2013-02-05 Thread Daniel Buchner
*So this won't work?*

var MyButton = document.register(‘x-mybutton’, {
  prototype: Object.create(HTMLButtonElement.prototype, { ... })
});
class MySuperButton extends MyButton { ... };
document.register('x-superbutton', MySuperButton);

*But this will?*

function MyButton() {
  HTMLButtonElement.call(this);
  // yay, I can have a real constructor!
  // do constructor stuff ...
}
MyButton.prototype = Object.create(HTMLButtonElement.prototype, {
  ...
});
document.register(‘x-button’, MyButton);

Can someone reply with a terse pro/con/gain/loss list that identifies what
parts of the current API (and its behavior) this proposal affects?

Daniel J. Buchner
Product Manager, Developer Ecosystem
Mozilla Corporation


On Tue, Feb 5, 2013 at 2:12 PM, Erik Arvidsson a...@chromium.org wrote:

 The way document.register is currently proposed makes it
 future-hostile to ES6. I've heard several people from different
 organizations say that this is a blocking issue.

 Over the last couple of days we (me, Dimitri and others) have worked
 on some alterations to the current spec proposal. The discussion got
 pretty extensive so I'll try to summarize the main points.

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=20831

 With ES6 we really want to be able to write code like this:

 class MyButton extends HTMLButtonElement {
   ...
 }
 document.register('x-button', MyButton);

 In ES6 speak, we have split the new Foo(...args) expression into
 Foo.call(Foo[@@create](), ...args) which means that creating the
 instance has been separated from the call to the function. This allows
 us to subclass Array etc. It also opens up possibilities to subclass
 Element. All Element need is a @@create method that creates the
 instance (and sets the internal pointer to the underlying C++ object
 like we do today). For custom elements we can therefore generate the
 @@create method for the function passed to document.register. This
 function would create an instance in the same way as previously speced
 at
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-custom-element-instantiation

 == What about ES5/3? ==

 In ES5/3 speak we don't have the @@create refactoring but we do have
 an internal [[Construct]] method. In the reformed version of
 document.create we can override [[Construct]] of the passed in
 function to create the instance and then do the [[Call]].

 This also means API change from what is currently specified. Instead of

 document.register(‘x-button’, { prototype:
 Object.create(HTMLButtonElement.prototype, {
   ...
 });

 We will have:

 function MyButton() {
   HTMLButtonElement.call(this);
   // yay, I can have a real constructor!
   // do constructor stuff ...
 }
 MyButton.prototype = Object.create(HTMLButtonElement.prototype, {
   ...
 });
 document.register(‘x-button’, MyButton);

 We think it’s much better, because:
 a) we no longer have to spit out some magic generated constructor
 b) we can let developers have a constructor in their custom element
 c) there will be no API changes when ES6 classes arrive
 d) there is no longer a need for crazy callbacks “created” and
 “shadowRootCreated”, because they can just be code in the constructor

 == Does this mean that the user code now runs while parsing? ==

 We’ve heard in the past that allowing user code execution while the
 parser is building a tree is undesirable due to performance and
 specific design issues. However, now that the custom element
 constructor is no longer generated, it may appear as if the
 user-specified constructor would run when each element is
 instantiated.

 We intend to address this as follows:
 * When the parser builds a tree, it only creates underlying C++ objects
 * Just before entering script,
 * we first instantiate all custom elements (think Object.create, but
 with all the baggage of being a wrapper around a C++ object), so that
 they all have the right prototype chains, in tree order
 * then, we invoke the respective internal [[Call]] methods of all
 custom elements, in tree order

 How does template fit into this?

 The dependencies on template and shadow DOM are now removed from
 document.register API. If people want to use a template and shadow DOM
 they can easily do this in code:

 class MyButton extends HTMLButtonElement {
   constructor() {
 super();
 var template = ...
 var shadowRoot = this.createShadowRoot();
 shadowRoot.appendChild(template.content.cloneNode(true));
   }
 }
 document.register('x-button', MyButton);


 --
 erik