Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Anne van Kesteren
On Thu, Mar 7, 2013 at 11:25 PM, Dimitri Glazkov dglaz...@google.com wrote:
 Please look over it. I look forward to your eagle-eyed insights in the
 form of bugs and emails.

You try to monkey patch the "obtain" algorithm, but in doing so you
invoke a different fetch algorithm, one which does not expose
resources as CORS-cross-origin. Also, for rel=component tainted
resources make no sense, so we should only use "No CORS" in
combination with "fail".

Why is Component not simply a subclass of Document? If you already
have a Document object you might as well use that directly...

Also, it sounds like this specification should be titled "Fetching
components" or some such, as that's about all it defines. Can't we just
put all the component stuff in one specification? I find the whole
organization quite confusing.


-- 
http://annevankesteren.nl/



Re: File API: Blob.type

2013-03-08 Thread Anne van Kesteren
On Thu, Mar 7, 2013 at 6:35 PM, Arun Ranganathan
aranganat...@mozilla.com wrote:
 But I'm not sure about why we'd choose ByteString in lieu of being strict
 with what characters are allowed within DOMString.  Anne, can you shed some
 light on this?  And of course we should eliminate CR + LF as a possibility
 at constructor invocation time, possibly by throwing.

MIME/HTTP consists of byte sequences, not code points. ByteString is a
basic JavaScript string with certain restrictions on it to match the
byte sequence semantics, while still behaving like a string.
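For illustration, the restriction can be checked in a few lines of
JavaScript. This is a sketch of the idea only, not the WebIDL ByteString
conversion algorithm, and the function names are mine; the CR + LF check
matches the constructor-time rejection Arun mentions.

```javascript
// A ByteString is a JS string whose code units all fit in a byte, so it
// can round-trip to the byte-sequence semantics of MIME/HTTP.
function isByteString(s) {
  for (let i = 0; i < s.length; i++) {
    if (s.charCodeAt(i) > 0xff) return false; // not representable as a byte
  }
  return true;
}

// A Blob.type value additionally must not contain CR or LF, since those
// would allow header-splitting when the type is serialized into HTTP.
function isValidBlobType(s) {
  return isByteString(s) && !/[\r\n]/.test(s);
}
```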


-- 
http://annevankesteren.nl/



Re: File API for Review

2013-03-08 Thread Anne van Kesteren
On Thu, Mar 7, 2013 at 9:09 PM, Arun Ranganathan
aranganat...@mozilla.com wrote:
 I'm also not convinced that leaving what exactly to return in the
 HTTP scenario open to implementors is a good thing. We've been through
 such things before and learned that handwaving is bad. Lets just pick
 something.

 Just to be clear, are you referring to the 500 Error Condition for Blob URLs?
 If so, the only handwaving is about the text of the error message.  I'm happy 
 to tighten even this.

So what I actually think we should do here is treat this as a network
error. XMLHttpRequest already knows about that concept and every other
end point also deals with network errors in a predictable and
standardized way. Phrasing such as "Act as if a network error
occurred" seems sufficient for now (until Fetch provides hooks).


 Right now, the specification encourages user agents to get encoding from:

 1. The encoding parameter supplied with the readAsText.
 2. A byte order detection heuristic, if 1. is missing.
 3. The charset component of Blob.type, if provided and if 1. and 2. yield no 
 result.
 4. Just use utf-8 if 1, 2, and 3 yield no result.

 Under the encoding spec., it returns failure if encoding isn't valid, and it 
 returns
 failure if the BOM check fails.  So should the spec. say something about 
 throwing?

So I think the decoding part of readAsText() should become something
like this (assuming the argument to readAsText() is renamed to label):

1. Let /encoding/ be null.

2. If /label/ is given, set /encoding/ to the result of getting an
encoding (Encoding Standard) for /label/.

3. If /encoding/ is failure, throw a TypeError.

4. If /encoding/ is null, get an encoding from Blob.type (not sure
where this would be defined), and if that does not return failure, set
/encoding/ to the result.

5. If /encoding/ is null, set /encoding/ to utf-8.

6. Decode (Encoding Standard) the /byte stream/ (or whatever this is
called) using fallback encoding /encoding/.

If throwing above is something implementations do not wish to do, we
should change that to simply ignoring the argument if "get an
encoding" returns failure.
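The six steps above can be sketched as a pure function. This is an
illustrative sketch, not spec text: the `KNOWN_LABELS` table below is a
tiny stand-in for the Encoding Standard's "get an encoding" label
registry, and `chooseFallbackEncoding` stops where step 6 (the actual
decode) would begin.

```javascript
// Stand-in for the Encoding Standard's label registry (illustrative only).
const KNOWN_LABELS = new Map([
  ['utf-8', 'utf-8'], ['utf8', 'utf-8'],
  ['iso-8859-1', 'windows-1252'], ['latin1', 'windows-1252'],
]);

// "Get an encoding": normalize the label and look it up, or fail.
function getEncoding(label) {
  return KNOWN_LABELS.get(String(label).trim().toLowerCase()) ?? 'failure';
}

// label is the optional argument to readAsText(); blobType is e.g.
// 'text/plain;charset=iso-8859-1'. Returns the fallback encoding name.
function chooseFallbackEncoding(label, blobType) {
  let encoding = null;                                  // step 1
  if (label !== undefined) {
    encoding = getEncoding(label);                      // step 2
    if (encoding === 'failure') {                       // step 3
      throw new TypeError(`'${label}' is not a known encoding label`);
    }
  }
  if (encoding === null && blobType) {                  // step 4
    const m = /;\s*charset=([^;]+)/i.exec(blobType);
    if (m && getEncoding(m[1]) !== 'failure') {
      encoding = getEncoding(m[1]);
    }
  }
  if (encoding === null) encoding = 'utf-8';            // step 5
  return encoding;            // step 6 would decode the byte stream with it
}
```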


-- 
http://annevankesteren.nl/



Re: File API for Review

2013-03-08 Thread Henri Sivonen
Additionally, I think http://www.w3.org/TR/FileAPI/#dfn-type should
clarify that the browser must not use statistical methods to guess the
charset parameter part of the type as part of determining the type.
Firefox currently asks magic 8-ball, but the magic 8-ball is broken.
AFAICT, WebKit does not guess, so I hope it's possible to remove the
guessing from Firefox.

(The guessing in Firefox relies on a big chunk of legacy code that's
broken and shows no signs of ever getting fixed properly. The File API
is currently the only thing in Firefox that exposes the mysterious
behavior of said legacy code to the Web using the default settings of
Firefox, so I'm hoping to remove the big chunk of legacy code instead
of fixing it properly.)

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/



[admin] Yves Lafon replaces Doug Schepers as Team Contact

2013-03-08 Thread Arthur Barstow

Hi All,

We just wanted to inform you that Yves Lafon (yla...@w3.org) is WebApps' 
new Team Contact.


Doug - thanks for your previous work in WebApps and good luck in your 
new endeavors, especially webplatform.org.


Yves - welcome to the group. For those that don't know Yves, he has been 
on the W3C staff for many years and has expertise in several areas. My 
first interactions with Yves were at the beginning of this century [when 
most WebApps members were still in grammar school ;-)] when I was 
writing some Jigsaw-based servlets.  (See 
http://www.w3.org/People/Lafon/ for some information.)


-Regards, ArtB and Chaals



Re: Streams and Blobs

2013-03-08 Thread Glenn Maynard
On Thu, Mar 7, 2013 at 9:40 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Mar 7, 2013 at 4:42 PM, Glenn Maynard gl...@zewt.org wrote:
  The alternative argument is that XHR should represent the data source,
  reading data from the network and pushing it to Stream.

 I think this is the approach I'd take. At least in Gecko this would
 allow the XHR code to generally do the same thing it does today with
 regards to actions taken on incoming network data. The only thing we'd
 do differently is which consumer to send the data to. We already have
 several such consumers which are used to implement the different
 .responseType modes, so adding another one fits right in with that
 model.


But what about the issues I mentioned (you snipped them)?  We would be
introducing overlap between XHR and every consumer of URLs
(HTMLImageElement, HTMLVideoElement, CSS loads, CSS subresources, other
XHRs), which could each mean all kinds of potential script-visible interop
subtleties.

Some more issues:

- What happens if you do a sync XHR?  It would block forever, since you'll
never see the Stream in time to hook it up to a consumer.  You don't want
to just disallow this, since then you can't set up streams synchronously at
all.  With the "XHR finishes immediately" model, this is straightforward:
XHR returns as soon as the headers are finished, giving you the Stream to
do whatever you need with.

- What if you create an async XHR, then hook it up to a sync XHR?  Async
XHR only does work during the event loop, so this would deadlock (the async
XHR would never run to feed data to the sync one).

- You could set up an async XHR in one worker, then read it synchronously
with XHR in another worker.  This means the first worker could block the
second worker at will, eg. by running a blocking operation during an
onprogress event, to prevent returning to the event loop.  I'm sure we
don't want to allow that (at least without careful thought, eg. the
synchronous messaging idea).


From an author point of view it also means that the XHR object behaves
 consistently for all .responseTypes. I.e. the same set of events are
 fired and the XHR object goes through the same set of states. The only
 difference is in how the data is consumed.


It would be less consistent, not more.

With the supply-the-stream-and-it's-done model, XHR follows the same model
it normally does: you start a request, XHR does some work, and onload is
fired once the result is ready for you to use.

With the runs-for-the-duration-of-the-stream model, when is the .response
available?  You can't wait for onload (where it normally becomes
available), because that wouldn't happen until the stream is finished.  The
author has to listen for readystatechange and check for the LOADING state,
which is inconsistent with most of XHR.  (Apparently "text" works this way
too, but that's an incremental response, not a one-time event.)

-- 
Glenn Maynard


Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Dimitri Glazkov
On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
 Currently, if I document.register something, it's my job to supply a
 complete prototype.

 For HTMLElementElement on the other hand, I supply a tag name to extend, and
 the prototype containing the extensions, and the system works out the
 complete prototype.

 However, this ability of HTMLElementElement to construct a complete
 prototype from a tag-name is not provided by any imperative API.

 As I see it, there are three main choices:

 1. HTMLElementElement is recast as a declarative form of document.register,
 in which case it would have no 'extends' attribute, and you need to make
 your own (complete) prototype.

 2. We make a new API for 'construct prototype from a tag-name to extend and
 a set of extensions'.

 3. Make document.register work like HTMLElementElement does now (it takes a
 tag-name and partial prototype).

4. Let declarative syntax be a superset of the imperative API.

Can you help me understand why you feel that imperative and
declarative approaches must mirror each other exactly?

:DG



Re: [webcomponents]: Moving custom element callbacks to prototype/instance

2013-03-08 Thread Dimitri Glazkov
On Wed, Mar 6, 2013 at 1:55 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Cons:
 * The callbacks now hang out in the wind as prototype members. Foolish
 people can invoke them, inspectors show them, etc.

This con could get uncomfortably exciting if we try building HTML
elements with custom elements. For example, today all WebKit form
controls use the equivalent of the insertedCallback to hook up with
the form element. We could make these callbacks non-configurable so
that enthusiastic authors don't get any ideas, but these same authors
could still call them and wreak all kinds of havoc.

:DG



Re: [webcomponents]: Moving custom element callbacks to prototype/instance

2013-03-08 Thread Dimitri Glazkov
On Wed, Mar 6, 2013 at 4:26 PM, Blake Kaplan mrb...@gmail.com wrote:
 On Wed, Mar 6, 2013 at 1:55 PM, Dimitri Glazkov dglaz...@google.com wrote:
 1) Somehow magically chain create callbacks. In Lucy's case,
 foo-lucy will call both Raj's and Lucy's callbacks.
 2) Get rid of a separate lifecycle object and just put the callbacks
 on the prototype object, similar to printCallback
 (http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Jan/0259.html)

 I am leaning toward the second solution, but wanted to get your opinions.

 I also like the second solution, but Hajime's point about the
 mutability and general exposure of the lifecycle methods is a good
 one. Is there a motivation for having the lifecycle objects on the
 prototype as opposed to being passed in as an ancestor parameter?
 XBL1, as I understand it, automatically calls the
 constructor/destructor of extended bindings, but given the ad hoc
 nature of web components' inheritance, it seems like it would be much
 less surprising to make this stuff explicit *somewhere* (i.e. in the
 actual components rather than in the engine).

I think the idea of placing callbacks on the prototype was the
simplest to do, given that prototype inheritance provides all the
right machinery. If we deem this being a terrible idea, we'll need to
fallback to something like passing ancestor. But then we'll be going
against the grain of JS.

:DG



Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Erik Arvidsson
If you have a tag name it is easy to get the prototype.

var tmp = elementElement.ownerDocument.createElement(tagName);
var prototype = Object.getPrototypeOf(tmp);

On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
 Currently, if I document.register something, it's my job to supply a
 complete prototype.

 For HTMLElementElement on the other hand, I supply a tag name to extend, and
 the prototype containing the extensions, and the system works out the
 complete prototype.

 However, this ability of HTMLElementElement to construct a complete
 prototype from a tag-name is not provided by any imperative API.

 As I see it, there are three main choices:

 1. HTMLElementElement is recast as a declarative form of document.register,
 in which case it would have no 'extends' attribute, and you need to make
 your own (complete) prototype.

 2. We make a new API for 'construct prototype from a tag-name to extend and
 a set of extensions'.

 3. Make document.register work like HTMLElementElement does now (it takes a
 tag-name and partial prototype).

 4. Let declarative syntax be a superset of the imperative API.

 Can you help me understand why you feel that imperative and
 declarative approaches must mirror each other exactly?

 :DG




--
erik



Re: Web Storage's Normative References and PR / REC

2013-03-08 Thread Ian Hickson
On Thu, 7 Mar 2013, Philippe Le Hegaret wrote:
 
 The goal is to demonstrate that the materials referenced are stable and 
 any change to those references won't have an impact on the 
 recommendations.

What do you mean by "stable"? If we find something wrong with a REC, we 
still need to change it, since otherwise browsers are going to implement 
things that are wrong... (e.g. anyone implementing HTML4 now is going to 
be in a world of trouble because HTML4 has all kinds of mistakes in it, 
despite being a REC -- HTML4 is not stable at all.)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-08 Thread Dave Methvin
On Thu, Mar 7, 2013 at 2:02 PM, Boris Zbarsky bzbar...@mit.edu

 But you want to continue linking to the version hosted on the Disqus
 server instead of hosting it yourself and modifying as desired, presumably?

 Because if you're hosting yourself you can certainly just make a slight
 modification to opt into not hiding the implementation if you want, right?


Yeah, I actually do want to use their copy. It's similar to monkey-patching
and I'd argue less fragile than making my own copy that is destined to
always be stale. I use a Chrome plugin for GMail called GMelius that does
something similar, it just tweaks parts of the Gmail UI.


Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Anne van Kesteren
On Fri, Mar 8, 2013 at 6:03 PM, Dimitri Glazkov dglaz...@google.com wrote:
 I just mirrored LinkStyle
 (http://dev.w3.org/csswg/cssom/#the-linkstyle-interface) here. Given
 that  document already has URL, you're right -- I don't need the
 Component interface at all. LinkComponent could just have a content
 attribute that returns Document. Also, there's no need for
 sub-classing anything. Components are just documents.

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=21225

If you still want to point to the embedding element though you'll need
to subclass Document in some way, but maybe that is not needed for
now.


 Also, it sounds like this specification should be titled Fetching
 components or some such as that's about all it defines. Can't we just
 put all the component stuff in one specification? I find the whole
 organization quite confusing.

 Components don't directly correlate with custom elements. They are
 just documents that you can load together with your document. With
 things like multi-threaded parser, these are useful on their own, even
 without custom elements.

Because they don't have an associated browsing context? What other use
case are you describing here? That seems like a potential problem by
the way. That subresources from such a document such as img will not
load because there's no associated browsing context.


-- 
http://annevankesteren.nl/



Re: The need to re-subscribe to requestAnimationFrame

2013-03-08 Thread Jonas Sicking
On Mar 2, 2013 6:32 AM, Florian Bösch pya...@gmail.com wrote:

 You can also wrap your own requestAnimationFrameInterval like so:

 var requestAnimationFrameInterval = function(callback){
   var runner = function(){
 callback();
 requestAnimationFrame(runner);
   };
   runner();
 }

 This will still stop if there's an exception thrown by callback, but it
lets you write a cleaner invocation like so:

 requestAnimationFrameInterval(function(){
   // do stuff
 });

 It does not give you a way to stop that interval (except throwing an
exception), but you can add your own if you're so inclined.

 Notably, you could not flexibly emulate requestAnimationFrame (single)
via requestAnimationFrameInterval, so if you're gonna pick one semantic to
implement, it's the former rather than the latter.

For what it's worth, this would have been another (maybe better) way to
address the concern that current spec tries to solve by requiring
reregistration.

I.e. we could have defined a

id = requestAnimationFrameInterval(callback)
cancelAnimationFrameInterval(id)

Set of functions which automatically cancel the interval if an exception is
thrown.

That reduces the current risk that people write code that reregister at the
top, and then has a bug further down which causes an exception to be thrown.
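A sketch of how such a pair could behave, with the scheduler injected so
the auto-cancel semantics are visible without a browser. The names and
details here are my assumptions based on Jonas's proposal, not a spec; in
a browser the scheduler would be requestAnimationFrame itself.

```javascript
// Builds the proposed requestAnimationFrameInterval /
// cancelAnimationFrameInterval pair on top of any one-shot scheduler.
function makeAnimationInterval(schedule) {
  let nextId = 1;
  const active = new Set();

  function requestAnimationFrameInterval(callback) {
    const id = nextId++;
    active.add(id);
    function runner(time) {
      if (!active.has(id)) return;   // interval was cancelled
      try {
        callback(time);
      } catch (e) {
        active.delete(id);           // auto-cancel: a throw stops the interval
        throw e;
      }
      schedule(runner);              // re-register only after success
    }
    schedule(runner);
    return id;
  }

  function cancelAnimationFrameInterval(id) {
    active.delete(id);
  }

  return { requestAnimationFrameInterval, cancelAnimationFrameInterval };
}
```

This removes the bug Jonas describes: the re-registration lives in one
place and never runs when the callback has already thrown.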

/ Jonas



 On Sat, Mar 2, 2013 at 3:15 PM, Glenn Maynard gl...@zewt.org wrote:

 On Sat, Mar 2, 2013 at 5:03 AM, David Bruant bruan...@gmail.com wrote:

 If someone wants to reuse the same function for
requestionAnimationFrame, he/she has to go through:
 requestAnimationFrame(function f(){
 requestAnimationFrame(f);
 // do stuff
 })


 FYI, this pattern is cleaner, so you only have to call
requestAnimationFrame in one place:

 function draw() {
 // render
 requestAnimationFrame(draw);
 }
 draw();

 --
 Glenn Maynard




Re: [webcomponents]: What callbacks do custom elements need?

2013-03-08 Thread Jonas Sicking
On Mar 6, 2013 2:07 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Here are all the callbacks that we could think of:

 * readyCallback (artist formerly known as create) -- called when the
 element is instantiated with generated constructor, createElement/NS
 or shortly after it was instantiated and placed in a tree during
 parser tree construction

 * attributeChangedCallback -- synchronously called when an attribute
 of an element is added, removed, or modified

This will have many of the same problems that mutation events had. I
believe we want to really stay away from synchronous.

So yes, this looks dangerous and crazy :-)
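For contrast, batched (MutationObserver-style) delivery could look roughly
like the sketch below. This is purely illustrative with invented names,
and uses an explicit takeRecords() flush in place of a real microtask
checkpoint.

```javascript
// Queues attribute changes instead of delivering them synchronously, so
// no user code runs in the middle of setAttribute.
class AttributeChangeQueue {
  constructor(callback) {
    this.callback = callback;
    this.records = [];
  }
  // setAttribute would call this; nothing user-visible happens here.
  enqueue(name, oldValue, newValue) {
    this.records.push({ name, oldValue, newValue });
  }
  // The engine would flush at a microtask checkpoint; user code observes a
  // batch of changes, never a half-mutated element mid-operation.
  takeRecords() {
    const batch = this.records;
    this.records = [];
    if (batch.length) this.callback(batch);
    return batch;
  }
}
```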

/ Jonas


Re: Web Storage's Normative References and PR / REC

2013-03-08 Thread Philippe Le Hegaret
On Fri, 2013-03-08 at 18:23 +, Ian Hickson wrote:
 On Thu, 7 Mar 2013, Philippe Le Hegaret wrote:
  
  The goal is to demonstrate that the materials referenced are stable and 
  any change to those references won't have an impact on the 
  recommendations.
 
 What do you mean by "stable"?

If a specification is using an external feature that has been around for
a while, without substantial changes to its name or definition and with
implementation, the likelihood that it will change is drastically lower.
As such, it is considered more stable than a feature with no
implementation or with a definition that changes every 6 months. So
anything that can help mitigate the risk with regards to change to other
specifications is helpful for the evaluation.

  If we find something wrong with a REC, we 
 still need to change it, since otherwise browsers are going to implement 
 things that are wrong...

I agree, and we should look into adding tests as well.

Philippe





Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Scott Miles
Mostly it's cognitive dissonance. It will be easy to trip over the fact
that both things involve a user-supplied prototype, but they are required
to be critically different objects.

Also it's hard for me to justify why this difference should exist. If the
idea is that element provides extra convenience, then why not make the
imperative form convenient? If it's important to be able to do your own
prototype marshaling, then won't this feature be missed in declarative form?

I'm wary of defanging the declarative form completely. But I guess I want
to break it down first before we build it up, if that makes any sense.

Scott



On Fri, Mar 8, 2013 at 9:55 AM, Erik Arvidsson a...@chromium.org wrote:

 If you have a tag name it is easy to get the prototype.

 var tmp = elementElement.ownerDocument.createElement(tagName);
 var prototype = Object.getPrototypeOf(tmp);

 On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
  Currently, if I document.register something, it's my job to supply a
  complete prototype.
 
  For HTMLElementElement on the other hand, I supply a tag name to
 extend, and
  the prototype containing the extensions, and the system works out the
  complete prototype.
 
  However, this ability of HTMLElementElement to construct a complete
  prototype from a tag-name is not provided by any imperative API.
 
  As I see it, there are three main choices:
 
  1. HTMLElementElement is recast as a declarative form of
 document.register,
  in which case it would have no 'extends' attribute, and you need to make
  your own (complete) prototype.
 
  2. We make a new API for 'construct prototype from a tag-name to extend
 and
  a set of extensions'.
 
  3. Make document.register work like HTMLElementElement does now (it
 takes a
  tag-name and partial prototype).
 
  4. Let declarative syntax be a superset of the imperative API.
 
  Can you help me understand why you feel that imperative and
  declarative approaches must mirror each other exactly?
 
  :DG
 



 --
 erik



Re: IndexedDB, what were the issues? Non-reactable.

2013-03-08 Thread rektide
Part 1 - Finding issues, preventing recurrence

This thread started as counter-rabble rousing. This search for problems,
wanting desperately to find some, to hunt for paths for avoiding a
recurrence, has yielded IMO triflingly small results.

Alex's discussion about understanding the state machine, about it not being
exposed yet having rules it expects, rings true with my own small experience
of IndexedDB spec reading. In contrast, my experience engineering
IndexedDB-powered play-toys has been that the spec just works without
complication for my kind of very basic use cases.

And having reviewed it, I don't see a whole lot on the core point, which
wasn't this spec, but how we avoid repeating its mistakes. Having not found
much that constitutes a mistake, this is IMO a hard case to learn from. But
I think that original question was a cultural one, and I'd call out that
question as having gotten buried, alongside my claim that there isn't much
in the way of real actual problems around here.


Part 2 - An issue

I do have one issue I'd raise:

I'd love a more reactive data-store. When something changes, it's more often
the edge (the change) that is interesting than the state or the value. We've
recently added what is IMO the most important advancement in the web,
Observers, and damnit I demand my data-store be observable too: places to
dump bits ought to be on the line to report what bits are being dumped into
them. Developers require all systems to be able to report what's happening,
without re-querying the entire data set and comparing against some separate
cache. This is, IMO, the principal area IndexedDB has failed to touch upon.

I'd prefer an IndexedDB that at a minimum allows multiple active
participants (those holding the data-store open at the time) to see what
changes are being made to the store. I'd further enjoy and relish an
IndexedDB that allowed me to set up persistent event sources that, when
reconnected to, would report on the changes they had been set up to monitor
and log.

This is indeed implementable with a wrapper on top (so was scanning the DOM
for changes, but many reactive systems resorted to .get/.set wrappers much
like this wrapper). I'd like to see some natural reactivity in the spec,
some way for IndexedDB itself to report what is going on inside the store
beyond the scope of schema changes, rather than this being a supplemental
system grafted onto the data-store.
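Such a wrapper might look like the sketch below: a plain-JS observable
store (no IndexedDB involved) that reports each put/delete as an edge, so
consumers never have to re-query and diff the whole data set. Everything
here, names included, is illustrative.

```javascript
// A key-value store that notifies subscribers of every change as it
// happens, instead of forcing them to poll and compare snapshots.
class ObservableStore {
  constructor() {
    this.data = new Map();
    this.observers = new Set();
  }
  // Subscribe to change records; returns an unsubscribe function.
  observe(callback) {
    this.observers.add(callback);
    return () => this.observers.delete(callback);
  }
  _notify(change) {
    for (const cb of this.observers) cb(change);
  }
  put(key, value) {
    const had = this.data.has(key);
    const oldValue = this.data.get(key);
    this.data.set(key, value);
    this._notify({ type: had ? 'update' : 'add', key, oldValue, value });
  }
  delete(key) {
    if (!this.data.has(key)) return;
    const oldValue = this.data.get(key);
    this.data.delete(key);
    this._notify({ type: 'delete', key, oldValue });
  }
}
```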

In the Cassandra world, to pick one random data-store example also
discussing this broad topic, there's CASSANDRA-1311 - Triggers, which would
allow the Cassandra database to signal changes to the store and perhaps
perform actions in response. I think this is an important topic that IMO
has been largely overlooked, and I'd love to see a reactive data store that
could be more readily used in MVC use cases to act as a system of record,
populating and keeping in sync dependent systems.
https://issues.apache.org/jira/browse/CASSANDRA-1311


Part 3 - Thanks

Thanks for reading. I'm so happy to have a data-store in the browser, and I
think the spec as it stands does well what it set out to do, and I look
forward to second passes to make it look aesthetically kinder. Godspeed all.


Fair regards,
m rektide de la faye fowle




Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Scott Miles
I also want to keep ES6 classes in mind. Presumably in declarative form I
declare my class as if it extends nothing. Will 'super' still work in that
case?

Scott


On Fri, Mar 8, 2013 at 11:40 AM, Scott Miles sjmi...@google.com wrote:

 Mostly it's cognitive dissonance. It will be easy to trip over the fact
 that both things involve a user-supplied prototype, but they are required
 to be critically different objects.

 Also it's hard for me to justify why this difference should exist. If the
 idea is that element provides extra convenience, then why not make the
 imperative form convenient? If it's important to be able to do your own
 prototype marshaling, then won't this feature be missed in declarative form?

 I'm wary of defanging the declarative form completely. But I guess I want
 to break it down first before we build it up, if that makes any sense.

 Scott



 On Fri, Mar 8, 2013 at 9:55 AM, Erik Arvidsson a...@chromium.org wrote:

 If you have a tag name it is easy to get the prototype.

 var tmp = elementElement.ownerDocument.createElement(tagName);
 var prototype = Object.getPrototypeOf(tmp);

 On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
  Currently, if I document.register something, it's my job to supply a
  complete prototype.
 
  For HTMLElementElement on the other hand, I supply a tag name to
 extend, and
  the prototype containing the extensions, and the system works out the
  complete prototype.
 
  However, this ability of HTMLElementElement to construct a complete
  prototype from a tag-name is not provided by any imperative API.
 
  As I see it, there are three main choices:
 
  1. HTMLElementElement is recast as a declarative form of
 document.register,
  in which case it would have no 'extends' attribute, and you need to
 make
  your own (complete) prototype.
 
  2. We make a new API for 'construct prototype from a tag-name to
 extend and
  a set of extensions'.
 
  3. Make document.register work like HTMLElementElement does now (it
 takes a
  tag-name and partial prototype).
 
  4. Let declarative syntax be a superset of the imperative API.
 
  Can you help me understand why you feel that imperative and
  declarative approaches must mirror each other exactly?
 
  :DG
 



 --
 erik





[webcomponents]: Custom element constructors are pinocchios

2013-03-08 Thread Dimitri Glazkov
As I started work on the components spec, I realized something terrible:

a) even if all HTML parsers could run script at any point when
constructing tree, and

b) even if all JS engines supported overriding [[Construct]] internal
method on Function,

c) we still can't make custom element constructors run exactly at the
time of creating an element in all cases,

d) unless we bring back element upgrade.

Here's why:

i) when we load component document, it blocks scripts just like a
stylesheet 
(http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

ii) this is okay, since our constructors are generated (no user code)
and most of the tree could be constructed while the component is
loaded.

iii) However, if we make constructors run at the time of tree
construction, the tree construction gets blocked much sooner, which
effectively makes component loading synchronous. Which is bad.

I see two ways out of this conundrum:

1) Give up on custom element constructors ever meeting the Blue Fairy
and becoming real boys, thus making them equivalent to readyCallback

Pros:
* Now that readyCallback and constructor are the same thing, we could
probably avoid a dual-path API in document.register

Cons:
* constructors are not real (for example, when a constructor runs, the
element is already in the tree, with all of the attributes set), so
there is no pure instantiation phase for an element

2) resurrect element upgrade

Pros:
* constructors are real

Cons:
* rejiggering document tree during upgrades will probably eat all (and
then some!) performance benefits of asynchronous load

WDYT?

:DG



Re: Persistent Storage vs. Database

2013-03-08 Thread Kyle Huey
On Fri, Mar 8, 2013 at 11:02 AM, Andrew Fedoniouk n...@terrainformatica.com
 wrote:

 On Thu, Mar 7, 2013 at 10:36 PM, Kyle Huey m...@kylehuey.com wrote:
  On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk
  n...@terrainformatica.com wrote:
 
  At least it is easier than http://www.w3.org/TR/IndexedDB/ :)
 
 
  Easier doesn't necessarily mean better.  LocalStorage is certainly
 easier to
  use than any async storage system ;-)
 

 At least my implementation does not use any events. Proposed
 system of events in IndexedDB is the antipattern indeed. Exactly for
 the same reasons as finalizer *events* you've mentioned above - there
 is no guarantee that all events will be delivered to the code awaiting
 and relying on them.


That's not true at all.  If you don't understand the difference between
finalizers and events you're not going to be able to make a very informed
criticism of IndexedDB.

- Kyle


Re: [webcomponents]: Custom element constructors are pinocchios

2013-03-08 Thread Scott Miles
IMO, there is no benefit to 'real' constructors other than technical
purity, which is no joke, but I hate to use that as a club to beat users
with.

This is strictly anecdotal, but I've played tricks with 'constructor'
before (in old Dojo) and there was much hand-wringing about it, but to my
knowledge there was never even one bug report (insert grain-of-salt here).

The main thing is to try to make sure 'instanceof' is sane.


On Fri, Mar 8, 2013 at 11:27 AM, Dimitri Glazkov dglaz...@google.comwrote:

 As I started work on the components spec, I realized something terrible:

 a) even if all HTML parsers could run script at any point when
 constructing tree, and

 b) even if all JS engines supported overriding [[Construct]] internal
 method on Function,

 c) we still can't make custom element constructors run exactly at the
 time of creating an element in all cases,

 d) unless we bring back element upgrade.

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet (
 http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts
 )

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 I see two ways out of this conundrum:

 1) Give up on custom element constructors ever meeting the Blue Fairy
 and becoming real boys, thus making them equivalent to readyCallback

 Pros:
 * Now that readyCallback and constructor are the same thing, we could
 probably avoid a dual-path API in document.register

 Cons:
 * constructors are not real (for example, when a constructor runs, the
 element is already in the tree, with all of the attributes set), so
 there is no pure instantiation phase for an element

 2) resurrect element upgrade

 Pros:
 * constructors are real

 Cons:
 * rejiggering document tree during upgrades will probably eat all (and
 then some!) performance benefits of asynchronous load

 WDYT?

 :DG



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Dimitri Glazkov
On Fri, Mar 8, 2013 at 10:41 AM, Anne van Kesteren ann...@annevk.nl wrote:
 Components don't directly correlate with custom elements. They are
 just documents that you can load together with your document. With
 things like multi-threaded parser, these are useful on their own, even
 without custom elements.

 Because they don't have an associated browsing context? What other use
 case are you describing here? That seems like a potential problem by
 the way. That subresources from such a document such as img will not
 load because there's no associated browsing context.

That's not the problem, that's a feature :) Think of it as a
template tag for documents.

The author can stash all of the markup that they don't need to render
on loading into components, and then use it when necessary as they
need it.

An easy example: suppose my webapp has multiple states/views that the
user goes through in random order. With components, I can leave the
starting view in master document, and move the rest into (multiple, if
needed) components. As I need the view, I simply grab it and move it
to the master document.

:DG



Re: The need to re-subscribe to requestAnimationFrame

2013-03-08 Thread Florian Bösch
Btw. just as a sidenote, the document in document.requestAnimationFrame
kind of matters. If you're calling it from the document that the canvas
isn't in, then you'll get flickering. That may sound funny, but it's
actually not that far fetched and is a situation you can run into if you're
transferring a canvas to a popup window or iframe. With a requestInterval
kind of function you're pretty much screwed in that case.


On Fri, Mar 8, 2013 at 7:43 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mar 2, 2013 6:32 AM, Florian Bösch pya...@gmail.com wrote:
 
  You can also wrap your own requestAnimationFrameInterval like so:
 
  var requestAnimationFrameInterval = function(callback){
var runner = function(){
  callback();
  requestAnimationFrame(runner);
};
runner();
  }
 
  This will still stop if there's an exception thrown by callback, but it
 lets you write a cleaner invocation like so:
 
  requestAnimationFrameInterval(function(){
// do stuff
  });
 
  It does not give you a way to stop that interval (except throwing an
 exception), but you can add your own if you're so inclined.
 
  Notably, you could not flexibly emulate requestAnimationFrame (single)
 via requestAnimationFrameInterval, so if you're gonna pick one semantic to
 implement, it's the former rather than the latter.

 For what it's worth, this would have been another (maybe better) way to
 address the concern that current spec tries to solve by requiring
 reregistration.

 I.e. we could have defined a

 id = requestAnimationFrameInterval(callback)
 cancelAnimationFrameInterval(id)

 Set of functions which automatically cancel the interval if an exception
 is thrown.

 That reduces the current risk that people write code that reregisters at
 the top, and then has a bug further down which causes an exception to be
 thrown.

 / Jonas

 
 
  On Sat, Mar 2, 2013 at 3:15 PM, Glenn Maynard gl...@zewt.org wrote:
 
  On Sat, Mar 2, 2013 at 5:03 AM, David Bruant bruan...@gmail.com
 wrote:
 
  If someone wants to reuse the same function for
 requestionAnimationFrame, he/she has to go through:
  requestAnimationFrame(function f(){
  requestAnimationFrame(f);
  // do stuff
  })
 
 
  FYI, this pattern is cleaner, so you only have to call
 requestAnimationFrame in one place:
 
  function draw() {
  // render
  requestAnimationFrame(draw);
  }
  draw();
 
  --
  Glenn Maynard
 
 

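
A minimal sketch of the requestAnimationFrameInterval /
cancelAnimationFrameInterval pair Jonas describes above. The names, the
returned handle, and the auto-cancel-on-exception behavior are assumptions
drawn from this thread, not a shipped API; the scheduler is injectable so
the logic can run outside a browser.

```javascript
// Sketch of the proposed interval pair. `raf` defaults to the browser's
// requestAnimationFrame but can be injected (e.g. a fake for testing).
function requestAnimationFrameInterval(callback, raf) {
  raf = raf || (typeof requestAnimationFrame !== 'undefined'
    ? requestAnimationFrame
    : null);
  if (!raf) throw new Error('no frame scheduler available');
  var handle = { cancelled: false };
  function runner(time) {
    if (handle.cancelled) return;
    callback(time); // if this throws, we never re-register below,
    raf(runner);    // so the interval cancels itself on an exception
  }
  raf(runner);
  return handle;
}

function cancelAnimationFrameInterval(handle) {
  handle.cancelled = true;
}
```

Unlike the wrapper earlier in the thread, this returns a handle so the loop
can be stopped explicitly, while keeping the stop-on-exception behavior the
proposal asks for.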


Re: Persistent Storage vs. Database

2013-03-08 Thread Andrew Fedoniouk
On Thu, Mar 7, 2013 at 10:36 PM, Kyle Huey m...@kylehuey.com wrote:
 On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk
 n...@terrainformatica.com wrote:

 Physical commit (write) of objects to storage happens on either
 a) GC cycle or b) on explicit storage.commit() call or on c) VM shutdown.


 Persisting data off a GC cycle (via finalizers or something else) is a
 pretty well known antipattern.[0]

Raymond is right in general when he speaks about the .NET runtime
environment. But the JS computing environment is quite different from
the .NET one - at least the two have different lifespans of VMs and
memory heaps.

In my case persistence is closer to a virtual memory mechanism -
when there is not enough memory, persistable objects get swapped
from the heap to storage. That is quite a widespread pattern, if
we want to use that language.

In any case there is always storage.commit() if you need
guarantees and deterministic storage state.


 At least it is easier than http://www.w3.org/TR/IndexedDB/ :)


 Easier doesn't necessarily mean better.  LocalStorage is certainly easier to
 use than any async storage system ;-)


At least my implementation does not use any events. The proposed
system of events in IndexedDB is indeed an antipattern - for exactly
the same reasons as the finalizer *events* you've mentioned above: there
is no guarantee that all events will be delivered to the code awaiting
and relying on them.

-- 
Andrew Fedoniouk.

http://terrainformatica.com


  - Kyle

 [0] e.g.
 http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-08 Thread Scott González
On Fri, Mar 8, 2013 at 12:03 AM, Bronislav Klučka 
bronislav.klu...@bauglir.com wrote:

 On 7.3.2013 19:54, Scott González wrote:

 Who is killing anything?

 Hi, given
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0676.html
 I've misunderstood your point as advocating against Shadow altogether.


Ok, good to know that this was mostly just a miscommunication.



 2nd is practical: not having to care about the internals, so I do not
 break it by accident from outside. If the only way to work with internals
 is by explicit request for internals and then working with them, but
 without the ability to breach the barrier accidentally, without the
 explicit request directly on the shadow host, this concern is satisfied and
 yes, there will be no clashes except for control naming.


My understanding is that you have to explicitly ask to walk into the
shadow, so this wouldn't happen accidentally. Can someone please confirm or
deny this?


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-08 Thread Scott Miles
Fwiw, I'm still following this thread, but so far Scott G. has been saying
everything I would say (good on ya, brother :P).

 My understanding is that you have to explicitly ask to walk into the
shadow, so this wouldn't happen accidentally. Can someone please confirm or
deny this? 

Confirmed. The encapsulation barriers are there to prevent you from
stumbling into shadow.


On Fri, Mar 8, 2013 at 12:14 PM, Scott González scott.gonza...@gmail.com wrote:

 On Fri, Mar 8, 2013 at 12:03 AM, Bronislav Klučka 
 bronislav.klu...@bauglir.com wrote:

 On 7.3.2013 19:54, Scott González wrote:

 Who is killing anything?

 Hi, given
  http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0676.html
 I've misunderstood your point as advocating against Shadow altogether.


 Ok, good to know that this was mostly just a miscommunication.



  2nd is practical: not having to care about the internals, so I do not
 break it by accident from outside. If the only way to work with internals
 is by explicit request for internals and then working with them, but
 without the ability to breach the barrier accidentally, without the
 explicit request directly on the shadow host, this concern is satisfied and
 yes, there will be no clashes except for control naming.


 My understanding is that you have to explicitly ask to walk into the
 shadow, so this wouldn't happen accidentally. Can someone please confirm or
 deny this?



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Dimitri Glazkov
On Fri, Mar 8, 2013 at 12:22 PM, Steve Orvell sorv...@google.com wrote:
 I also find the name confusing. It's common to use the term 'component' when
 describing the functionality of a custom element.

 What about HTML Modules?

Then we probably need to rename link rel=module for consistency?

:DG



Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Erik Arvidsson
On Fri, Mar 8, 2013 at 2:46 PM, Scott Miles sjmi...@google.com wrote:

 I also want to keep ES6 classes in mind. Presumably in declarative form I
 declare my class as if it extends nothing. Will 'super' still work in that
 case?


If you extend nothing (null) as in:

class Foo extends null {
  m() {
super();
  }
}

super calls will deref null which throws as expected.

Maybe I don't understand what you are asking?



 Scott


 On Fri, Mar 8, 2013 at 11:40 AM, Scott Miles sjmi...@google.com wrote:

 Mostly it's cognitive dissonance. It will be easy to trip over the fact
 that both things involve a user-supplied prototype, but they are required
 to be critically different objects.

 Also it's hard for me to justify why this difference should exist. If the
 idea is that element provides extra convenience, then why not make the
 imperative form convenient? If it's important to be able to do your own
 prototype marshaling, then won't this feature be missed in declarative form?

 I'm wary of defanging the declarative form completely. But I guess I want
 to break it down first before we build it up, if that makes any sense.

 Scott



 On Fri, Mar 8, 2013 at 9:55 AM, Erik Arvidsson a...@chromium.org wrote:

 If you have a tag name it is easy to get the prototype.

 var tmp = elementElement.ownerDocument.createElement(tagName);
 var prototype = Object.getPrototypeOf(tmp);

 On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com
 wrote:
  Currently, if I document.register something, it's my job to supply a
  complete prototype.
 
  For HTMLElementElement on the other hand, I supply a tag name to
 extend, and
  the prototype containing the extensions, and the system works out the
  complete prototype.
 
  However, this ability of HTMLElementElement to construct a complete
  prototype from a tag-name is not provided by any imperative API.
 
  As I see it, there are three main choices:
 
  1. HTMLElementElement is recast as a declarative form of
 document.register,
  in which case it would have no 'extends' attribute, and you need to
 make
  your own (complete) prototype.
 
  2. We make a new API for 'construct prototype from a tag-name to
 extend and
  a set of extensions'.
 
  3. Make document.register work like HTMLElementElement does now (it
 takes a
  tag-name and partial prototype).
 
  4. Let declarative syntax be a superset of the imperative API.
 
  Can you help me understand why you feel that imperative and
  declarative approaches must mirror each other exactly?
 
  :DG
 



 --
 erik






-- 
erik
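
Erik's createElement trick above can be folded into the kind of helper that
choice 2 earlier in the thread asks for: derive the complete prototype for a
tag name, then layer a partial prototype of extensions on top. A hypothetical
sketch (the helper name and shape are mine, not a proposed API):

```javascript
// Build a complete prototype chain from a tag name plus a partial
// prototype of extensions, using the createElement trick above.
function prototypeFromTag(doc, tagName, extensions) {
  var tmp = doc.createElement(tagName);
  var base = Object.getPrototypeOf(tmp);
  var proto = Object.create(base);
  Object.keys(extensions).forEach(function (key) {
    proto[key] = extensions[key];
  });
  return proto;
}
```

In a browser, prototypeFromTag(document, 'button', {...}) yields an object
whose chain bottoms out in HTMLButtonElement.prototype, which is what the
declarative HTMLElementElement form computes for you today.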


Re: [webcomponents]: Custom element constructors are pinocchios

2013-03-08 Thread Jonas Sicking
It seems to me like you might be trying to solve a set of
contradictory requirements:

1. We want to enable implementing existing complex elements using
WebComponents
2. Running scripts in the middle of parsing is unsafe.
3. Exiting parsing for any complex element is slow.
4. We don't want to be unsafe or slow.

The advantage that built-in code is always going to have over code
provided by a page is that built-in code can be trusted to not attempt
to hack the browser.

So I simply don't think there is a way to enable implementing existing
complex elements using WebComponents without making some type of
exceptions for built-in implementations.

I.e. you could still use webcomponents to implement complex elements,
but you'd have to give them additional powers. For example in the form
of being able to run constructors (and maybe attribute mutation
handlers) synchronously.

/ Jonas

On Fri, Mar 8, 2013 at 11:27 AM, Dimitri Glazkov dglaz...@google.com wrote:
 As I started work on the components spec, I realized something terrible:

 a) even if all HTML parsers could run script at any point when
 constructing tree, and

 b) even if all JS engines supported overriding [[Construct]] internal
 method on Function,

 c) we still can't make custom element constructors run exactly at the
 time of creating an element in all cases,

 d) unless we bring back element upgrade.

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet 
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 I see two ways out of this conundrum:

 1) Give up on custom element constructors ever meeting the Blue Fairy
 and becoming real boys, thus making them equivalent to readyCallback

 Pros:
 * Now that readyCallback and constructor are the same thing, we could
 probably avoid a dual-path API in document.register

 Cons:
 * constructors are not real (for example, when a constructor runs, the
 element is already in the tree, with all of the attributes set), so
 there is no pure instantiation phase for an element

 2) resurrect element upgrade

 Pros:
 * constructors are real

 Cons:
 * rejiggering document tree during upgrades will probably eat all (and
 then some!) performance benefits of asynchronous load

 WDYT?

 :DG




Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Steve Orvell
Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
modules. Perhaps package is better?

The name is difficult. My main point is that using components causes
unnecessary confusion.


On Fri, Mar 8, 2013 at 12:24 PM, Dimitri Glazkov dglaz...@google.com wrote:

 On Fri, Mar 8, 2013 at 12:22 PM, Steve Orvell sorv...@google.com wrote:
  I also find the name confusing. It's common to use the term 'component'
 when
  describing the functionality of a custom element.
 
  What about HTML Modules?

 Then we probably need to rename link rel=module for consistency?

 :DG



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Steve Orvell

 Also, it sounds like this specification should be titled Fetching
 components or some such as that's about all it defines.


I also find the name confusing. It's common to use the term 'component'
when describing the functionality of a custom element.

What about HTML Modules?


On Fri, Mar 8, 2013 at 1:19 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Mar 7, 2013 at 11:25 PM, Dimitri Glazkov dglaz...@google.com
 wrote:
  Please look over it. I look forward to your eagle-eyed insights in the
  form of bugs and emails.

 You try to monkey patch the obtain algorithm but in doing so you
 invoke a different fetch algorithm. One which does not expose
 resources as CORS-cross-origin. Also, for rel=component tainted
 resources make no sense, so we should only use No CORS in
 combination with fail.

 Why is Component not simply a subclass of Document? If you already
 have a Document object you might as well use that directly...

 Also, it sounds like this specification should be titled Fetching
 components or some such as that's about all it defines. Can't we just
 put all the component stuff in one specification? I find the whole
 organization quite confusing.


 --
 http://annevankesteren.nl/




Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Dimitri Glazkov
On Fri, Mar 8, 2013 at 12:30 PM, Steve Orvell sorv...@google.com wrote:
 Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
 modules. Perhaps package is better?

 The name is difficult. My main point is that using components causes
 unnecessary confusion.

I understand. Welcome to the 2013 Annual Naming Contest/bikeshed. Rules:

1) must reflect the intent and convey the meaning.
2) link type and name of the spec must match.
3) no biting.

:DG



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Robert Ginda
rel=include ?


On Fri, Mar 8, 2013 at 1:05 PM, Dimitri Glazkov dglaz...@google.com wrote:

 On Fri, Mar 8, 2013 at 12:30 PM, Steve Orvell sorv...@google.com wrote:
  Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
  modules. Perhaps package is better?
 
  The name is difficult. My main point is that using components causes
  unnecessary confusion.

 I understand. Welcome to the 2013 Annual Naming Contest/bikeshed. Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG




Re: Persistent Storage vs. Database

2013-03-08 Thread Andrew Fedoniouk
On Fri, Mar 8, 2013 at 11:30 AM, Kyle Huey m...@kylehuey.com wrote:
 On Fri, Mar 8, 2013 at 11:02 AM, Andrew Fedoniouk
 n...@terrainformatica.com wrote:

 On Thu, Mar 7, 2013 at 10:36 PM, Kyle Huey m...@kylehuey.com wrote:
  On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk
  n...@terrainformatica.com wrote:
 
  At least it is easier than http://www.w3.org/TR/IndexedDB/ :)
 
 
  Easier doesn't necessarily mean better.  LocalStorage is certainly
  easier to
  use than any async storage system ;-)
 

 At least my implementation does not use any events. The proposed
 system of events in IndexedDB is indeed an antipattern - for exactly
 the same reasons as the finalizer *events* you've mentioned above: there
 is no guarantee that all events will be delivered to the code awaiting
 and relying on them.


 That's not true at all.  If you don't understand the difference between
 finalizers and events you're not going to be able to make a very informed
 criticism of IndexedDB.


I would appreciate it if you would define the term `event`; after that we
can discuss it further.

As a matter of common practice there are two types of so-called outbound calls:
1. Synchronous *callbacks*, which are not strictly speaking events - just
   function references: after/before-doing-this-call-that. 'finalized' is
   that kind of callback.
2. Events - these are objects with an event dispatching system associated
   with them. For example, UI events use the capture/bubble dispatching
   system and an event queue. Events operate independently from their
   handlers.

Let's take a look on this example from IndexedDB spec:

var request = indexedDB.open('AddressBook', 15);
 request.onsuccess = function(evt) {...};
 request.onerror = function(evt) {...};

It looks *very* wrong to me:

What should happen if db.open() opens the DB immediately?
Will request.onsuccess be called in this case?
If indexedDB.open is a purely async operation, then
why does it have to be async? What may take time there other
than opening a local file in the worst case? If that request is async,
then when precisely will request.onsuccess be called?

I would understand it if the DB.open call were defined this way:

function onsuccess(evt) {...};
function onerror(evt) {...};

var request = indexedDB.open('AddressBook', 15, onsuccess, onerror );

(so pure callback operation).

But the purpose of such an event-aganza is still not clear to me.

Why not classical:

try {
  request = indexedDB.open('AddressBook', 15 );
} catch(err) { ... }

?

In principle: what operations in IndexedDB may
take so long that they need to be async?

The size of a client-side DB has, and always will have, some
reasonable cap, and so any lookup or
update operation in an indexed DB will take some finite
and *very* short time (compared with the time needed
to relayout an average page).

Why are these strange events there at all?
And why does all this have to be so complex?


Andrew Fedoniouk.

http://terrainformatica.com
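
For what it's worth, the timing question above ("what if open() succeeds
immediately?") has a conventional answer: request results are delivered from
a later task, never synchronously, so handlers attached right after the call
are never missed. A mock request illustrating that contract - this is an
illustration of the timing rule, not the real IDBFactory:

```javascript
// Deliver the result asynchronously even when the work is instantaneous,
// so the caller always has a chance to attach onsuccess/onerror first.
function makeRequest(work) {
  var request = { onsuccess: null, onerror: null, result: undefined };
  Promise.resolve().then(function () {
    try {
      request.result = work();
      if (request.onsuccess) request.onsuccess({ target: request });
    } catch (err) {
      if (request.onerror) request.onerror({ target: request, error: err });
    }
  });
  return request;
}
```

Because delivery is deferred, `request.onsuccess = ...` written on the line
after `open()` is guaranteed to run, whether the open takes a microsecond or
a second - that is the guarantee the event shape buys.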



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Bronislav Klučka
yes, it actually is a document related to the current document... does not
seem confusing to me at all,

but I can go with fragment or stub as well :]

B.


On 8.3.2013 22:25, Dimitri Glazkov wrote:

On Fri, Mar 8, 2013 at 1:15 PM, Scott Miles sjmi...@google.com wrote:

Agree. Seems like Dimitri and Anne decided that these targets are
'document', did they not?

rel=document seems to communicate that the relation of the linked
resources to the document is document, which is at least cyclical if
not totally wrong :)

:DG






Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Dimitri Glazkov
On Fri, Mar 8, 2013 at 1:15 PM, Scott Miles sjmi...@google.com wrote:
 Agree. Seems like Dimitri and Anne decided that these targets are
 'document', did they not?

rel=document seems to communicate that the relation of the linked
resources to the document is document, which is at least cyclical if
not totally wrong :)

:DG



Re: The .shadowRoot property and WebComponents

2013-03-08 Thread Dimitri Glazkov
On Fri, Mar 8, 2013 at 1:13 PM, Jonas Sicking jo...@sicking.cc wrote:
 Related to the ongoing discussion about whether to expose the shadow
 tree of web components by default or not, but somewhat orthogonal to
 it, I think there is a question of *how* to expose the web component
 shadow tree.

 If I understand things correct, the .shadowRoot property and the
 createShadowRoot function behaves very different on elements that have
 a shadow tree attached through the use of WebComponents, compared to
 if it doesn't.

There's no such distinction as far as I know, but maybe I am not
seeing something. By WebComponents, do you mean custom elements?


 With an element with no attached web component, a page can rely on the
 fact that it can use .createShadowRoot in order to attach its own
 custom shadow root to an element. And it can rely on that the
 .shadowRoot property is null if it hasn't called .createShadowRoot and
 returns the shadow root created using createShadowRoot otherwise.

The custom element should never rely on this. .shadowRoot returns the
youngest shadow root, and not necessarily the shadow root that you got
when calling .createShadowRoot. For example, an extension could have
created a shadowRoot even for a custom element.


 But if a webcomponent has attached a shadow tree, then the .shadowRoot
 and createShadowRoot API suddenly behaves differently.

I am trying, but still failing to see the difference. Can you help me
understand it a bit better?


 I think there's value in enabling authors to always use .shadowRoot
 and createShadowRoot in order to attach a page level shadow tree to
 an element, and that that should work independently of if a web
 component also has attached a shadow tree.

Shadow trees are not coupled with custom elements in any way. They're
just a DOM API custom elements could use.

 If there's shadow tree attached using both createShadowRoot and using
 web components, then the two extend each other using the shadow
 element the same way that multiple shadow trees attached using web
 components do.

Now I am lost :) Again, there's no distinction between createShadowRoot usage.


 So for the cases when a web component chooses to expose its shadow
 tree, it should do so using some other API than .shadowRoot.

 Another way to look at it is that for a web component that chooses
 *not* to expose its shadow tree, the .shadowRoot property should still
 be useable and show no signs of there being a shadow tree attached
 through WebComponents.

Can you help me understand what you meant by attached through
WebComponents? Perhaps this is where the dog is buried.

:DG



The .shadowRoot property and WebComponents

2013-03-08 Thread Jonas Sicking
Related to the ongoing discussion about whether to expose the shadow
tree of web components by default or not, but somewhat orthogonal to
it, I think there is a question of *how* to expose the web component
shadow tree.

If I understand things correct, the .shadowRoot property and the
createShadowRoot function behaves very different on elements that have
a shadow tree attached through the use of WebComponents, compared to
if it doesn't.

With an element with no attached web component, a page can rely on the
fact that it can use .createShadowRoot in order to attach its own
custom shadow root to an element. And it can rely on that the
.shadowRoot property is null if it hasn't called .createShadowRoot and
returns the shadow root created using createShadowRoot otherwise.

But if a webcomponent has attached a shadow tree, then the .shadowRoot
and createShadowRoot API suddenly behaves differently.

I think there's value in enabling authors to always use .shadowRoot
and createShadowRoot in order to attach a page level shadow tree to
an element, and that that should work independently of if a web
component also has attached a shadow tree.

If there's shadow tree attached using both createShadowRoot and using
web components, then the two extend each other using the shadow
element the same way that multiple shadow trees attached using web
components do.

So for the cases when a web component chooses to expose its shadow
tree, it should do so using some other API than .shadowRoot.

Another way to look at it is that for a web component that chooses
*not* to expose its shadow tree, the .shadowRoot property should still
be useable and show no signs of there being a shadow tree attached
through WebComponents.

/ Jonas



Re: File API: Blob.type

2013-03-08 Thread Glenn Maynard
On Fri, Mar 8, 2013 at 3:43 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Mar 7, 2013 at 6:35 PM, Arun Ranganathan
 aranganat...@mozilla.com wrote:
  But I'm not sure about why we'd choose ByteString in lieu of being strict
  with what characters are allowed within DOMString.  Anne, can you shed
 some
  light on this?  And of course we should eliminate CR + LF as a
 possibility
  at constructor invocation time, possibly by throwing.

 MIME/HTTP consists of byte sequences, not code points. ByteString is a
 basic JavaScript string with certain restrictions on it to match the
 byte sequence semantics, while still behaving like a string.


MIME types are definitely strings of codepoints.  They're just strings.  We
wouldn't make script type or style type a ByteString.

And again, ByteString doesn't have anything to do with preventing CR/LF
from entering Blob.type.  You can still put CR/LF in ByteString, it does
nothing to solve the problem raised here.

ByteString is just a hack to deal with the unpleasant legacy of XHR not
encoding and decoding header text.  Don't leak that mess into Blob.  All
that's needed is the simple check I mentioned earlier (and similar
filtering in other places @type can be sourced from, if there are any other
places this could happen).

-- 
Glenn Maynard
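
A sketch of the "simple check" flavor of filtering discussed above: reject a
type containing anything outside printable ASCII, which rules out CR and LF
along the way. Treat the exact policy here (blank the whole string, then
lowercase) as an assumption for illustration, not the spec algorithm:

```javascript
// Keep type strings within U+0020..U+007E; anything else (including
// CR, LF, and non-ASCII code points) invalidates the whole value.
function normalizeBlobType(type) {
  if (!/^[\u0020-\u007E]*$/.test(type)) return '';
  return type.toLowerCase();
}
```

With a check like this at constructor invocation time, a header-splitting
payload in Blob.type never survives to reach the networking layer.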


security model of Web Components, etc. - joint work with WebAppSec?

2013-03-08 Thread Hill, Brad
WebApps WG,

  I have been following with interest (though with less time to give it the 
attention I wish) the emergence of Web Components and related specifications. 
(HTML Templates, Shadow DOM, etc.)

 I  wonder if it would be a good time to start discussing the security model 
jointly with the WebAppSec WG, both on list, and possibly at the upcoming F2F 
in April?

  One of our goals in WebAppSec is that a mashup web of re-usable and 
composable pieces be possible to do securely. An example anti-pattern in this 
area is the widely deployed script src=someothersite.com/canOwnYou.js 
pattern for things like analytics, social widgets and social login.  This 
pattern makes the Web more brittle, such as the "Facebook broke the Internet"
bug recently when a script error in Facebook Connect redirected a huge chunk of 
the Web to a Facebook error page.   We security folks that work in both the web 
apps and PKI areas stay awake at night worrying about bad guys getting a 
certificate for Google Analytics or Omniture and XSS-ing 90% of the Web.

  I don't see much in these specs or via a quick search of the list archives on 
the security models for the new Web Component and Shadow DOM type integration 
models when they involve foreign components.  There is some level of isolation 
implied, but I hope there is interest in defining what, if any, the security 
guarantees of such are and how we might make this kind of composition more 
pleasant and useful than a sandboxed iframe, but still robust against errors or 
attacks such that popular components don't become single points of failure for 
the entire Web.

Thanks,

Brad Hill
Co-Chair, WebAppSec


Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Scott Miles
Agree. Seems like Dimitri and Anne decided that these targets are
'document', did they not?

Scott


On Fri, Mar 8, 2013 at 1:12 PM, Bronislav Klučka 
bronislav.klu...@bauglir.com wrote:

 hi
 let's apply KISS here
 how about just
 rel=document
 or
 rel=htmldocument

 Brona


 On 8.3.2013 22:05, Dimitri Glazkov wrote:

 On Fri, Mar 8, 2013 at 12:30 PM, Steve Orvell sorv...@google.com wrote:

 Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
 modules. Perhaps package is better?

 The name is difficult. My main point is that using components causes
 unnecessary confusion.

 I understand. Welcome to the 2013 Annual Naming Contest/bikeshed. Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG







Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Bronislav Klučka

hi
let's apply KISS here
how about just
rel=document
or
rel=htmldocument

Brona

On 8.3.2013 22:05, Dimitri Glazkov wrote:

On Fri, Mar 8, 2013 at 12:30 PM, Steve Orvell sorv...@google.com wrote:

Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
modules. Perhaps package is better?

The name is difficult. My main point is that using components causes
unnecessary confusion.

I understand. Welcome to the 2013 Annual Naming Contest/bikeshed. Rules:

1) must reflect the intent and convey the meaning.
2) link type and name of the spec must match.
3) no biting.

:DG







Re: Persistent Storage vs. Database

2013-03-08 Thread Glenn Maynard
On Fri, Mar 8, 2013 at 12:36 AM, Kyle Huey m...@kylehuey.com wrote:

 On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk 
 n...@terrainformatica.com wrote:

 Physical commit (write) of objects to storage happens on either
 a) GC cycle or b) on explicit storage.commit() call or on c) VM shutdown.


 Persisting data off a GC cycle (via finalizers or something else) is a
 pretty well known antipattern.[0]


Correct, but just to be clear, there are other ways to get a similar
effect.  In particular, you can queue a task, add a job to the global
script clean-up jobs list, or use a microtask.
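
The queue-a-task/microtask alternative mentioned above can be sketched as follows. This is only an illustration: `store`, `dirty`, and `set` are hypothetical names, and `queueMicrotask` stands in for whichever scheduling hook an implementation would actually use. The flush runs at the end of the current job, before any timer fires, with no reliance on GC timing.

```javascript
// Sketch: flushing dirty state from a microtask instead of a finalizer.
// `store`, `dirty`, and `set` are hypothetical names for illustration.
const order = [];
const store = { commit(keys) { order.push("commit:" + keys.join(",")); } };
const dirty = new Set();

function set(key) {
  dirty.add(key);
  // Schedule exactly one flush at the end of the current job: it runs
  // before any macrotask (timer, I/O), with no reliance on GC timing.
  if (dirty.size === 1) {
    queueMicrotask(() => {
      store.commit([...dirty]);
      dirty.clear();
    });
  }
}

set("a");
set("b");               // coalesced into the same flush
order.push("sync end");
setTimeout(() => {
  order.push("timer");
  // order is now ["sync end", "commit:a,b", "timer"]
}, 0);
```

Note that both writes coalesce into a single commit, which a finalizer-based design cannot guarantee.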

At least it is easier than http://www.w3.org/TR/IndexedDB/ :)


Note that if you see TR in a URL, you're probably looking at an old,
obsolete spec.  This one is almost a year out of date.  Click the editor's
draft link at the top to get to the real spec.
https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

On Fri, Mar 8, 2013 at 1:02 PM, Andrew Fedoniouk
n...@terrainformatica.comwrote:

 At least my implementation does not use any events. The proposed
 system of events in IndexedDB is indeed an antipattern. Exactly for
 the same reasons as the finalizer *events* you've mentioned above - there
 is no guarantee that all events will be delivered to the code awaiting
 and relying on them.


This is wrong.  Events are not an antipattern (calling things
antipatterns seems a bit of a fad these days), and they're certainly not
a proposal.  It's the standard, well-established API on the Web, used
broadly across the whole platform.

-- 
Glenn Maynard


Re: Persistent Storage vs. Database

2013-03-08 Thread Jonas Sicking
On Fri, Mar 8, 2013 at 2:27 PM, Andrew Fedoniouk
n...@terrainformatica.com wrote:
 On Fri, Mar 8, 2013 at 11:30 AM, Kyle Huey m...@kylehuey.com wrote:
 On Fri, Mar 8, 2013 at 11:02 AM, Andrew Fedoniouk
 n...@terrainformatica.com wrote:

 On Thu, Mar 7, 2013 at 10:36 PM, Kyle Huey m...@kylehuey.com wrote:
  On Thu, Mar 7, 2013 at 10:20 PM, Andrew Fedoniouk
  n...@terrainformatica.com wrote:
 
  At least it is easier than http://www.w3.org/TR/IndexedDB/ :)
 
 
  Easier doesn't necessarily mean better.  LocalStorage is certainly
  easier to
  use than any async storage system ;-)
 

 At least my implementation does not use any events. The proposed
 system of events in IndexedDB is indeed an antipattern. Exactly for
 the same reasons as the finalizer *events* you've mentioned above - there
 is no guarantee that all events will be delivered to the code awaiting
 and relying on them.


 That's not true at all.  If you don't understand the difference between
 finalizers and events you're not going to be able to make a very informed
 criticism of IndexedDB.


 I would appreciate it if you would define the term `event`. After that
 we can discuss it further.

https://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#interface-event

 As a matter of common practice, there are two types of so-called
 outbound calls:
 1. Synchronous *callbacks*, which are not strictly speaking events -
    just function references: after/before-doing-this-call-that.
    'finalized' is that kind of callback.
 2. Events, which are objects with an event dispatching system
    associated with them. For example, UI events use the capture/bubble
    dispatching system and an event queue. Events operate independently
    from their handlers.

 Let's take a look on this example from IndexedDB spec:

 var request = indexedDB.open('AddressBook', 15);
 request.onsuccess = function(evt) {...};
 request.onerror = function(evt) {...};

 It looks *very* wrong to me:

 What should happen if db.open() opens the DB immediately?

The event fires asynchronously. Since JS is single threaded that means
that the onsuccess and onerror properties will be set before the event
is fired.

 Will request.onsuccess be called in this case?

Yes.

 If indexedDB.open is a purely async operation, then
 why does it have to be strictly async?

Because it requires IO.

 What may take time there other
 than opening a local file in the worst case? If that request is async,
 then when precisely will request.onsuccess be called?

 I would understand it if the DB.open call were defined this way:

 function onsuccess(evt) {...};
 function onerror(evt) {...};

 var request = indexedDB.open('AddressBook', 15, onsuccess, onerror );

 (so pure callback operation).

If I write

var x = 0;
function onsuccess(evt) { x = 1 };
function onerror(evt) {...};
var request = indexedDB.open('AddressBook', 15, onsuccess, onerror );
console.log(x);

would you expect the console to sometimes show 1 and sometimes show 0?
If not, why not?
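
The run-to-completion guarantee behind this question can be seen with a toy stand-in for the request object (`fakeOpen` below is an illustrative sketch, not the real IndexedDB API): even when the underlying open completes immediately, the success event is dispatched from a later task, so handlers assigned after open() returns are always registered in time.

```javascript
// Toy model of the request/event pattern; `fakeOpen` is an illustrative
// stand-in, not the real IndexedDB API.
function fakeOpen() {
  const request = { onsuccess: null, onerror: null };
  // Even if the underlying open completes immediately, dispatch the
  // success event from a later task, never synchronously.
  setTimeout(() => {
    if (request.onsuccess) request.onsuccess({ target: request });
  }, 0);
  return request;
}

let x = 0;
const request = fakeOpen();
request.onsuccess = function (evt) { x = 1; }; // assigned after open() returns
console.log(x); // always logs 0: the running script finishes first
setTimeout(() => { console.log(x); }, 10); // logs 1: the event has fired
```

Because JS is single threaded, the synchronous log can never observe 1, and the handler can never be missed.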

/ Jonas



Re: Streams and Blobs

2013-03-08 Thread Jonas Sicking
On Fri, Mar 8, 2013 at 7:52 AM, Glenn Maynard gl...@zewt.org wrote:
 On Thu, Mar 7, 2013 at 9:40 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Mar 7, 2013 at 4:42 PM, Glenn Maynard gl...@zewt.org wrote:
  The alternative argument is that XHR should represent the data source,
  reading data from the network and pushing it to Stream.

 I think this is the approach I'd take. At least in Gecko this would
 allow the XHR code to generally do the same thing it does today with
 regards to actions taken on incoming network data. The only thing we'd
 do differently is which consumer to send the data to. We already have
 several such consumers which are used to implement the different
 .responseType modes, so adding another one fits right in with that
 model.


 But what about the issues I mentioned (you snipped them)?  We would be
 introducing overlap between XHR and every consumer of URLs
 (HTMLImageElement, HTMLVideoElement, CSS loads, CSS subresources, other
 XHRs), which could each mean all kinds of potential script-visible interop
 subtleties.

As long as we define the order between when data is going into the
Stream, and when the events are fired on the XHR object, I think that
takes care of these issues.

 Some more issues:

 - What happens if you do a sync XHR?  It would block forever, since you'll
 never see the Stream in time to hook it up to a consumer.  You don't want to
 just disallow this, since then you can't set up streams synchronously at
 all.  With the "XHR finishes immediately" model, this is straightforward:
 XHR returns as soon as the headers are finished, giving you the Stream to do
 whatever you need with.

Sync XHR already can't use .responseType, so there is no way for sync
XHR to return a Stream object. We should put the same restriction on
Sync XHR accepting a Stream as a request body.

 - What if you create an async XHR, then hook it up to a sync XHR?  Async XHR
 only does work during the event loop, so this would deadlock (the async XHR
 would never run to feed data to the sync one).

Same as above.

 - You could set up an async XHR in one worker, then read it synchronously
 with XHR in another worker.  This means the first worker could block the
 second worker at will, eg. by running a blocking operation during an
 onprogress event, to prevent returning to the event loop.  I'm sure we don't
 want to allow that (at least without careful thought, eg. the synchronous
 messaging idea).

This is a good point. We probably shouldn't allow sync XHR in workers
either to accept or produce Stream objects.

 From an author point of view it also means that the XHR object behaves
 consistently for all .responseTypes. I.e. the same set of events are
 fired and the XHR object goes through the same set of states. The only
 difference is in how the data is consumed.

 It would be less consistent, not more.

 With the supply-the-stream-and-it's-done model, XHR follows the same model
 it normally does: you start a request, XHR does some work, and onload is
 fired once the result is ready for you to use.

This is not correct. All of .response, .responseText and .responseXML
are often available well before that.

 With the runs-for-the-duration-of-the-stream model, when is the .response
 available?

Ideally as soon as .send() is called. If that causes problems then
maybe as soon as we enter readyState 3.

/ Jonas