Re: [clipboard events] seeking implementor feedback on using CID: URI scheme for pasting embedded binary data
> Hi Hallvord!

Hi Ben! Thanks for responding to my request for feedback - especially since the IE team has done some interesting work in this area and is arguably ahead of the rest! :-)

> The IE11 API you mentioned is msConvertURL [1] (also on the IE blog [2]), and it was designed as a simple way for sites to choose DataURI or Blob for otherwise inaccessible images. As you mention, it is not possible to tell which file/image corresponds to which img because it's really designed as a simple approach for cases where a site wants to always use blob or dataURI for images that they couldn't otherwise access.

That's more or less what I thought, based on the blog post, so thanks for describing it in detail.

> We are considering doing the CID approach as well in the future. It is nice to have the additional control of seeing which img src you are changing, and it will likely work better for copy, not just paste like convertURL.

Actually, I haven't truly considered the copy case here yet. I've sort of assumed that, given that you can put multiple bits of data on a clipboard, the various clipboard implementations should already have a way for one piece of data to reference one specific other piece of data - I haven't really found the technical details here. Of course it would be nice to support a script that wants to generate random HTML with embedded files to place on the clipboard (although I think most of those use cases can already be met by using URLs and assuming that any software reading HTML from the clipboard can understand URLs). However, one can imagine a use case, for example with a CANVAS app, where the script wants to copy the state of the CANVAS as an image inside HTML it places on the clipboard - having the script create src=cid:n type markup, append files, and make the UA translate that to the platform's native clipboard implementation's way of referencing one part on the clipboard from another part.
> We believe that convertURL does not block using the CIDs you have in the current spec.

I suppose not, but perhaps the more relevant question is: should we standardise convertURL? Would it still have a use case if we take the cid: route? (And I guess a related question is: given that we've done data: URLs for a while, how much content will we break if, say, Firefox moves from data: to cid:? Do we need to make cid: opt-in somehow, like you're doing with convertURL?)

> To better understand your approach and allow us to help move it forward, can you give us sample javascript that a site would use to set the DataTransferItems for HTML and the related images during copy?

As I said, I have not really considered this use case - so the spec doesn't actually cover this. If we want to make this work, I suppose the JS would look somewhat like this?

document.addEventListener('copy', function(e){
  // So, you want to copy the current status of your game? No problem.
  // Let's say this game generates a number of PNG graphics from CANVAS tags
  // and keeps them in memory in ArrayBuffers or something
  var html = '<div><b>player</b>\'s medals: <img src="cid:1"><img src="cid:2"></div>';
  e.clipboardData.items.add(html, 'text/html');
  e.clipboardData.items.add(new File([medals[0]], 'medal.png', {type:'image/png'}));
  e.clipboardData.items.add(new File([medals[1]], 'medal.png', {type:'image/png'}));
  e.preventDefault();
}, false);

> Second, can you provide the javascript for how a site would put them into the pasted markup during paste?

The way I thought this would work is that the site starts XHR uploads from the paste processing, and shows some intermediate 'loading' animation or something before it gets the final URLs back from the server.
A bit like this (although some things could be more elegant, like the insertion of the data, which needs to take cursors and selections into account): http://jsfiddle.net/2Qqrp/

Thinking about it, it may be considered somewhat wasteful (or exceedingly slow - if, for example, the embedded data is a video) to do it this way - but it quickly gets complex and/or confusing if we have some "show this local file until it's uploaded, then reference the online file instead" magic..?

-Hallvord
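For what it's worth, the placeholder-then-final-URL swap described above can stay plain string rewriting. The helper below is a hypothetical sketch (the name, regex, and calling convention are mine, not from the spec): call it once to point a cid: reference at a "loading" placeholder, and again with the server URL when the upload finishes.

```javascript
// Hypothetical helper for the paste flow discussed above. Handles quoted
// and unquoted src attributes; the (?!\d) guard stops cid:1 from also
// matching cid:12.
function replaceCid(html, index, url) {
  var pattern = new RegExp('src=(["\']?)cid:' + index + '\\1(?!\\d)', 'g');
  return html.replace(pattern, 'src=$1' + url + '$1');
}
```

A site would run this over the string returned by clipboardEvent.getData('text/html') before inserting it into the editable region.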
RE: [clipboard events] seeking implementor feedback on using CID: URI scheme for pasting embedded binary data
Hi Hallvord!

The IE11 API you mentioned is msConvertURL [1] (also on the IE blog [2]), and it was designed as a simple way for sites to choose DataURI or Blob for otherwise inaccessible images. We default to DataURI to be interoperable with Firefox's current design of always doing DataURI for local images on the clipboard. With a little bit of code, blobs can be used instead. Our clipboardData.files implementation contains only images at the moment because we have only partial support of the current version of the Clipboard API spec. This means that with a quick 'for' loop over those images, javascript can choose to create blobs, manage the blobs' memory, and upload the blobs to a server.

As you mention, it is not possible to tell which file/image corresponds to which img because it's really designed as a simple approach for cases where a site wants to always use blob or dataURI for images that they couldn't otherwise access. We are considering doing the CID approach as well in the future. It is nice to have the additional control of seeing which img src you are changing, and it will likely work better for copy, not just paste like convertURL. We believe that convertURL does not block using the CIDs you have in the current spec.

To better understand your approach and allow us to help move it forward, can you give us sample javascript that a site would use to set the DataTransferItems for HTML and the related images during copy? Second, can you provide the javascript for how a site would put them into the pasted markup during paste?

Regarding a couple of other questions you ask:
* As far as I've seen, WebKit/Blink do not yet support images with local-system sources. They do support binary images on the clipboard using ClipboardData.items, as in the example I found online [3].
* Some sites prefer DataURI to Blob because it's all inline and doesn't require sending separate objects or managing memory, so I don't think DataURI is something we should discount.
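The "quick 'for' loop" mentioned above is easy to picture. Here is a rough, hypothetical sketch of the per-image decision a site might make; the 64 KB threshold and all helper names are invented for illustration, and a real page would iterate e.clipboardData.files and use FileReader.readAsDataURL or URL.createObjectURL on the File objects rather than Buffer.

```javascript
// Sketch: inline small clipboard images as data: URIs, leave larger ones
// for blob handling. Purely illustrative; Buffer stands in for FileReader
// so the sketch runs outside a page.
const INLINE_LIMIT = 64 * 1024; // invented threshold

function toDataURL(bytes, mime) {
  return 'data:' + mime + ';base64,' + Buffer.from(bytes).toString('base64');
}

function classifyImages(files) {
  // files: [{ bytes: Uint8Array, type: string }, ...]
  return files.map(function (f) {
    return f.bytes.length <= INLINE_LIMIT
      ? { kind: 'dataURI', src: toDataURL(f.bytes, f.type) }
      : { kind: 'blob' }; // would be URL.createObjectURL(f) in a page
  });
}
```

The point of the split is the trade-off discussed in this thread: data: URIs avoid object lifetime management, blobs avoid ballooning the markup.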
Looking forward to seeing your sample code!

Ben Peters

[1] http://msdn.microsoft.com/en-us/library/ie/dn254951(v=vs.85).aspx
[2] http://blogs.msdn.com/b/ie/archive/2013/10/24/enhanced-rich-editing-experiences-in-ie11.aspx
[3] http://strd6.com/2011/09/html5-javascript-pasting-image-data-in-chrome/

-----Original Message-----
From: Hallvord R. M. Steen [mailto:hst...@mozilla.com]
Sent: Thursday, January 23, 2014 5:30 PM
To: public-webapps
Subject: [clipboard events] seeking implementor feedback on using CID: URI scheme for pasting embedded binary data

Hi, pasting HTML that contains embeds (images, video) into a rich text editor is a use case we should cover. It's currently handled in different ways:

* IE11 supports pasting images as either data: URLs or blobs [1] (and has a non-standard method to fill in a gap in the blob approach). I don't understand from this blog post how/if it supports referencing the binary parts from the HTML. If, for example, you paste a snippet from a Word page that contains two images, the DataTransferItemList is presumably populated with two image files, which can be processed/uploaded using the blob method - but how is the script processing the data supposed to know which IMG tag in the pasted HTML each image file belongs to?
* Pasting stuff as data: URLs seems like a hack, wasting memory and requiring quite some extra processing if there is a lot of data.
* Firefox apparently happily passes on file:/// URLs with local paths and all [2]; this is of course a bug.
* Right now I'm not sure what WebKit/Blink-based implementations do. Test results welcome!

As the editor of the Clipboard Events spec, I'm proposing a somewhat different take on this: cid: URIs for embeds. See http://dev.w3.org/2006/webapi/clipops/clipops.html (search for cid:).
The idea is that rather than embedding potentially very huge data: URLs or referencing local files in the embedded markup, we add a reference to the DataTransferItemList, and use the index of this reference to construct a cid: URI in the markup that clipboardEvent.getData('text/html') will see. The script processing this data can then pull out the cid: URIs, do drag-and-drop style file uploads for the referenced clipboard parts, and eventually update the data to refer to the locations on the server (maybe first using an intermediate placeholder image or something like that).

AFAIK, outside of its use in HTML intended for e-mail, this would be the first usage of cid: URIs in web platform specs. I'm looking for feedback regarding whether this is implementable and a good solution. I haven't had much (if any) feedback from implementors on this issue yet, so thank you all in advance for your ideas and input.

-Hallvord

[1] http://blogs.msdn.com/b/ie/archive/2013/10/24/enhanced-rich-editing-experiences-in-ie11.aspx
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=665341
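To make the proposed processing concrete, the "pull out the cid: URIs" step could be as simple as the following hypothetical helper (the name and regex are illustrative, not from the spec):

```javascript
// Collects the clipboard-item indices referenced from pasted HTML, so
// each cid: part can be looked up in the DataTransferItemList and
// uploaded. Matches quoted and unquoted src attributes.
function collectCidIndices(html) {
  const indices = [];
  const re = /src=["']?cid:(\d+)/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    indices.push(Number(m[1]));
  }
  return indices;
}
```

Each index would then be matched against the DataTransferItemList entry it names, uploaded, and rewritten to the server URL.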
[progress-events] Progress Events is a W3C Recommendation
Congratulations to Anne, Jungkee and Chaals on the publication of a Progress Events Recommendation http://www.w3.org/TR/2014/REC-progress-events-20140211/ . (I updated this spec's PubStatus data to state that no additional work is planned and that the feature is now part of XHR.)
Re: Extending Mutation Observers to address use cases of
On 02/12/2014 04:27 AM, Ryosuke Niwa wrote:
> On Feb 11, 2014, at 6:06 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
>> * Olli Pettay wrote: We could add some scheduling thing to mutation observers. By default we'd use microtask, since that tends to be good for various performance reasons, but normal tasks or nanotasks could be possible too.
>> Right, we need some sort of a switch. I'm not certain if we want to add it as a per-observation option or a global switch when we create an observer. My gut feeling is that we want the latter.
> It would be weird for some mutation records to be delivered earlier than others to the same observer.

Yeah, I was thinking per observer. Something like:

var m = new MutationObserver(callback, { interval: 'task' });
m.observe(document, { childList: true, subtree: true });

Some devtools devs have asked for adding an 'interval: nanotask' thing. I was thinking to add such a thing only for addons and such in Gecko, because it brings back some of the performance problems Mutation Events have. But if web components stuff would be less special with such an option, perhaps it should be enabled for all.

-Olli

> I'd like to know exact semantics requirements before we start jumping into details though.
>> This sounds like adding a switch that would dynamically invalidate assumptions mutation observers might make, which sounds like a bad idea.
> Could you elaborate? I don't really follow what the problem is. Could you elaborate on what you see as a problem?
> - R. Niwa
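The difference between the proposed 'interval' values can be modelled in userland. The sketch below is purely illustrative (the RecordQueue class and option names are mine - no such MutationObserver option exists): it batches records and flushes either at the end of the current microtask, which is the default delivery timing, or in a later task.

```javascript
// Userland model of per-observer delivery timing for the hypothetical
// { interval: 'task' } option discussed in this thread.
class RecordQueue {
  constructor(callback, options) {
    this.callback = callback;
    this.interval = (options && options.interval) || 'microtask';
    this.records = [];
    this.scheduled = false;
  }
  enqueue(record) {
    this.records.push(record);
    if (this.scheduled) return;
    this.scheduled = true;
    const flush = () => {
      this.scheduled = false;
      this.callback(this.records.splice(0)); // deliver the whole batch
    };
    if (this.interval === 'task') {
      setTimeout(flush, 0);           // task timing: a later event-loop turn
    } else {
      Promise.resolve().then(flush);  // microtask: before returning to the event loop
    }
  }
}
```

With two queues observing the same mutations, a 'microtask' queue always delivers before a 'task' queue, which is exactly the ordering concern raised above.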
Re: [progress-events] Progress Events is a W3C Recommendation
Thanks Art. The work is attributed to Anne, Chaals and Ms2ger. Thanks!

On Feb 12, 2014 9:28 PM, Arthur Barstow art.bars...@nokia.com wrote:
> Congratulations to Anne, Jungkee and Chaals on the publication of a Progress Events Recommendation http://www.w3.org/TR/2014/REC-progress-events-20140211/ . (I updated this spec's PubStatus data to state that no additional work is planned and that the feature is now part of XHR.)
Re: Officially deprecating main-thread synchronous XHR?
On Sat, 8 Feb 2014, at 12:19, James Greene wrote:
> There are certain situations where sync XHRs are, in fact, required... unless we make other accommodations. For example, in the Clipboard API, developers are allowed to inject into the clipboard as a semi-trusted event during the event handling phase of certain user-initiated events (e.g. `click`).[1] This has not been implemented in any browsers yet. However, if browser vendors choose to treat this scenario as it is treated for Flash clipboard injection, then the semi-trusted state ends after the default action for that event would occur.[2] For Flash clipboard injection, this means that any required on-demand XHRs must be resolved synchronously. For the DOM Clipboard API, it would be nice to either still be able to use sync XHRs, or else we would need to specially authorize async XHRs that are started during the semi-trusted state to have their completion handlers also still resolve/execute in a semi-trusted state.

Couldn't the semi-trusted state be kept for any promise created while the semi-trusted state is set? In other words, promises could keep the semi-trusted state along the chain. Though, as Olli said, this is something the Clipboard API specification should fix.

-- Mounir
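The idea of carrying the semi-trusted state along a promise chain could be modelled in userland roughly like this. This is entirely hypothetical - no engine has such a mechanism, and all names are invented - but it shows the shape of the proposal: capture a token when the user-initiated event fires, and have every derived promise carry the same token.

```javascript
// Toy model: a wrapper whose derived promises all carry the gesture
// token, so a completion handler can prove it descends from the gesture.
function withGesture(token, promise) {
  return {
    token: token, // survives chaining
    then: function (onFulfilled) {
      return withGesture(token, promise.then(function (value) {
        return onFulfilled(value, token);
      }));
    },
  };
}
```

An engine-level version would presumably propagate the state invisibly rather than via an explicit wrapper, but the chaining semantics would be the same.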
Re: [manifest] orientation member
On Monday, January 27, 2014 at 9:47 PM, Jonas Sicking wrote:
> On Mon, Jan 13, 2014 at 1:44 AM, Marcos Caceres w...@marcosc.com (mailto:w...@marcosc.com) wrote:
> Ok, makes sense. So my counter questions are:
> 1. Could we get away without using generic media queries and instead only allow switching on screen height and/or width?

Probably - need to check if there is anything else on which this decision would be made. This is something we will need to look into.

> 2. Could we get away with just using static orientations in v1? I.e. punt using different orientations for mobile/tablet until v2 of the manifest?

Personally, I think so. That's currently what is in the spec [1].

[1] http://w3c.github.io/manifest/#default_orientation-member
Re: [progress-events] Progress Events is a W3C Recommendation
On Wed, 12 Feb 2014 16:49:07 +0400, Jungkee Song jungk...@gmail.com wrote:
> Thanks Art. The work is attributed to Anne, Chaals and Ms2ger. Thanks!

Actually, I did editing, like Jungkee, and the work is attributed to quite a lot of people, many of whom worked on it before the WebApps group existed. Anyway, congratulations to everyone.

cheers
Chaals

> On Feb 12, 2014 9:28 PM, Arthur Barstow art.bars...@nokia.com wrote:
>> Congratulations to Anne, Jungkee and Chaals on the publication of a Progress Events Recommendation http://www.w3.org/TR/2014/REC-progress-events-20140211/ . (I updated this spec's PubStatus data to state that no additional work is planned and that the feature is now part of XHR.)

--
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
cha...@yandex-team.ru Find more at http://yandex.com
[Bug 24632] New: [meta][imports]: The spec should have fewer monkey patches
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24632

Bug ID: 24632
Summary: [meta][imports]: The spec should have fewer monkey patches
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
Priority: P2
Component: Component Model
Assignee: dglaz...@chromium.org
Reporter: morr...@google.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 20683

Spawned from Bug 24623: In general we should avoid monkey patching.

1) A specification may not be aware of the patches being applied and invalidate them.
2) Readers of a specification may not be aware of the patches being applied and write code that turns out to be wrong due to patches.
3) If patches start getting applied from multiple sources, the mental model quickly becomes too hard and mistakes start creeping in all over.

We have some for now. This is a master bug to track these to be removed.

--
You are receiving this mail because: You are on the CC list for the bug.
Re: Extending Mutation Observers to address use cases of
I pushed the Web Components folks about exactly this issue (why aren't these callbacks just MutationObservers?) early last year. They convinced me (and I remain convinced) that these signals should be Custom Element callbacks and not Mutation Observer records.

Here's the logic that convinced me: Custom Elements are a *strongly coupled concern*, while Mutation Observers *allow for* multiple decoupled concerns to peacefully co-exist. In a certain sense, you can extend the argument that CE callbacks should be MO records, and you arrive at the conclusion that you don't need Custom Elements at all -- that everything can be implemented with Mutation Observers. But the point of Custom Elements is twofold:

1) To allow implementation of Elements by user-space code in roughly the same model as privileged code.
2) To explain the platform.

Put another way: the *implementation* of an element simply needs to be privileged in some respects. For custom elements, this means:

a) There can only be one. I.e., we don't allow multiple registration of the same element: primary behavior is the domain of Custom Elements, secondary behavior is the domain of Mutation Observers.
b) Callbacks need to fire ASAP. It's important that the implementation of an element get a chance to respond to events before other concerns, so that it can create a synchronously consistent abstraction.

To my mind, Custom Elements callbacks really *should* be fully sync (yes, including firing createdCallback during parse), but various technical and security constraints make that impossible. In short, Custom Elements and Mutation Observers are servicing very different needs. Custom Elements are privileged, but limited and singular (I can only react to changes in myself and I'm the only responding party), while Mutation Observers are unprivileged, pervasive and multiple (I get to respond to anything in the document, and there are likely other parties doing work in the same place I am).
Therefore, it is neither a good idea to make Custom Elements more async, nor a good idea to make Mutation Observers more sync.

---

One final note, UNRELATED to custom elements' implementation: I think there's an argument to be made that Mutation Observers *should* be extended to allow for observation of trees which include DOM reachable through shadowRoots. The motivation for this would be to allow existing de-coupled concerns to operate faithfully in the presence of custom elements implemented with shadowDOM. The obvious concern here is that de-coupled code may interfere with the implementation of elements, but that's no more true with custom elements than it is today, and since shadowRoot is imperatively public, it's consistent to allow MutationObservers to continue to fully observe a document. However, I don't think there's any rush to do this. Just something to think about for a post-shadowDOM world.

On Wed, Feb 12, 2014 at 4:49 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
> On 02/12/2014 04:27 AM, Ryosuke Niwa wrote: On Feb 11, 2014, at 6:06 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote: * Olli Pettay wrote: We could add some scheduling thing to mutation observers. By default we'd use microtask, since that tends to be good for various performance reasons, but normal tasks or nanotasks could be possible too. Right, we need some sort of a switch. I'm not certain if we want to add it as a per-observation option or a global switch when we create an observer. My gut feeling is that we want the latter. It would be weird for some mutation records to be delivered earlier than others to the same observer.
> Yeah, I was thinking per observer. Something like var m = new MutationObserver(callback, { interval: 'task' }); m.observe(document, { childList: true, subtree: true }); Some devtools devs have asked for adding an 'interval: nanotask' thing. I was thinking to add such a thing only for addons and such in Gecko, because it brings back some of the performance problems Mutation Events have. But if web components stuff would be less special with such an option, perhaps it should be enabled for all.
> -Olli
> I'd like to know exact semantics requirements before we start jumping into details though. This sounds like adding a switch that would dynamically invalidate assumptions mutation observers might make, which sounds like a bad idea. Could you elaborate? I don't really follow what the problem is. Could you elaborate on what you see as a problem?
> - R. Niwa
[manifest] V1 ready for wider review
The editors of the [manifest] spec have now closed all substantive issues for v1. The spec defines the following:

* A link relationship for manifests (so they can be used with link rel=manifest).
* A standard file name for a manifest resource (/.well-known/manifest.json). Works the same as /favicon.ico for when link rel=manifest is missing.
* The ability to point to a start-url.
* Basic screen orientation hinting for when launching a web app.
* Launching the app in different display modes: fullscreen, minimal-ui, open in browser, etc.
* A way for scripts to check if the application was launched from a bookmark (i.e., similar to Safari's navigator.standalone).
* requestBookmark(), which is a way for a top-level document to request that it be bookmarked by the user. To not piss off users, it requires explicit user action to actually work. Expect <button>install my app</button> everywhere on the Web now :)

If you are wondering where some missing feature is, it's probably slated for [v2]. The reason v1 is so small is that it's all we could get agreement on amongst implementers (it's a small set, but it's a good set to kick things off and get us moving... and it's a small spec, so easy to quickly read over). We would appreciate your feedback on this set of features - please file [bugs] on GitHub. We know it doesn't fully realize *the dream* of installable web apps - but it gets us a few steps closer. If we don't get any significant objections, we will request to transition to LC in a week or so.

[manifest] http://w3c.github.io/manifest/
[v2] see goals for v2, https://github.com/w3c/manifest#goals-for-v2-and-beyond
[bugs] https://github.com/w3c/manifest/issues

-- Marcos Caceres
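For readers who want to see what such a manifest might look like, here is a guess at a minimal example. The member spellings follow the editor's draft as of this writing and may well change before LC, so treat this as illustrative rather than normative:

```json
{
  "name": "Example App",
  "start_url": "/start.html",
  "display": "minimal-ui",
  "orientation": "landscape"
}
```

A page would opt in with link rel=manifest, or fall back to /.well-known/manifest.json as described above.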
Re: Extending Mutation Observers to address use cases of
On Feb 12, 2014, at 11:23 AM, Rafael Weinstein rafa...@google.com wrote:
> I pushed the Web Components folks about exactly this issue (why aren't these callbacks just MutationObservers?) early last year. They convinced me (and I remain convinced) that these signals should be Custom Element callbacks and not Mutation Observer records.

I'm not arguing that custom elements should use mutation observers or that we should replace custom elements with mutation observers.

> In a certain sense, you can extend the argument that CE callbacks should be MO records, and you arrive at the conclusion that you don't need Custom Elements at all -- that everything can be implemented with Mutation Observers. But the point of Custom Elements is twofold: 1) To allow implementation of Elements by user-space code in roughly the same model as privileged code. 2) To explain the platform. Put another way: the *implementation* of an element simply needs to be privileged in some respects. For custom elements, this means a) There can only be one. I.e., we don't allow multiple registration of the same element: primary behavior is the domain of custom elements, secondary behavior is the domain of Mutation Observers. b) Callbacks need to fire ASAP. It's important that the implementation of an element get a chance to respond to events before other concerns so that it can create a synchronously consistent abstraction.

I'm not convinced that only custom elements require a synchronously consistent abstraction.

> To my mind, Custom Elements callbacks really *should* be fully sync (yes, including firing createdCallback during parse), but various technical and security constraints make that impossible. In short, Custom Elements and Mutation Observers are servicing very different needs. Custom Elements are privileged, but limited and singular (I can only react to changes in myself and I'm the only responding party), while Mutation Observers are unprivileged, pervasive and multiple (I get to respond to anything in the document, and there are likely other parties doing work in the same place I am).

The problem I have with this approach is that we'll then end up with not two but three parallel APIs for observing DOM mutations, each with its own delivery/dispatch mechanism and timing: mutation events, mutation observers, and custom elements. API fragmentation like that can't be good for the platform.

> I think there's an argument to be made that Mutation Observers *should* be extended to allow for observation of trees which include DOM reachable through shadowRoots. The motivation for this would be to allow existing de-coupled concerns to operate faithfully in the presence of custom elements implemented with shadowDOM. The obvious concern here is that de-coupled code may interfere with the implementation of elements, but that's no more true with custom elements than it is today, and since shadowRoot is imperatively public, it's consistent to allow MutationObservers to continue to fully observe a document.

I think this would be a nice opt-in feature; component authors should be able to choose whether or not to expose their internal DOM in embedding documents.

- R. Niwa
Re: Extending Mutation Observers to address use cases of
On Wed, Feb 12, 2014 at 2:08 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Feb 12, 2014, at 11:23 AM, Rafael Weinstein rafa...@google.com wrote:
>> In a certain sense, you can extend the argument that CE callbacks should be MO records, and you arrive at the conclusion that you don't need Custom Elements at all -- that everything can be implemented with Mutation Observers. But the point of Custom Elements is twofold: 1) To allow implementation of Elements by user-space code in roughly the same model as privileged code. 2) To explain the platform. Put another way: the *implementation* of an element simply needs to be privileged in some respects. For custom elements, this means a) There can only be one. I.e., we don't allow multiple registration of the same element: primary behavior is the domain of custom elements, secondary behavior is the domain of Mutation Observers. b) Callbacks need to fire ASAP. It's important that the implementation of an element get a chance to respond to events before other concerns so that it can create a synchronously consistent abstraction.
> I'm not convinced that only custom elements require a synchronously consistent abstraction.

I want to be *very* careful about exposing synchronous callbacks. While I'm sure a lot of authors will ask for it, it's a tremendous footgun.

The first type of footgun is that authors can footgun themselves when using these callbacks, for example by accidentally calling into external code while they are in an inconsistent state. Another thing to remember here is that you can only really have one consumer that truly receives synchronous callbacks. The moment you have two observers, the second observer doesn't have time to react before the first observer runs. This can be a problem both for the first observer, since it sees a world where the mutation has happened but the second observer hasn't had time to react to it, and for the second observer, since the world can have changed under it before it has had a chance to react to a mutation.

The second type of footgun is that by adding synchronous callbacks, we are limiting our own ability to extend and optimize the platform, since we have to worry about JS callbacks running at inopportune times. So by adding synchronous callbacks, we're footgunning ourselves.

>> To my mind, Custom Elements callbacks really *should* be fully sync (yes, including firing createdCallback during parse), but various technical and security constraints make that impossible. In short, Custom Elements and Mutation Observers are servicing very different needs. Custom Elements are privileged, but limited and singular (I can only react to changes in myself and I'm the only responding party), while Mutation Observers are unprivileged, pervasive and multiple (I get to respond to anything in the document, and there are likely other parties doing work in the same place I am).
> The problem I have with this approach is that we'll then end up with not two but three parallel APIs for observing DOM mutations, each with its own delivery/dispatch mechanism and timing: mutation events, mutation observers, and custom elements. API fragmentation like that can't be good for the platform.

Hopefully mutation events can go away. In Gecko we have since August 2012 warned authors whenever they are used that they are going to be removed. We've gotten very little feedback (none that I know of) that this is a problem.

>> I think there's an argument to be made that Mutation Observers *should* be extended to allow for observation of trees which include DOM reachable through shadowRoots. The motivation for this would be to allow existing de-coupled concerns to operate faithfully in the presence of custom elements implemented with shadowDOM. The obvious concern here is that de-coupled code may interfere with the implementation of elements, but that's no more true with custom elements than it is today, and since shadowRoot is imperatively public, it's consistent to allow MutationObservers to continue to fully observe a document.
> I think this would be a nice opt-in feature; component authors should be able to choose whether or not to expose their internal DOM in embedding documents.

For now, Mutation Observation never crosses shadow DOM boundaries. When we add that ability, I think it needs to be explicitly opted into, and even when explicitly opted into, it should not cross the boundary into private shadow DOM trees. In short, I think this is an orthogonal discussion, and it should follow the decision of the open/closed debate. This is just one more aspect of how shadow DOM nodes can be exposed and should follow the policies that are used elsewhere.

/ Jonas
Re: Extending Mutation Observers to address use cases of
On Feb 12, 2014, at 2:33 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Feb 12, 2014 at 2:08 PM, Ryosuke Niwa rn...@apple.com wrote: On Feb 12, 2014, at 11:23 AM, Rafael Weinstein rafa...@google.com wrote: In a certain sense, you can extend the argument that CE callbacks should be MO records, and you arrive at the conclusion that you don't need Custom Elements at all -- that everything can be implemented with Mutation Observers. But the point of Custom Elements is two fold: 1) To allow implementation of Elements by user-space code in roughly the same model as privileged code. 2) To explain the platform. Put another way: the *implementation* of an element simply needs to be privileged in some respects. For custom elements, this means a) There can only be one. I.e., we don't allow multiple registration of the same element: Primary behavior is the domain of custom elements, secondary behavior is the domain of Mutation Observers b) Callbacks need to fire ASAP. It's important that the implementation of an element get a chance to respond to events before other concerns so that it can create a synchronously consistent abstraction I'm not convinced that only custom elements require a synchronously consistent abstraction. I want to be *very* careful about exposing synchronous callbacks. While I'm sure a lot of authors will ask for it, it's a tremendous foot gun. Right. This is why I’d like to know exactly why end-of-microtask synchronicity isn’t sufficient for custom elements. The first type of footgun that it is is that authors can footgun themselves when using these callbacks. For example by accidentally calling into external code while they are in an inconsistent state. Another thing to remember here is that you can only really have one consumer that truly receives synchronous callbacks. The moment you have two observers it means that the second observer doesn't have time to react before the first observer runs. 
This can be a problem both for the first observer, since it sees a world where the mutation has happened, but where the second observer hasn't had time to react to it, and for the second observer, since the world can have changed under it before it has had a chance to react to a mutation. That is a good point. There is a clear disadvantage in the case of multiple observers observing the same node. The second type of footgun is that by adding synchronous callbacks, we are limiting our own ability to extend and optimize the platform since we have to worry about JS callbacks running at inopportune times. So by adding synchronous callbacks, we're footgunning ourselves. But why is it okay to add semi-synchronous callbacks to custom elements then? I think there's an argument to be made that Mutation Observers *should* be extended to allow for observation of trees which include DOM reachable through shadowRoots. The motivation for this would be to allow existing de-coupled concerns to operate faithfully in the presence of custom elements implemented with shadowDOM. The obvious concern here is that de-coupled code may interfere with the implementation of elements, but that's no more true with custom elements than it is today, and shadowRoot is imperatively public, it's consistent to allow MutationObservers to continue to fully observe a document. I think this would be a nice opt-in feature; component authors should be able to choose whether or not to expose its internal DOM in embedding documents. For now Mutation Observation never crosses shadow DOM boundaries. When we add that ability I think it needs to be explicitly opted into, and even when explicitly opted into, it should not cross the boundary into private shadow DOM trees. In short, I think this is an orthogonal discussion, and should follow the decision of the open/close debate. This is just one more aspect of how shadow DOM nodes can be exposed and should follow the policies that are used elsewhere. I agree. 
This should be discussed separately. - R. Niwa
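The two-observer ordering hazard described above can be made concrete with a toy model. This is not a real DOM API - the names (`makeNode`, `mutateSync`, `mutateBatched`, `flush`) are invented for illustration - but it shows why synchronous delivery only really works for one consumer, while batched (end-of-microtask-style) delivery gives every observer the same consistent record stream:

```javascript
// Toy model: a "node" whose mutations are delivered to observers either
// synchronously or batched. All names here are invented for illustration.
function makeNode() {
  const observers = [];
  const log = [];
  return {
    log,
    queue: [],
    observe(name, callback) { observers.push({ name, callback }); },
    // Synchronous delivery: observer A runs (and may mutate further!) before
    // observer B has even heard about the first mutation.
    mutateSync(change) {
      for (const o of observers) {
        log.push(`${o.name} sees ${change}`);
        o.callback(change);
      }
    },
    // End-of-microtask-style delivery, simulated with an explicit queue:
    // every observer sees the same batch of records, in order.
    mutateBatched(change) { this.queue.push(change); },
    flush() {
      const records = this.queue.splice(0);
      for (const o of observers) {
        log.push(`${o.name} sees [${records.join(', ')}]`);
      }
    },
  };
}

const node = makeNode();
node.observe('A', (change) => {
  // A reacts to the first mutation by mutating again, so B ends up
  // observing "mutation 2" before it has even seen "mutation 1".
  if (change === 'mutation 1') node.mutateSync('mutation 2');
});
node.observe('B', () => {});
node.mutateSync('mutation 1');
console.log(node.log);
```

With synchronous delivery the log comes out as A sees mutation 1, A sees mutation 2, B sees mutation 2, B sees mutation 1 - the world changed under B before it reacted, which is exactly the hazard Jonas describes.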
[Bug 24638] New: [Shadow]: elementFromPoint should return the host when you hit a Text node
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24638

Bug ID: 24638
Summary: [Shadow]: elementFromPoint should return the host when you hit a Text node
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P2
Component: Component Model
Assignee: dglaz...@chromium.org
Reporter: espr...@gmail.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 14978

Seeing as we don't want to throw an exception when you put a Text node as a child of a ShadowRoot, we should at the very least return the host element, so that hit testing in a ShadowRoot that has direct text children does something sensible. We probably need to make it return the host if you hit a border/background color too? (This also makes sense since you can querySelector(':host'), so the host really is in the same scope as the ShadowRoot.)

-- You are receiving this mail because: You are on the CC list for the bug.
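The fallback the bug asks for can be sketched with plain objects standing in for DOM nodes (elementFromPoint itself is a browser API, so this only models the "return the host instead of the text node" step; the function name `adjustHit` and the mock shapes are invented for illustration):

```javascript
const TEXT_NODE = 3; // Node.TEXT_NODE

// If hit testing lands on a text node that is a direct child of a ShadowRoot,
// return the shadow host instead, so callers always get an element in scope.
function adjustHit(node) {
  if (node.nodeType === TEXT_NODE && node.parentNode && node.parentNode.host) {
    return node.parentNode.host; // ShadowRoot -> its host element
  }
  return node;
}

// Mock tree: an element host with a shadow root whose only child is text.
const host = { nodeType: 1, id: 'host' };
const shadowRoot = { host };
const text = { nodeType: TEXT_NODE, parentNode: shadowRoot };

// A hit on the text child resolves to the host; element hits pass through.
console.log(adjustHit(text) === host, adjustHit(host) === host);
```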
[Bug 24639] New: [Shadow]: Each section on the spec needs examples
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24639

Bug ID: 24639
Summary: [Shadow]: Each section on the spec needs examples
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P2
Component: Component Model
Assignee: dglaz...@chromium.org
Reporter: dglaz...@chromium.org
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 14978

For example:
* multiple shadow roots per host translate to the inheritance model in DOM.
* what happens when you change the value of the select attribute (show that it's dynamic)

-- You are receiving this mail because: You are on the CC list for the bug.
Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)
On Tue, Feb 11, 2014 at 5:16 PM, Maciej Stachowiak m...@apple.com wrote:

On Feb 11, 2014, at 4:04 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

On Tue, Feb 11, 2014 at 3:50 PM, Maciej Stachowiak m...@apple.com wrote:

On Feb 11, 2014, at 3:29 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

Dimitri, Maciej, Ryosuke - is there a mutually agreeable solution here?

I am not exactly sure what problem this thread hopes to raise and whether there is a need for anything other than what is already planned.

In the email Ryosuke cited, Tab said something that sounded like a claim that the WG had decided to do public mode only: http://lists.w3.org/Archives/Public/www-style/2014Feb/0221.html

Quoting Tab: The decision to do the JS side of Shadow DOM this way was made over a year ago. Here's the relevant thread for the decision: http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/thread.html#msg312 (it's rather long) and a bug tracking it: https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562.

I can't speak for Ryosuke, but when I saw this claim, I was honestly unsure whether there had been a formal WG decision on the matter that I'd missed. I appreciate your clarification that you do not see it that way.

Quoting Dimitri again: The plan, per the thread I mentioned above, is to add a flag to createShadowRoot that hides it from DOM traversal APIs and relevant CSS selectors: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144.

That would be great. Can you please prioritize resolving this bug[1]? It has been waiting for a year, and at the time the private/public change was made, it sounded like this would be part of the package.

Can you help me understand why you feel this needs to be prioritized? I mean, I don't mind, but it would be great if I had an idea of what's the driving force behind the urgency.

(1) It blocks the two dependent issues I mentioned.
(2) As a commenter on a W3C spec and member of the relevant WG, I think I am entitled to a reasonably prompt response from a spec editor. This bug has been open since November 2012. I think I have waited long enough, and it is fair to ask for some priority now. If it continues to go on, then an outside observer might get the impression that failing to address this bug is deliberate stalling. Personally, I prefer to assume good faith, and I think you have just been super busy. But it would show good faith in return to address the bug soon.

Note: as far as I know, there is no technical issue or required feedback blocking bug 20144. However, if there is any technical input you need, or if you would find it helpful to have a spec diff provided to use as you see fit, I would be happy to provide such. Please let me know!

It seems like there are a few controversies that are gated on having the other mode defined:

- Which of the two modes should be the default (if any)?

This is re-opening the year-old discussion settled in http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg800, right?

I'm not sure what you mean by settled. You had a private meeting and the people there agreed on what the default should be. That is fine. Even using that to make a provisional editing decision seems fine. However, I do not believe that makes it settled for purposes of the WG as a whole. In particular, I have chosen not to further debate which mode should be the default until both modes exist, something that I've been waiting on for a while. I don't think that means I lose my right to comment and to have my feedback addressed. In fact, my understanding of the process is this: the WG is required to address any and all feedback that comes in at any point in the process.
And an issue is not even settled to the point of requiring explicit reopening unless there is a formal WG decision (as opposed to just an editor's decision based on their own read of input from the WG).

- Should shadow DOM styling primitives be designed so that they can work for private/closed components too?

Sure. The beauty of a hidden/closed mode is that it's a special case of the open mode, so we can simply say that if a shadow root is closed, the selectors don't match anything in that tree. I left a comment to that effect on the bug.

Right, but that leaves you with no styling mechanism that offers more fine-grained control, suitable for use with closed mode. Advocates of the current styling approach have said we need not consider closed mode at all, because the Web Apps WG has decided on open mode. If what we actually decided is to have both (and that is my understanding of the consensus), then I'd like the specs to reflect that, so the discussion in www-style can be based on facts. As a more basic point, mention of closed mode to exclude it from /shadow most likely has to exist in the shadow styling spec, not just the Shadow DOM spec. So there is a cross-spec
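The "closed is a special case of open" observation can be modeled in a few lines of plain JavaScript. This is a toy sketch of the semantics discussed around bug 20144, not the spec's API - the names (`attachToyShadow`, `toyQueryShadow`, the `closed` option) are assumptions for illustration. A closed shadow root is simply never returned from the host's shadowRoot accessor, so selector-style lookups find nothing in it:

```javascript
// Toy model of a hidden/closed shadow root (names invented for illustration).
function attachToyShadow(host, { closed = false } = {}) {
  const root = { host, closed, children: [] };
  Object.defineProperty(host, 'shadowRoot', {
    get() { return closed ? null : root; }, // hidden from page script if closed
    configurable: true,
  });
  return root;
}

// Stand-in for a selector that descends into shadow trees (e.g. /shadow):
// when the root is closed, shadowRoot is null, so nothing matches.
function toyQueryShadow(host, predicate) {
  const root = host.shadowRoot;
  if (!root) return [];
  return root.children.filter(predicate);
}

const openHost = {};
attachToyShadow(openHost).children.push({ tag: 'span' });

const closedHost = {};
attachToyShadow(closedHost, { closed: true }).children.push({ tag: 'span' });

console.log(toyQueryShadow(openHost, (n) => n.tag === 'span').length);   // 1
console.log(toyQueryShadow(closedHost, (n) => n.tag === 'span').length); // 0
```

The closed case falls out of the open case by one null check, which is the sense in which the two modes can share one styling and traversal model.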
[Bug 24349] [imports]: Import documents should always be in no-quirks mode
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24349

Morrita Hajime morr...@google.com changed:

  What       |Removed |Added
  Status     |NEW     |RESOLVED
  CC         |        |morr...@google.com
  Resolution |---     |FIXED

--- Comment #2 from Morrita Hajime morr...@google.com ---
https://github.com/w3c/webcomponents/commit/f7945a427e467285902951336f5b963682e01875

-- You are receiving this mail because: You are on the CC list for the bug.
Re: [manifest] V1 ready for wider review
On Wed, Feb 12, 2014 at 12:06 PM, Marcos Caceres mar...@marcosc.com wrote:

The editors of the [manifest] spec have now closed all substantive issues for v1. The spec defines the following:

* A link relationship for manifests (so they can be used with <link rel=manifest>).
* A standard file name for a manifest resource (/.well-known/manifest.json). Works the same as /favicon.ico for when <link rel=manifest> is missing.
* The ability to point to a start-url.
* Basic screen orientation hinting for when launching a web app.
* Launching the app in different display modes: fullscreen, minimal-ui, open in browser, etc.
* A way for scripts to check if the application was launched from a bookmark (i.e., similar to Safari's navigator.standalone).
* requestBookmark(), a way for a top-level document to request that it be bookmarked by the user. To not piss off users, it requires explicit user action to actually work. Expect <button>install my app</button> everywhere on the Web now :)

If you are wondering where some missing feature is, it's probably slated for [v2]. The reason v1 is so small is that it's all we could get agreement on amongst implementers (it's a small set, but it's a good set to kick things off and get us moving... and it's a small spec, so easy to quickly read over). We would appreciate your feedback on this set of features - please file [bugs] on GitHub. We know it doesn't fully realize *the dream* of installable web apps - but it gets us a few steps closer. If we don't get any significant objections, we will request to transition to LC in a week or so.

I still think that leaving out name and icons from a manifest about bookmarks is a big mistake. I just made my case here: http://lists.w3.org/Archives/Public/www-tag/2014Feb/0039.html

Basically, I think we need to make the manifest more self-sufficient. I think that we're getting Ruby's postulate the wrong way around by making the file that describes the bookmark not contain all the data about the bookmark.
Instead, the two most important pieces of data about the bookmark, name and icons, will live in a completely separate HTML file, often with no way to get from the manifest to that separate HTML file.

/ Jonas
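For concreteness, a v1 manifest covering the features listed above might look something like the following. The member names are illustrative guesses at the spec's spelling, and, as Jonas points out, name and icons are deliberately absent from v1:

```json
{
  "start_url": "/index.html",
  "display": "minimal-ui",
  "orientation": "landscape"
}
```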
[manifest] name and icons, was Re: [manifest] V1 ready for wider review
On Thursday, February 13, 2014 at 1:21 AM, Jonas Sicking wrote:

I still think that leaving out name and icons from a manifest about bookmarks is a big mistake. I just made my case here: http://lists.w3.org/Archives/Public/www-tag/2014Feb/0039.html

I'll reply separately.

Basically, I think we need to make the manifest more self-sufficient. I think that we're getting Ruby's postulate the wrong way around by making the file that describes the bookmark not contain all the data about the bookmark. Instead, the two most important pieces about the bookmark, name and icons, will live in a completely separate HTML file, often with no way to get from the manifest to that separate HTML file.

I still think that icons and name are outside the scope of V1 - but I've added them to V2. The whole manifest and icon updating mechanism you describe in your email to the TAG adds a bunch of complexity (yes, we need to deal with it eventually, as it's an extremely valid use case - but we can defer it to HTML at this moment and for a few months... even if UAs don't do updating of icons and name from HTML). I still hold that we should get the most critical and least controversial functionality (display mode, default-orientation, and start-url) standardized before we do the other stuff.

It also gives a chance for UAs to catch up and implement HTML's application-name and <link rel=icon> properly. UAs are going to need to support those HTML features to work with apps that don't make use of manifests. And apps that use manifests will work just fine till we add proper support for name and icons into the manifest - all web apps will need to include application-name and <link rel=icon> (as well as a bunch of proprietary stuff!) to target today's and yesterday's UAs regardless. So, IMHO, there is not much to be won by putting name and icons into V1 for implementers or for developers at this moment.
I would go as far as to say that it's initially harmful to have name and icon in V1, because it discourages UAs from fixing their support for application-name and <link rel=icon>. Having the fallback behavior explicitly tested in V1 of the manifest may help improve support for those features of HTML. So, I'm not saying let's never do name and icon - I'm saying let's just do the easy stuff we have some agreement on first.
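The HTML fallback being argued for here is existing markup. A page targeting v1 would carry its name and icon in the document itself and point at the manifest for the rest, along these lines (file names are illustrative):

```html
<!-- Name and icon come from existing HTML features in v1... -->
<meta name="application-name" content="My App">
<link rel="icon" sizes="64x64" href="/icon-64.png">
<!-- ...while start-url, display mode, and orientation come from the manifest. -->
<link rel="manifest" href="/manifest.json">
```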