Re: [IndexedDB] Current editor's draft
On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote: On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com wrote:

Hi folks, There are several unimplemented proposals on strengthening and expanding IndexedDB. The reason I have not implemented them yet is because I am not convinced they are necessary in toto. Here's my attempt at explaining why. I apologize in advance for not responding to individual proposals due to personal time constraints. I will however respond in detail on individual bug reports, e.g., as I did with 9975. I used the current editor's draft asynchronous API to understand where some of the remaining programming difficulties remain. Based on this attempt, I find several areas to strengthen, the most prominent of which is how we use transactions. Another is to add the concept of a catalog as a special kind of object store.

Hi Nikunj, Thanks for replying! I'm very interested in getting this stuff sorted out pretty quickly as almost all other proposals in one way or another are affected by how this stuff develops.

Here are the main areas I propose to address in the editor's spec: 1. It is time to separate the dynamic and static scope transaction creation so that they are asynchronous and synchronous respectively.

I don't really understand what this means. What are dynamic and static scope transaction creation? Can you elaborate?

This is the difference in the API in my email between openTransaction and transaction. Dynamic and static scope have been defined in the spec for a long time.

Ah, I think I'm following you now. I'm actually not sure that we should have dynamic scope at all in the spec, I know Jeremy has expressed similar concerns. However if we are going to have dynamic scope, I agree it is a good idea to have separate APIs for starting dynamic-scope transactions from static-scope transactions.

2.
Provide a catalog object that can be used to atomically add/remove object stores and indexes as well as modify version.

It seems to me that a catalog object doesn't really provide any functionality over the proposal in bug 10052? The advantage that I see with the syntax proposal in bug 10052 is that it is simpler. http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052 Can you elaborate on what the advantages are of catalog objects?

To begin with, 10052 shuts down the users of the database completely when only one is changing its structure, i.e., adding or removing an object store.

This is not the case. Check the steps defined for setVersion in [1]. At no point are databases shut down automatically. Only once all existing database connections are manually closed, either by calls to IDBDatabase.close() or by the user leaving the page, is the 'success' event from setVersion fired. [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

How can we make it less draconian?

The 'versionchange' event allows pages that are currently using the database to handle the change. The page can inspect the new version number supplied by the 'versionchange' event, and if it knows that it is compatible with a given upgrade, all it needs to do is to call db.close() and then immediately reopen the database using indexedDB.open(). The open call won't complete until the upgrade is finished.

Secondly, I don't see how that approach can produce atomic changes to the database.

When the transaction created in step 4 of setVersion defined in [1] is created, only one IDBDatabase object to the database is open. As long as that transaction is running, no requests returned from IDBFactory.open will receive a 'success' event. Only once the transaction is committed, or aborted, will those requests succeed. This guarantees that no other IDBDatabase object can see a partial update.
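To make the handshake described above concrete, here is a toy model (plain JavaScript, not the real IndexedDB API; all names such as ToyDatabase are invented for illustration) of how setVersion's 'success' is held back until every other connection responds to 'versionchange' by closing:

```javascript
// Toy model of the setVersion handshake discussed above:
// 'versionchange' fires on the other open connections, and the
// upgrade's 'success' callback is delayed until they all close().
class ToyDatabase {
  constructor() { this.connections = new Set(); }
  open() {
    const conn = {
      onversionchange: null,
      close: () => { this.connections.delete(conn); this._maybeStart(); },
    };
    this.connections.add(conn);
    return conn;
  }
  setVersion(requester, newVersion, onSuccess) {
    this._pending = { requester, newVersion, onSuccess };
    // Notify every *other* connection; each may close (and reopen later).
    for (const c of [...this.connections]) {
      if (c !== requester && c.onversionchange) c.onversionchange(newVersion);
    }
    this._maybeStart();
  }
  _maybeStart() {
    const p = this._pending;
    if (!p) return;
    // Only once the requester is the sole remaining connection does the
    // VERSION_CHANGE transaction start and 'success' fire.
    if ([...this.connections].every(c => c === p.requester)) {
      this._pending = null;
      p.onSuccess(p.newVersion);
    }
  }
}

const db = new ToyDatabase();
const a = db.open();   // page A, currently using the database
const b = db.open();   // page B, wants to upgrade
const events = [];
a.onversionchange = v => { events.push('versionchange:' + v); a.close(); };
db.setVersion(b, 2, v => events.push('success:' + v));
// events is ['versionchange:2', 'success:2']
```

Note that nothing here shuts page A down: it chooses to close in its own 'versionchange' handler, which is the point being argued.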
Further, only once the transaction created by setVersion is committed, are the requested objectStores and indexes created/removed. This guarantees that the database is never left with a partial update. That means that the changes are atomic, right?

Thirdly, we shouldn't need to change version in order to perform database changes.

First off, note that if the upgrade is compatible, you can just pass the existing database version to setVersion. So no version *change* is actually needed. Second, I don't think there is much difference between

var txn = db.transaction();
db.openCatalog(txn).onsuccess = ...

vs

db.setVersion(5).onsuccess = ...

I don't see that telling people that they have to use the former is a big win.

The problem that I see with the catalog proposal, if I understand it correctly, is that it means that a page that has a IDBDatabase object open has to always be prepared for calls to openObjectStore/openTransaction failing. I.e. the page can't ever know that another page was opened which at any point created a catalog and removed an objectStore. This
[widgets] Draft agenda for 8 July 2010 voice conf
Below is the draft agenda for the July 8 Widgets Voice Conference (VC). Inputs and discussion before the VC on all of the agenda topics via public-webapps is encouraged (as it can result in a shortened meeting). Please address Open/Raised Issues and Open Actions before the meeting: http://www.w3.org/2008/webapps/track/products/8

Minutes from the last VC: http://www.w3.org/2010/07/01-wam-minutes.html

-Art Barstow

Agenda:

1. Review and tweak agenda

2. Announcements

3. Packaging and Configuration spec http://dev.w3.org/2006/waf/widgets/
   a. Issue-117: In Widget P&C spec, need to clarify in the spec that dir attribute does not apply to attributes that are IRIs, Numeric, Keywords, etc. The dir attribute only affects human readable strings. http://www.w3.org/2008/webapps/track/issues/117
   Marcos' request to I18N WG for feedback: http://lists.w3.org/Archives/Public/public-webapps/2010JulSep/0041.html

4. Widget Interface spec http://dev.w3.org/2006/waf/widgets-api/
   a. Issue-116: Need to flesh out the security considerations for the openURL method in the Widget Interface spec: http://www.w3.org/2008/webapps/track/issues/116
   Marcos' proposed resolution to Issue-116 (remove openURL from the spec): http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/1229.html

5. URI Scheme spec http://dev.w3.org/cvsweb/2006/waf/widgets-uri/
   a. Status and Open Actions:
   http://www.w3.org/2008/webapps/track/actions/526 - Widget URI scheme: define the widget *URI* syntax in terms of RFC 3986
   http://www.w3.org/2008/webapps/track/actions/551 - Add requirements to Widget URIs based on what's in the requirements document

6. AOB

Logistics:
Time: 22:00 Tokyo; 16:00 Helsinki; 15:00 Paris; 14:00 London; 09:00 Boston; 06:00 Seattle
Duration: 60 minutes max
Zakim Bridge: +1.617.761.6200, +33.4.89.06.34.99 or +44.117.370.6152; PIN: 9231 (WAF1)
IRC: channel = #wam; irc://irc.w3.org:6665 ; http://cgi.w3.org/member-bin/irc/irc.cgi
Confidentiality of minutes: Public
[widgets] viewmodes parsing tests
Hi, During testing, Opera's QA discovered that we were missing tests for the viewmodes attribute. Opera is contributing the following tests to the test suite: http://dev.w3.org/2006/waf/widgets/test-suite/test-cases/ta-viewmodes/ I've added them to the test-suite file, and they now appear in both the implementation report and the test suite. Kind regards, Marcos -- Marcos Caceres Opera Software
Re: Web Messaging status
On Wed, 7 Jul 2010, Arthur Barstow wrote: Hixie - since Web Messaging [1] is now in WebApps' charter, would you please provide a short status of that spec? Not really much to report; it's at the same stage as the rest of HTML5. I'll have a WD ready along with all the LCs in a few weeks. Right now I'm prioritising the captions support for HTML video, after which I have some high-priority Web Sockets work and some work on the HTML5 parsing rules to do. My fourth priority is then to get the LC drafts and Web Messaging in order for publication. -- Ian Hickson http://ln.hixie.ch/ Things that are impossible just take longer.
Re: question about number of occurrences of author and content elements (in Widget packaging spec)
hallo Marcos (and sorry for the confusion in copying groups) I think the clarifications below should be fine. We are using the W3C tests but just wanted to be sure we were interpreting the test cases in the proper way Thanks for your help Saludos! --- ricardo

On Fri, Jul 2, 2010 at 11:00 AM, Marcos Caceres marc...@opera.com wrote: Hi Ricardo, (moving discussion to public-webapps)

On 7/2/10 5:56 AM, Ricardo Varela wrote: hallo all, hallo Marcos, We have a small question regarding what we interpret may be an inconsistency in the behaviours for parsing a config file as commented in the W3C widget packaging spec [1] According to the spec (latest and also older versions), the occurrences of some elements (eg: author or content) have to be zero or one

I'm sorry, the specification is unclear. It says expected children (in any order), but it certainly is not intended to be a restriction on authors - that is to say, it would make no sense to punish authors who put in two author elements by mistake. A conformance checker could then warn if something unexpected (such as two author elements) is found in the document. This is defined in this yet to be published spec: http://dev.w3.org/2006/waf/widgets-pc-cc/Overview.src.html

However, on the algorithm to process a configuration document quoted below, it states: If this is not the first author element encountered, then the user agent must ignore this element and any child nodes It just says ignore and doesn't say to consider it as error Isn't this a contradiction in the parsing of the configuration document? We understand that it should be one of these 2 cases: a) we allow for more than one instance of author and content and let the first one take precedence (and therefore the occurrences should be zero or more)

No, only one is expected.
b) we allow only one instance of author and content elements (and therefore the parsing algorithm has got to stop with error on further occurrences)

Certainly not: the parser is not a conformance checker. The parser should be able to flexibly handle all garbage input gracefully, as well as be future compatible (in case we want to allow more than one author or content element in the future).

Would appreciate some clarification about this, as we want to clarify what to do for our compliance tests

I hope that clarifies things. If not, I'm happy to discuss further. Also, are you making your own compliance tests or using the official ones?: http://dev.w3.org/2006/waf/widgets/test-suite/

Thanks a lot in advance! Saludos! [1] http://www.w3.org/TR/widgets/ -- Marcos Caceres Opera Software -- Ricardo Varela - http://phobeo.com - http://twitter.com/phobeo Though this be madness, yet there's method in 't
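The "ignore, don't error" rule Marcos describes can be sketched as a lenient processor: the first author/content element wins, later ones are silently skipped, and warning about them is left to a separate conformance checker. This is a simplified illustration, not the spec's actual algorithm; the plain {name, text} element records are invented for the example:

```javascript
// Lenient "first one wins" processing of at-most-once elements, as
// discussed above. The processor never aborts on repeats; it just
// ignores them and records a warning a conformance checker could show.
function processConfig(elements) {
  const doc = {};
  const warnings = [];
  const atMostOnce = new Set(['author', 'content']);
  for (const el of elements) {
    if (!atMostOnce.has(el.name)) continue;
    if (el.name in doc) {
      // Not the first occurrence: ignore it (no error, no abort).
      warnings.push('ignored repeated <' + el.name + '> element');
      continue;
    }
    doc[el.name] = el.text;
  }
  return { doc, warnings };
}

const { doc, warnings } = processConfig([
  { name: 'author', text: 'Ricardo' },
  { name: 'author', text: 'Marcos' },   // ignored, first one wins
  { name: 'content', text: 'index.html' },
]);
// doc.author === 'Ricardo'; warnings has one entry
```

This also keeps the parser forward compatible: if a future spec allows repeated elements, older parsers degrade gracefully instead of rejecting the widget.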
Re: question about number of occurrences of author and content elements (in Widget packaging spec)
Ok, please let me know if you need me to clarify anything in the spec. I'm happy to help where I can. Please also note that I checked in a bunch of tests relating to viewmodes today. Kind regards, Marcos On 7/7/10 6:39 PM, Ricardo Varela wrote: hallo Marcos (and sorry for the confusion in copying groups) I think the clarifications below should be fine. We are using the W3C tests but just wanted to be sure we were interpreting the test cases in the proper way Thanks for your help Saludos! --- ricardo On Fri, Jul 2, 2010 at 11:00 AM, Marcos Caceresmarc...@opera.com wrote: Hi Ricardo, (moving discussion to public-webapps) On 7/2/10 5:56 AM, Ricardo Varela wrote: hallo all, hallo Marcos, We have a small question regarding what we interpret may be an inconsistency in the behaviours for parsing a config file as commented in the W3C widget packaging spec [1] According to the spec (latest and also older versions), the occurrences of some elements (eg: author or content) have to be zero or one I'm sorry, the specification is unclear. It says expected children (in any order), but it certainly is not intended to be a restriction on authors - that is to say, it would make no sense to punish authors who put in two author elements by mistake. A conformance checker could then warn if something out of the expected (such as two author elements) if found in the document. This is defined in this yet to be published spec: http://dev.w3.org/2006/waf/widgets-pc-cc/Overview.src.html However, on the algorithm to process a configuration document quoted below, it states: If this is not the first author element encountered, then the user agent must ignore this element and any child nodes It just says ignore and doesn't say to consider it as error Isn't this a contradiction in the parsing of the configuration document? 
We understand that it should be one of these 2 cases: a) we allow for more than one instance of author and content and let the first one take precedence (and therefore the occurrences should be zero or more) No, only one is expected. b) we allow only one instance of author and content elements (and therefore the parsing algorithm has got to stop with error on further occurrences) Certainly not: the parser is not a conformance checker. The parser should be able to flexibly handle all garbage input gracefully, as well as be future compatible (in case we want to allow more than one author or content element on the future). Would appreciate some clarification about this, as we want to clarify what to do for our compliance tests I hope that clarifies things. If not, I'm happy to discuss further. Also, are you making your own compliance tests or using the official ones?: http://dev.w3.org/2006/waf/widgets/test-suite/ Thanks a lot in advance! Saludos! [1] http://www.w3.org/TR/widgets/ -- Marcos Caceres Opera Software -- Marcos Caceres Opera Software
Re: [IndexedDB] Current editor's draft
On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote: On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com wrote: Hi folks, There are several unimplemented proposals on strengthening and expanding IndexedDB. The reason I have not implemented them yet is because I am not convinced they are necessary in toto. Here's my attempt at explaining why. I apologize in advance for not responding to individual proposals due to personal time constraints. I will however respond in detail on individual bug reports, e.g., as I did with 9975. I used the current editor's draft asynchronous API to understand where some of the remaining programming difficulties remain. Based on this attempt, I find several areas to strengthen, the most prominent of which is how we use transactions. Another is to add the concept of a catalog as a special kind of object store. Hi Nikunj, Thanks for replying! I'm very interested in getting this stuff sorted out pretty quickly as almost all other proposals in one way or another are affected by how this stuff develops. Here are the main areas I propose to address in the editor's spec: 1. It is time to separate the dynamic and static scope transaction creation so that they are asynchronous and synchronous respectively. I don't really understand what this means. What are dynamic and static scope transaction creation? Can you elaborate? This is the difference in the API in my email between openTransaction and transaction. Dynamic and static scope have been defined in the spec for a long time. In fact, dynamic transactions aren't explicitly specified anywhere. They are just mentioned. You need some amount of guessing to find out what they are or how to create one (i.e. pass an empty list of store names). Ah, I think I'm following you now. 
I'm actually not sure that we should have dynamic scope at all in the spec, I know Jeremy has expressed similar concerns. However if we are going to have dynamic scope, I agree it is a good idea to have separate APIs for starting dynamic-scope transactions from static-scope transactions.

I think it would simplify matters a lot if we were to drop dynamic transactions altogether. And if we do that, then we can also safely move the 'mode' parameter to the Transaction interface, since all the object stores in a static transaction can only be opened in the same mode.

2. Provide a catalog object that can be used to atomically add/remove object stores and indexes as well as modify version.

It seems to me that a catalog object doesn't really provide any functionality over the proposal in bug 10052? The advantage that I see with the syntax proposal in bug 10052 is that it is simpler. http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052 Can you elaborate on what the advantages are of catalog objects?

To begin with, 10052 shuts down the users of the database completely when only one is changing its structure, i.e., adding or removing an object store.

This is not the case. Check the steps defined for setVersion in [1]. At no point are databases shut down automatically. Only once all existing database connections are manually closed, either by calls to IDBDatabase.close() or by the user leaving the page, is the 'success' event from setVersion fired. [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

How can we make it less draconian?

The 'versionchange' event allows pages that are currently using the database to handle the change. The page can inspect the new version number supplied by the 'versionchange' event, and if it knows that it is compatible with a given upgrade, all it needs to do is to call db.close() and then immediately reopen the database using indexedDB.open(). The open call won't complete until the upgrade is finished.
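A static-only design as proposed here can be sketched as follows (a hypothetical factory, not the spec'd API; the function and mode names are invented for illustration): the store list must be declared up front, and one mode covers the whole transaction.

```javascript
// Sketch of a static-scope-only transaction factory, assuming dynamic
// transactions were dropped as proposed above: the scope is fixed at
// creation time and a single mode applies to every store in it.
function createStaticTransaction(storeNames, mode = 'READ_ONLY') {
  if (!Array.isArray(storeNames) || storeNames.length === 0) {
    // An empty store list was the back door into dynamic scope; reject it.
    throw new Error('static transactions need an explicit, non-empty scope');
  }
  if (mode !== 'READ_ONLY' && mode !== 'READ_WRITE') {
    throw new Error('unknown mode: ' + mode);
  }
  return { scope: [...storeNames], mode };
}

const txn = createStaticTransaction(['books'], 'READ_WRITE');
// txn.scope is ['books'], txn.mode is 'READ_WRITE'
let rejected = false;
try { createStaticTransaction([]); } catch (e) { rejected = true; }
// rejected is true: no dynamic (empty-scope) transactions allowed
```

The design point is that with a known scope and a single mode, the implementation can take all locks up front and deadlocks between transactions become impossible by construction.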
I had a question here: why does the page need to call 'close'? Any pending transactions will run to completion and new ones should not be allowed to start if a VERSION_CHANGE transaction is waiting to start. From the description of what 'close' does in 10052, I am not entirely sure it is needed. Secondly, I don't see how that approach can produce atomic changes to the database. When the transaction created in step 4 of setVersion defined in [1] is created, only one IDBDatabase object to the database is open. As long as that transaction is running, no requests returned from IDBFactory.open will receive a 'success' event. Only once the transaction is committed, or aborted, will those requests succeed. This guarantees that no other IDBDatabase object can see a partial update. Further, only once the transaction created by setVersion is committed, are the requested objectStores and indexes
Re: [IndexedDB] Current editor's draft
On Wed, Jul 7, 2010 at 10:41 AM, Andrei Popescu andr...@google.com wrote: On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote: On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com wrote: Hi folks, There are several unimplemented proposals on strengthening and expanding IndexedDB. The reason I have not implemented them yet is because I am not convinced they are necessary in toto. Here's my attempt at explaining why. I apologize in advance for not responding to individual proposals due to personal time constraints. I will however respond in detail on individual bug reports, e.g., as I did with 9975. I used the current editor's draft asynchronous API to understand where some of the remaining programming difficulties remain. Based on this attempt, I find several areas to strengthen, the most prominent of which is how we use transactions. Another is to add the concept of a catalog as a special kind of object store. Hi Nikunj, Thanks for replying! I'm very interested in getting this stuff sorted out pretty quickly as almost all other proposals in one way or another are affected by how this stuff develops. Here are the main areas I propose to address in the editor's spec: 1. It is time to separate the dynamic and static scope transaction creation so that they are asynchronous and synchronous respectively. I don't really understand what this means. What are dynamic and static scope transaction creation? Can you elaborate? This is the difference in the API in my email between openTransaction and transaction. Dynamic and static scope have been defined in the spec for a long time. In fact, dynamic transactions aren't explicitly specified anywhere. They are just mentioned. You need some amount of guessing to find out what they are or how to create one (i.e. pass an empty list of store names). 
Yes, that has been a big problem for us too.

Ah, I think I'm following you now. I'm actually not sure that we should have dynamic scope at all in the spec, I know Jeremy has expressed similar concerns. However if we are going to have dynamic scope, I agree it is a good idea to have separate APIs for starting dynamic-scope transactions from static-scope transactions.

I think it would simplify matters a lot if we were to drop dynamic transactions altogether. And if we do that, then we can also safely move the 'mode' parameter to the Transaction interface, since all the object stores in a static transaction can only be opened in the same mode.

Agreed.

2. Provide a catalog object that can be used to atomically add/remove object stores and indexes as well as modify version.

It seems to me that a catalog object doesn't really provide any functionality over the proposal in bug 10052? The advantage that I see with the syntax proposal in bug 10052 is that it is simpler. http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052 Can you elaborate on what the advantages are of catalog objects?

To begin with, 10052 shuts down the users of the database completely when only one is changing its structure, i.e., adding or removing an object store.

This is not the case. Check the steps defined for setVersion in [1]. At no point are databases shut down automatically. Only once all existing database connections are manually closed, either by calls to IDBDatabase.close() or by the user leaving the page, is the 'success' event from setVersion fired. [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

How can we make it less draconian?

The 'versionchange' event allows pages that are currently using the database to handle the change. The page can inspect the new version number supplied by the 'versionchange' event, and if it knows that it is compatible with a given upgrade, all it needs to do is to call db.close() and then immediately reopen the database using indexedDB.open().
The open call won't complete until the upgrade is finished. I had a question here: why does the page need to call 'close'? Any pending transactions will run to completion and new ones should not be allowed to start if a VERSION_CHANGE transaction is waiting to start. From the description of what 'close' does in 10052, I am not entirely sure it is needed. The problem we're trying to solve is this: Imagine an editor which stores documents in indexedDB. However in order to not overwrite the document using temporary changes, it only saves data when the user explicitly requests it, for example by pressing a 'save' button. This means that there can be a bunch of potentially important data living outside of indexedDB, in other parts of the application, such as in textfields and javascript variables. If we were to automatically
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 1:28 AM, Anne van Kesteren ann...@opera.com wrote: On Fri, 02 Jul 2010 23:05:41 +0200, Charlie Reis cr...@chromium.org wrote: Hi all-- I'm trying to understand one of the example use cases in the CORS specification and how the various rules about credentials apply, and I'm wondering whether there's an issue to resolve.

In the Not tainting the canvas element example at http://dev.w3.org/2006/waf/access-control/#use-cases, it looks like the images will be requested from http://narwhalart.example using img tags. If so, it's possible the user agent will send cookies on the GET request for the images. If I understand correctly, that implies that the HTTP response would have to include Access-Control-Allow-Credentials: true, because cookies are considered credentials. However, I also see that providing Access-Control-Allow-Credentials: true means that * cannot be used for Access-Control-Allow-Origin. The use case mentions that the server could make the images accessible to all origins, though.

Right. The server would have to know the origin of the request for that to work given the current constraints in the CORS specification. The current constraints are there as at least one implementor was afraid it would otherwise be too easy to configure the server in such a way as to reveal confidential information.

Is the server allowed to omit the Access-Control-Allow-Credentials header and use * for Access-Control-Allow-Origin, despite the presence of cookies on the image's GET request?

Not per CORS. In theory HTML5 could phrase the requirements around img fetching to be different, but that does not seem like a good idea.

Also, what is the reason that * is not allowed for responses that allow credentials? I've seen it documented in several places, but I'm not sure why that's the case. In cases like images or perhaps web fonts, it seems impractical to prevent credentials from being sent (unlike XMLHttpRequest).

See above.
On a similar note, are the image's GET requests required to carry Origin HTTP headers? They are required to carry an Origin header but the current requirements also indicate that the header will just give null rather than an origin. That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie
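The rule being debated above can be stated as a small decision function. This is a rough model of the constraint under discussion, not the spec's full algorithm: without credentials a wildcard suffices, but with credentials the server must echo the exact origin and send Access-Control-Allow-Credentials: true.

```javascript
// Simplified model of the CORS access check discussed above.
// headers: response headers with lowercase names (an assumption of
// this sketch, not something the wire format guarantees).
function corsAccessGranted(requestOrigin, withCredentials, headers) {
  const allowOrigin = headers['access-control-allow-origin'];
  const allowCreds = headers['access-control-allow-credentials'];
  if (!withCredentials) {
    // Anonymous request: wildcard or exact origin match is enough.
    return allowOrigin === '*' || allowOrigin === requestOrigin;
  }
  // Credentialed request: '*' is disallowed, so only an exact echo of
  // the origin plus the explicit credentials opt-in grants access.
  return allowOrigin === requestOrigin && allowCreds === 'true';
}

// Anonymous image fetch against a wildcard: granted.
corsAccessGranted('http://example.org', false,
  { 'access-control-allow-origin': '*' });
// Cookie-carrying fetch against a wildcard: denied under current CORS,
// which is exactly the problem raised for public images.
corsAccessGranted('http://example.org', true,
  { 'access-control-allow-origin': '*',
    'access-control-allow-credentials': 'true' });
```

Charlie's proposal amounts to making the first branch's wildcard acceptable in the credentialed case too, on the grounds that echoing the Origin header achieves the same effect anyway.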
[Reminder]: Last Call Working Drafts transition announcement of the API and Ontology for Media Resource 1.0
This is a *reminder* for the Last Call Working Draft transition announcement for:

* API for Media Resource 1.0
* Ontology for Media Resource 1.0

The Last Call period for these documents ends on July 11, 2010. However if you plan to send your review late, please do so ASAP. The MAWG needs to work on your comments and plans to resolve all the comments during its F2F meeting in early September.

Cheers,
Thierry

On 09/06/2010 09:22, Thierry MICHEL wrote:

Chairs and Team Contact,

(1) This is a Last Call Working Draft transition announcement for the following two Recommendation Track specifications:

(2) Document Titles and URIs
* API for Media Resource 1.0 http://www.w3.org/TR/2010/WD-mediaont-api-1.0-20100608
* Ontology for Media Resource 1.0 http://www.w3.org/TR/2010/WD-mediaont-10-20100608

(3) Instructions for providing feedback
If you wish to make comments regarding these specifications please send them to public-media-annotat...@w3.org which is an email list publicly archived at http://lists.w3.org/Archives/Public/public-media-annotation/ Please use [LC Comment API] or [LC Comment ONT] in the subject line of your email, regarding the specification you are commenting on.

(4) Review end date
The Last Call period for these documents ends on July 11, 2010.

(5) A reference to the group's decision to make this transition
The Media Annotations Working Group made the decision for this transition at its teleconference on 01 June 2010. Resolution: both documents can be moved to LC. Resolution: API and Ontology moving to LC; see http://www.w3.org/2010/06/01-mediaann-minutes.html

(6) Evidence that the document satisfies group's requirements.
Include a link to requirements: The Media Annotations Working Group believes that these specifications satisfy the requirements of the working group's charter at http://www.w3.org/2008/01/media-annotations-wg.html and the Use Cases and Requirements for Ontology and API for Media Resource 1.0 at http://www.w3.org/TR/2010/WD-media-annot-reqs-20100121/

(7) The names of groups with dependencies, explicitly inviting review from them.
The following groups are known or suspected to have dependencies on one or more of these specifications:
* Semantic Web Deployment Working Group
* Semantic Web Coordination Group
* Scalable Vector Graphics Working Group (SVG)
* Web Applications (WebApps) Working Group
* HyperText Markup Language (HTML) Working Group
* The Device API and Policy (DAP) Working Group

Also the following groups have liaisons on one or more of these specifications:
* Protocol for Web Description Resources (POWDER) Working Group
* Protocols and Formats Working Group

The Media Annotations Working Group requests review from each of these working groups. The chairs of the working groups listed have been copied on the distribution list of this transition announcement, as well as other individuals known to have expressed prior interest.

(8) Report of any Formal Objections
The Working Group received no Formal Objection during the preparation of these specifications.

(9) Patent Disclosure Page
Link can be found at http://www.w3.org/2004/01/pp-impl/42786/status

This Transition Announcement has been prepared according to the guidelines concerning such announcements at http://www.w3.org/2005/08/online_xslt/xslt?xmlfile=http://www.w3.org/2005/08/01-transitions.html&xslfile=http://www.w3.org/2005/08/transitions.xsl&docstatus=lc-wd-tr#trans-annc

Regards,
Thierry Michel
(on behalf of the Media Annotations Working Group chairs)
Team Contact for the Media Annotations WG.
Reminder: RfC: LCWD of API and Ontology for Media Resource 1.0; deadline 11 July 2010
The Media Annotations WG asked WebApps to review two of their LCWDs by July 11. Details below including the mail list for comments. -Art Barstow On 6/9/10 9:11 AM, Barstow Art (Nokia-CIC/Boston) wrote: All - the Media Annotations WG asked WebApps to review two of their LCWDs. Details below including the mail list for comments (deadline for comments is July 11). -Art Barstow Original Message Subject:Last Call Working Drafts transition announcement of the API and Ontology for Media Resource 1.0 Date: Wed, 9 Jun 2010 09:22:39 +0200 From: ext Thierry MICHELtmic...@w3.org CC: Chairs and Team Contact, (1) This is a Last Call Working Draft transition announcement for the following two Recommendation Track specifications: (2) Document Titles and URIs * API for Media Resource 1.0 http://www.w3.org/TR/2010/WD-mediaont-api-1.0-20100608 * Ontology for Media Resource 1.0 http://www.w3.org/TR/2010/WD-mediaont-10-20100608 (3) Instructions for providing feedback If you wish to make comments regarding these specifications please send them topublic-media-annotat...@w3.org which is an email list publicly archived at http://lists.w3.org/Archives/Public/public-media-annotation/ Please use [LC Comment API] or [LC Comment ONT]in the subject line of your email, regarding the specification you are commenting. (4) Review end date The Last Call period for these documents ends on July 11, 2010.
Re: [IndexedDB] Current editor's draft
On 7/6/2010 6:31 PM, Nikunj Mehta wrote: To begin with, 10052 shuts down the users of the database completely when only one is changing its structure, i.e., adding or removing an object store. How can we make it less draconian? Secondly, I don't see how that approach can produce atomic changes to the database. Thirdly, we shouldn't need to change version in order to perform database changes. Finally, I am not sure why you consider the syntax proposal simpler. Note that I am not averse to the version change event notification. In what use case would you want to change the database structure without modifying the version? That almost seems like a footgun for consumers. Cheers, Shawn
Re: [IndexedDB] Callback order
On Thu, Jun 24, 2010 at 4:40 AM, Jeremy Orlow jor...@chromium.org wrote: On Sat, Jun 19, 2010 at 9:12 AM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Jun 18, 2010 at 7:46 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Jun 18, 2010 at 7:24 PM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Jun 18, 2010 at 7:01 PM, Jeremy Orlow jor...@chromium.org wrote: I think determinism is most important for the reasons you cited. I think advanced, performance concerned apps could deal with either semantics you mentioned, so the key would be to pick whatever is best for the normal case. I'm leaning towards thinking firing in order is the best way to go because it's the most intuitive/easiest to understand, but I don't feel strongly about anything other than being deterministic. I definitely agree that firing in request order is the simplest, both from an implementation and usage point of view. However my concern is that we'd lose most of the performance benefits that cursors provide if we use that solution. What do you mean with apps could deal with either semantics? You mean that they could deal with the cursor case by simply being slower, or do you mean that they could work around the performance hit somehow? Hm. I was thinking they could save the value, call continue, then do work on it, but that'd of course only defer the slowdown for one iteration. So I guess they'd have to store up a bunch of data and then make calls on it. Indeed which could be bad for memory footprint. Of course, they'll run into all of these same issues with the sync API since things are of course done in order. So maybe trying to optimize this specific case for just the async API is silly? I honestly haven't looked at the sync API. But yes, I assume that it will in general have to serialize all calls into the database and thus generally not be as performant. I don't think that is a good reason to make the async API slower too though. 
But it's entirely possible that I'm overly concerned about cursor performance in general though. I won't argue too strongly that we need to prioritize cursor callback events until I've seen some numbers. If we want to simply define that callbacks fire in request order for now then that is fine with me. Yeah, I think we should get some hard numbers and think carefully about this before we make things even more complicated/nuanced. I ran some tests. Note that the test implementation is an approximation. It's somewhat optimistic in that it doesn't make the extra effort to ensure that cursor callbacks always run before other callbacks, but it's also somewhat pessimistic in that it always returns to the main event loop, even though that is often not needed. My guess is that in the end it's a pretty close approximation performance-wise. I've attached the testcase I used in case anyone wants to play around with it. It contains a fair amount of Mozilla-specific features (generators are awesome for asynchronous callbacks) and is written to the IndexedDB API that we currently have implemented, but it should be portable to other browsers. 
The currently proposed solution, of always running requests in the order they are made, including requests coming from cursor.continue(), gives the following results:
Plain iteration over 1 entries using cursor: 2400ms
Iteration over 1 entries using cursor, performing a join by calling getAll on an index for each iteration: 5400ms
For the proposed solution of prioritizing cursor.continue() callbacks over other callbacks:
Plain iteration over 1 entries using cursor: 1050ms
Iteration over 1 entries using cursor, performing a join by calling getAll on an index for each iteration: 1280ms
The reason that just plain iteration got faster is that we implemented the strict ordering by sending all requests to the thread the database runs on, and then having the database thread process all requests in order and send them back to the requesting thread. So for plain iteration it basically just means a roundtrip to the indexedDB thread and back. Based on these numbers, I think we should prioritize IDBCursor.continue() callbacks, as for the join example this results in an over-4x speedup. / Jonas
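[Editor's note: the two orderings being compared can be modeled with a toy queue. This is a simulation added for illustration, not the IndexedDB API; fireOrder, the request tags, and the request names are all invented.]

```javascript
// Toy model of callback dispatch order. Each pending request is tagged
// either 'cursor' (a cursor.continue() callback) or 'other' (e.g. a
// getAll()). In "request order" mode callbacks fire strictly FIFO; in
// "prioritized" mode any pending cursor callback jumps the queue.
function fireOrder(requests, prioritizeCursor) {
  const queue = [...requests]; // work on a copy
  const fired = [];
  while (queue.length) {
    let i = 0;
    if (prioritizeCursor) {
      const c = queue.findIndex(r => r.kind === 'cursor');
      if (c !== -1) i = c;
    }
    fired.push(queue.splice(i, 1)[0].name);
  }
  return fired;
}

const pending = [
  { kind: 'other',  name: 'getAll#1' },
  { kind: 'cursor', name: 'continue#1' },
  { kind: 'other',  name: 'getAll#2' },
  { kind: 'cursor', name: 'continue#2' },
];

console.log(fireOrder(pending, false));
// strict request order: getAll#1, continue#1, getAll#2, continue#2
console.log(fireOrder(pending, true));
// prioritized: continue#1, continue#2, getAll#1, getAll#2
```

The point of the prioritized mode is that a cursor iteration never stalls behind expensive unrelated results, which is where the join speedup in the numbers above comes from.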
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie -- Cheers, --MarkM
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie -- Cheers, --MarkM
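[Editor's note: since CORS disallows "*" together with credentials, the workaround both messages allude to is for the server to echo the request's Origin header back. A minimal sketch of that server-side decision; corsHeadersFor is a hypothetical helper, not part of any spec.]

```javascript
// Build CORS response headers for a given request Origin.
function corsHeadersFor(requestOrigin, { allowCredentials = false } = {}) {
  const headers = {};
  if (allowCredentials) {
    // "Access-Control-Allow-Origin: *" is invalid when
    // Access-Control-Allow-Credentials is true, so echo the origin.
    headers['Access-Control-Allow-Origin'] = requestOrigin;
    headers['Access-Control-Allow-Credentials'] = 'true';
    // The response now varies by Origin, so shared caches must be told.
    headers['Vary'] = 'Origin';
  } else {
    // Public, credential-less resource: the wildcard is fine.
    headers['Access-Control-Allow-Origin'] = '*';
  }
  return headers;
}
```

This echoing is exactly why the thread argues that allowing "*" with credentials would be roughly equivalent: any server can already grant every origin by reflecting the header.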
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
Because it's undesirable to prevent the browser from sending cookies on an img request, Why? I can understand why you can't do it today - but why is this undesirable even for new applications? Ad tracking? ~devdatta On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie -- Cheers, --MarkM
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 4:14 PM, Devdatta Akhawe dev.akh...@gmail.com wrote: Because it's undesirable to prevent the browser from sending cookies on an img request, Why? I can understand why you can't do it today - but why is this undesirable even for new applications? Ad tracking? ~devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. The benefit to allowing * with credentials is that it lets CORS work with the existing browser request logic for images and other subresources, where cookies are currently sent with the request. Charlie On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. 
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
hmm, I think I quoted the wrong part of your email. I wanted to ask why would it be undesirable to make CORS GET requests cookie-less. It seems the argument here is reduction of implementation work. Is this the only one? Note that even AnonXmlHttpRequest intends to make GET requests cookie-less. Regards devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. The benefit to allowing * with credentials is that it lets CORS work with the existing browser request logic for images and other subresources, where cookies are currently sent with the request. Charlie On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. 
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
It's not just implementation effort-- as I mentioned, it's potentially a compatibility question. If you are proposing not sending cookies on any cross-origin images (or other potential candidates for CORS), do you have any data about which sites that might affect? Personally, I would love to see cross-origin subresource requests change to not using cookies, but that could break existing web sites that include subresources from partner sites, etc. Is there a proposal or discussion about this somewhere? In the mean time, the canvas tainting example in the spec seems difficult to achieve. Charlie On Wed, Jul 7, 2010 at 5:05 PM, Devdatta Akhawe dev.akh...@gmail.comwrote: hmm, I think I quoted the wrong part of your email. I wanted to ask why would it be undesirable to make CORS GET requests cookie-less. It seems the argument here is reduction of implementation work. Is this the only one? Note that even AnonXmlHttpRequest intends to make GET requests cookie-less. Regards devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. The benefit to allowing * with credentials is that it lets CORS work with the existing browser request logic for images and other subresources, where cookies are currently sent with the request. Charlie On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] 
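[Editor's note: for context on the canvas tainting the thread keeps returning to: drawing a cross-origin image without a valid CORS grant taints the canvas, and pixel reads then throw. A rough sketch using the img crossOrigin attribute that HTML gained after this thread; the image URL is made up.]

```javascript
// Sketch only: the crossOrigin attribute postdates this 2010 thread.
var img = new Image();
img.crossOrigin = 'use-credentials'; // request a credentialed CORS fetch
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // If the server did not grant CORS access (e.g. it sent "*" with
  // credentials, which is invalid), the canvas is tainted and this
  // read throws a SecurityError.
  ctx.getImageData(0, 0, 1, 1);
};
img.src = 'https://images.example/photo.png'; // hypothetical cross-origin URL
```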
Re: [IndexedDB] Current editor's draft
On 7/7/2010 12:27 AM, Jonas Sicking wrote: This interface allows asynchronously requesting more objectStores to be locked. The author must take care whenever calling openObjectStores that the request might fail due to deadlocks. But as previously stated, I think this adds too much complexity and too much racyness to the API. And so I'd prefer to not add this. I feel like we should not be creating an API that allows for deadlocks to happen. Especially with an API that allows for races to happen (which we have) such that it will be hard for web developers to test to ensure they do not have deadlocks. Cheers, Shawn
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
It's not just implementation effort-- as I mentioned, it's potentially a compatibility question. If you are proposing not sending cookies on any cross-origin images (or other potential candidates for CORS), do you have any data about which sites that might affect? It's not clear to me how it would affect sites. It would be like the user cleared his cache and made a request. regards devdatta Personally, I would love to see cross-origin subresource requests change to not using cookies, but that could break existing web sites that include subresources from partner sites, etc. Is there a proposal or discussion about this somewhere? In the mean time, the canvas tainting example in the spec seems difficult to achieve. Charlie On Wed, Jul 7, 2010 at 5:05 PM, Devdatta Akhawe dev.akh...@gmail.com wrote: hmm, I think I quoted the wrong part of your email. I wanted to ask why would it be undesirable to make CORS GET requests cookie-less. It seems the argument here is reduction of implementation work. Is this the only one? Note that even AnonXmlHttpRequest intends to make GET requests cookie-less. Regards devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. The benefit to allowing * with credentials is that it lets CORS work with the existing browser request logic for images and other subresources, where cookies are currently sent with the request. Charlie On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. 
Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie -- Cheers, --MarkM
New draft of FileSystem API posted
I've posted a new draft of File API: Directories and System [1]. In this draft I've rolled in quite a bit of feedback that I received since first posting it on DAP--many apologies for the delay. This is the first draft produced since we agreed to move this spec from DAP to WebApps; I hope those of you who have time will give it a look and let me know what you think. In general I've tried to address any comment I was sent and had not already addressed via email. The few that didn't make it in, I've responded to below. My thanks to Robin Berjon and Mike Clement for all their feedback. Robin: - data stored there by the application should not be deleted by the UA without user intervention, UA should require permission from the user, The application may of course delete it at will - these sound like real conformance statements, therefore SHOULD, SHOULD NOT, and MAY. Those are in a non-normative section; is that language still appropriate there? Robin: [discussion about speccing the URI format] Left as an open issue. Mike: [discussion about multiple sandboxes per origin] I think that would be very easy and clean to add later if desired, and in the mean time, one can use subdirectories. Mike: getFile/getDirectory are a bit overloaded. How about including methods like exists(), createFile() and createDirectory()? Though these methods are easily implemented in terms of getFile/getDirectory, I always prefer more direct API methods that help make the code easier to understand. I expect, though, that you are attempting to be as low level as possible here. As Robin pointed out, adding extra round-trips will slow things down. Also, it can encourage race conditions. These are easy for libraries to implement via wrappers. Mike: [request for creation time in getMetadata] It may be hard to support reliably cross-platform [2]. Robin: [specifying a single locale everywhere] I don't think that'll make folks very happy if it's not their locale. If I e.g. 
try to force my locale on Turkish Windows users, they're going to see some interesting errors trying to share files with apps outside the browser, or for that matter even saving certain groups of files from inside the browser. Eric [1] http://dev.w3.org/2009/dap/file-system/file-dir-sys.html [2] http://en.wikipedia.org/wiki/MAC_times
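[Editor's note: as a concrete example of the getFile overloading Mike's comment refers to, here is a sketch against the draft linked above. Names follow that draft and may have changed since; the quota and file name are made up.]

```javascript
// Sketch against the draft File API: Directories and System.
window.requestFileSystem(TEMPORARY, 5 * 1024 * 1024, function (fs) {
  // With {create: true}, getFile creates the file if it is absent;
  // with {create: false}, it acts as an existence check, invoking the
  // error callback if the file is missing. One entry point thus covers
  // what separate exists() and createFile() methods would do, at the
  // cost of callers reading the flags to understand the intent.
  fs.root.getFile('log.txt', { create: true, exclusive: false },
    function (fileEntry) { console.log('got', fileEntry.fullPath); },
    function (err) { console.error('getFile failed', err); });
}, function (err) { console.error('requestFileSystem failed', err); });
```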
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 5:53 PM, Charlie Reis cr...@chromium.org wrote: It's not just implementation effort-- as I mentioned, it's potentially a compatibility question. If you are proposing not sending cookies on any cross-origin images (or other potential candidates for CORS), do you have any data about which sites that might affect? Personally, I would love to see cross-origin subresource requests change to not using cookies, but that could break existing web sites that include subresources from partner sites, etc. Is there a proposal or discussion about this somewhere? I believe we have discussed this in the past and been uncertain as to whether or not this would break things on the web; we have very little real-world data as to how CORS is currently being used (if at all). I think I mentioned the possibility of instrumenting Chrome to look into this, but haven't yet done so. -- Dirk In the mean time, the canvas tainting example in the spec seems difficult to achieve. Charlie On Wed, Jul 7, 2010 at 5:05 PM, Devdatta Akhawe dev.akh...@gmail.com wrote: hmm, I think I quoted the wrong part of your email. I wanted to ask why would it be undesirable to make CORS GET requests cookie-less. It seems the argument here is reduction of implementation work. Is this the only one? Note that even AnonXmlHttpRequest intends to make GET requests cookie-less. Regards devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. 