Re: New tests submitted by Microsoft for WebApps specs
Looks like the WorkerGlobalScope_ErrorEvent_*.htm tests are still in error as mentioned below. dave On Tue, Sep 20, 2011 at 10:53 AM, Travis Leithead travis.leith...@microsoft.com wrote: Thanks! We'll see about getting these updated... From: David Levin [mailto:le...@google.com] Sent: Monday, September 19, 2011 6:33 PM To: Adrian Bateman Cc: Web Applications Working Group WG (public-webapps@w3.org); Israel Hilerio; Travis Leithead; Brian Raymor; Kris Krueger Subject: Re: New tests submitted by Microsoft for WebApps specs On Tue, Sep 13, 2011 at 6:26 PM, Adrian Bateman adria...@microsoft.com wrote: WebWorkers (51 tests/assertions) Changeset: http://dvcs.w3.org/hg/webapps/rev/7b0ba70f69b6 Tests: http://w3c-test.org/webapps/Workers/tests/submissions/Microsoft/ We believe the tests are all accurate but look forward to wider review from the group. IE10 PP3 does not pass all the tests and we are working to fix the bugs that cause failures. Web Worker test issues: issue 1: WorkerGlobalScope_ErrorEvent_*.htm uses the onerror function and expects to get an error event, but it should instead get 3 arguments. See http://www.whatwg.org/specs/web-apps/current-work/complete/workers.html#runtime-script-errors-0 and http://www.whatwg.org/specs/web-apps/current-work/complete/webappapis.html#report-the-error issue 2: postMessage_clone_port_error.htm expects to get INVALID_STATE_ERR but should expect to get DATA_CLONE_ERR. http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#posting-messages
Re: Shared workers - use .source instead of .ports[0] ?
What is the backwards compatibility story for websites already using SharedWorkers with the interface that has been in the spec for over a year now? There are sites using them. For example, Google Docs uses them and Google Web Toolkit exposes them. dave
Re: [FileAPI] Deterministic release of Blob proposal
It seems like this may be setting up a pattern for other DOM objects which are large (like video/audio). When applied in this context, is close still a good verb for them? video.close(); dave PS I'm trying to not bikeshed too badly by avoiding a new name suggestion and allowing for the fact that close may be an ok name. On Tue, Mar 6, 2012 at 12:29 PM, Eric U er...@google.com wrote: After a brief internal discussion, we like the idea over in Chrome-land. Let's make sure that we carefully spec out the edge cases, though. See below for some. On Fri, Mar 2, 2012 at 4:54 PM, Feras Moussa fer...@microsoft.com wrote: At TPAC we discussed the ability to deterministically close blobs with a few others. As we've discussed in the createObjectURL thread[1], a Blob may represent an expensive resource (e.g. expensive in terms of memory, battery, or disk space). At present there is no way for an application to deterministically release the resource backing the Blob. Instead, an application must rely on the resource being cleaned up through a non-deterministic garbage collector once all references have been released. We have found that not having a way to deterministically release the resource causes a performance impact for a certain class of applications, and is especially important for mobile applications or devices with more limited resources. In particular, we've seen this become a problem for media-intensive applications which interact with a large number of expensive blobs. For example, a gallery application may want to cycle through displaying many large images downloaded through websockets, and without a deterministic way to immediately release the reference to each image Blob, can easily begin to consume vast amounts of resources before the garbage collector is executed. To address this issue, we propose that a close method be added to the Blob interface.
When called, the close method should release the underlying resource of the Blob, and future operations on the Blob will return a new error, a ClosedError. This allows an application to signal when it's finished using the Blob. To support this change, the following changes in the File API spec are needed: * In section 6 (The Blob Interface) - Addition of a close method. When called, the close method releases the underlying resource of the Blob. Close renders the blob invalid, and further operations such as URL.createObjectURL or the FileReader read methods on the closed blob will fail and return a ClosedError. If there are any non-revoked URLs to the Blob, these URLs will continue to resolve until they have been revoked. - For the slice method, state that the returned Blob is a new Blob with its own lifetime semantics – calling close on the new Blob is independent of calling close on the original Blob. * In section 8 (The FileReader Interface) - State that the FileReader reads directly over the given Blob, and not a copy with an independent lifetime. * In section 10 (Errors and Exceptions) - Addition of a ClosedError. If the File or Blob has had the close method called, then for asynchronous read methods the error attribute MUST return a “ClosedError” DOMError and synchronous read methods MUST throw a ClosedError exception. * In section 11.8 (Creating and Revoking a Blob URI) - For createObjectURL – If this method is called with a closed Blob argument, then user agents must throw a ClosedError exception. Similarly to how slice() clones the initial Blob to return one with its own independent lifetime, the same notion will be needed in other APIs which conceptually clone the data – namely FormData, any place the Structured Clone Algorithm is used, and BlobBuilder.
What about: XHR.send(blob); blob.close(); or iframe.src = createObjectURL(blob); blob.close(); In the second example, if we say that the iframe does copy the blob, does that mean that closing the blob doesn't automatically revoke the URL, since it points at the new copy? Or does it point at the old copy and fail? Similarly to how FileReader must act directly on the Blob’s data, the same notion will be needed in other APIs which must act on the data - namely XHR.send and WebSocket. These APIs will need to throw an error if called on a Blob that was closed and the resources are released. We’ve recently implemented this in experimental builds and have seen measurable performance improvements. The feedback we heard from our discussions with others at TPAC regarding our proposal to add a close() method to the Blob interface was that objects in the web platform potentially backed by expensive resources should have a deterministic
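The close() semantics proposed above can be sketched with a plain wrapper class. This is a hypothetical illustration, not the File API: ClosableBlob and its read() method are made-up names used to show the proposed lifetime rules (close releases the backing store, later reads fail with a ClosedError, and a slice taken earlier keeps its own independent lifetime).

```javascript
// Hypothetical sketch of the proposed Blob.close() semantics.
// ClosableBlob, read(), and ClosedError are illustrative names only.
class ClosedError extends Error {
  constructor() {
    super("Blob has been closed");
    this.name = "ClosedError";
  }
}

class ClosableBlob {
  constructor(bytes) {
    this._bytes = bytes;
    this._closed = false;
  }
  get size() {
    return this._closed ? 0 : this._bytes.length;
  }
  slice(start, end) {
    if (this._closed) throw new ClosedError();
    // Per the proposal, slice() returns a new blob with its own lifetime.
    return new ClosableBlob(this._bytes.slice(start, end));
  }
  read() {
    if (this._closed) throw new ClosedError();
    return this._bytes;
  }
  close() {
    // Deterministically release the backing resource.
    this._closed = true;
    this._bytes = null;
  }
}

const blob = new ClosableBlob(new Uint8Array([1, 2, 3, 4]));
const slice = blob.slice(0, 2); // independent lifetime
blob.close();
// slice is still readable; further operations on blob throw ClosedError
```

The key design point from the thread is visible here: closing the original does not invalidate the slice, so APIs that conceptually clone the data (FormData, structured clone) would behave like slice() rather than like FileReader.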
Re: Synchronous postMessage for Workers?
On Fri, Nov 18, 2011 at 8:16 AM, Glenn Maynard gl...@zewt.org wrote: On Thu, Nov 17, 2011 at 10:33 PM, David Levin le...@chromium.org wrote: Ah so the proposal is really just adding a new method on DedicatedWorkerGlobalScope which sends a synchronous message and something corresponding on Worker which can respond to this. There's no need for a new sending method; only a receiving method. To reuse the original example: postMessage({action: "prompt_user", prompt: "How about a nice game of chess?"}); var msg = waitForMessage(); if (msg && msg.data) { chess_game.begin(); } The other side is as usual: worker.onmessage = function(e) { worker.postMessage(true); } without caring which API the worker is using to receive the response. So the primary use case is code in the worker which has no other (async) messages coming in? dave
Re: Synchronous postMessage for Workers?
It seems like this mechanism would deadlock a worker if two workers send each other a synchronous message. dave On Thu, Nov 17, 2011 at 10:37 AM, Joshua Bell jsb...@chromium.org wrote: Jonas and I were having an offline discussion regarding the synchronous Indexed Database API and noting how clean and straightforward it will allow Worker scripts to be. One general Worker issue we noted - independent of IDB - was that there are cases where Worker scripts may need to fetch data from the Window. This can be done today using bidirectional postMessage, but of course this requires the Worker to then be coded in now-common asynchronous JavaScript fashion, with either a tangled mess of callbacks or some sort of Promises/Futures library, which removes some of the benefits of introducing sync APIs to Workers in the first place. Wouldn't it be lovely if the Worker script could simply make a synchronous call to fetch data from the Window? GTNW.prototype.end = function () { var result = self.sendMessage({action: "prompt_user", prompt: "How about a nice game of chess?"}); if (result) { chess_game.begin(); } } The requirement would be that the Window side is asynchronous (of course). Continuing the silly example above, the Window script responds to the message by fetching some new HTML UI via async XHR, adding it to the DOM, and only after user input and validation events is a response sent back to the Worker, which proceeds merrily on its way. I don't have a specific API suggestion in mind. On the Worker side it should take the form of a single blocking call taking the data to be passed and possibly a timeout, and allowing a return value (on timeout return undefined or throw?). On the Window side it could be a new event on Worker which delivers a Promise-type object which the Window script can later fulfill (or break). Behavior on multiple event listeners would need to be defined (all get the same Promise, first fulfill wins, others throw?).
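The proposed flow can be sketched as a single-threaded simulation. sendMessage and FakeSyncChannel are hypothetical names from this thread, not a real API; a real implementation would block the worker thread while the window answers asynchronously, but here the "window side" responds synchronously so the control flow fits in one script.

```javascript
// Single-threaded simulation of the proposed worker-side blocking call.
// FakeSyncChannel and sendMessage are illustrative names only.
class FakeSyncChannel {
  constructor(responder) {
    // responder plays the role of the Window-side message handler.
    this.responder = responder;
  }
  // Worker side: post a message and "block" until the window responds.
  // (Real blocking is impossible in one thread; this is a stand-in.)
  sendMessage(data) {
    return this.responder(data);
  }
}

// Window side: asynchronous in the real proposal; synchronous stand-in here.
const channel = new FakeSyncChannel((msg) => {
  if (msg.action === "prompt_user") return true; // pretend the user said yes
  return undefined; // e.g. an unrecognized message, or a timeout
});

const result = channel.sendMessage({
  action: "prompt_user",
  prompt: "How about a nice game of chess?",
});
```

The deadlock concern raised above is easy to see in this shape: if the responder itself called sendMessage back toward the worker, both sides would be waiting on each other, which is why the thread converges on only letting child workers block on parents.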
Re: Synchronous postMessage for Workers?
On Thu, Nov 17, 2011 at 5:05 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 17, 2011 at 2:07 PM, David Levin le...@chromium.org wrote: It seems like this mechanism would deadlock a worker if two workers send each other a synchronous message. Indeed. We can only allow child workers to block on parent workers. Never the other way around. So the API would have to know who is listening on the other end of the port and throw if it isn't a parent? dave
Re: Synchronous postMessage for Workers?
On Thu, Nov 17, 2011 at 6:41 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 17, 2011 at 6:05 PM, David Levin le...@chromium.org wrote: On Thu, Nov 17, 2011 at 5:05 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Nov 17, 2011 at 2:07 PM, David Levin le...@chromium.org wrote: It seems like this mechanism would deadlock a worker if two workers send each other a synchronous message. Indeed. We can only allow child workers to block on parent workers. Never the other way around. So the API would have to know who is listening on the other end of the port and throw if it isn't a parent? I'm not convinced that we can do this with ports in a sane manner. Only on dedicated workers. There you always know who is listening on the other end. I.e. only on the DedicatedWorkerGlobalScope/Worker interfaces. Ah so the proposal is really just adding a new method on DedicatedWorkerGlobalScope which sends a synchronous message and something corresponding on Worker which can respond to this. This proposal as you see it does nothing for ports or shared workers. That seems to make more sense now. dave
Re: New tests submitted by Microsoft for WebApps specs
On Tue, Sep 13, 2011 at 6:26 PM, Adrian Bateman adria...@microsoft.com wrote: WebWorkers (51 tests/assertions) Changeset: http://dvcs.w3.org/hg/webapps/rev/7b0ba70f69b6 Tests: http://w3c-test.org/webapps/Workers/tests/submissions/Microsoft/ We believe the tests are all accurate but look forward to wider review from the group. IE10 PP3 does not pass all the tests and we are working to fix the bugs that cause failures. Web Worker test issues: issue 1: WorkerGlobalScope_ErrorEvent_*.htm uses the onerror function and expects to get an error event, but it should instead get 3 arguments. See http://www.whatwg.org/specs/web-apps/current-work/complete/workers.html#runtime-script-errors-0 and http://www.whatwg.org/specs/web-apps/current-work/complete/webappapis.html#report-the-error issue 2: postMessage_clone_port_error.htm expects to get INVALID_STATE_ERR but should expect to get DATA_CLONE_ERR. http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#posting-messages
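The shape of the handler issue 1 refers to can be sketched as follows. This is a minimal single-threaded simulation (makeWorkerErrorHandler and the log array are illustrative, not spec names): per the "runtime script errors" section cited above, the worker's onerror is invoked with three positional arguments (message, filename, lineno) rather than an ErrorEvent object, and returning true means the error is handled.

```javascript
// Sketch of the three-argument onerror signature the tests should expect.
// makeWorkerErrorHandler is a hypothetical helper for demonstration.
function makeWorkerErrorHandler(log) {
  return function onerror(message, filename, lineno) {
    // The handler receives three arguments, not an ErrorEvent.
    log.push({ message, filename, lineno });
    return true; // true = error handled, do not propagate further
  };
}

// Simulate the user agent invoking the handler on a runtime script error.
const log = [];
const onerror = makeWorkerErrorHandler(log);
const handled = onerror("Uncaught Error: boom", "worker.js", 12);
```

(The ErrorEvent form with message/filename/lineno properties is what a parent page's Worker object sees; inside the worker global scope it is the three-argument call, which is exactly the distinction the tests got wrong.)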
cross origin workers [was Re: [workers] Moving the Web Workers spec back to Last Call WD]
On Mon, Feb 14, 2011 at 2:18 AM, Ian Hickson i...@hixie.ch wrote: On Sat, 12 Feb 2011, Arthur Barstow wrote: Regarding re-publishing the Web Workers spec [ED] as a new Last Call Working Draft ... Bugzilla shows one open bug [Bugs]: 11818 - As documented in the Creating workers section, a worker *must* be an external script. http://www.w3.org/Bugs/Public/show_bug.cgi?id=11818 What high priority work must be done such that this spec is ready to be re-published as a new Last Call Working draft? None, to my knowledge. The bug above is a feature request. In particular, what are the proposals, plans and timeline to address the above bug? I expect to address the issue of supporting data: URL scripts in workers at the same time as adding the ability to do cross-origin shared workers, currently estimated to be in 6 to 18 months, depending on browser implementation progress on other features in the same timeframe. I've been asked by folks who work on Google Docs about cross-origin shared workers, so I wanted to bring up this issue again -- also in light of the fact that it has been ~6 months :). dave
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Tue, Jun 21, 2011 at 11:43 PM, Glenn Maynard gl...@zewt.org wrote: On Wed, Jun 22, 2011 at 1:57 AM, David Levin le...@chromium.org wrote: Let's say the call doesn't throw when given a type B that isn't transferrable. Let's also say someone later changes the JavaScript code and uses B after the postMessage call. Everything works. No throw is done and B isn't gutted because it isn't transferrable. Throwing for unsupported objects will break common, reasonable uses, making everyone jump through hoops to use transfer. Not throwing will only break code that seriously misuses the API, by listing objects for transfer and then continuing to use the object. Code always seriously misuses APIs. See Raymond Chen's blog for numerous examples if you have any doubt (http://blogs.msdn.com/b/oldnewthing/).
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Wed, Jun 22, 2011 at 12:26 AM, Glenn Maynard gl...@zewt.org wrote: On Wed, Jun 22, 2011 at 3:14 AM, David Levin le...@chromium.org wrote: On Tue, Jun 21, 2011 at 11:43 PM, Glenn Maynard gl...@zewt.org wrote: On Wed, Jun 22, 2011 at 1:57 AM, David Levin le...@chromium.org wrote: Let's say the call doesn't throw when given a type B that isn't transferrable. Let's also say someone later changes the JavaScript code and uses B after the postMessage call. Everything works. No throw is done and B isn't gutted because it isn't transferrable. Throwing for unsupported objects will break common, reasonable uses, making everyone jump through hoops to use transfer. Not throwing will only break code that seriously misuses the API, by listing objects for transfer and then continuing to use the object. Code always seriously misuses APIs. See Raymond Chen's blog for numerous examples if you have any doubt (http://blogs.msdn.com/b/oldnewthing/). You didn't respond to the rest of my mail, where I pointed out that the misuse case will end up broken anyway as everyone will likely use a wrapper to get the no-exception behavior. You'd have to, to support older browsers which don't support transfer for as many types. If you insist: Making people use a helper function like that is just making them jump an unnecessary hoop. It makes them jump through another hoop to potentially misuse the API. It shouldn't be simple to misuse APIs. Also, I haven't seen mention of Transferrable elsewhere in the final proposed solution which you used in that code. dave
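The wrapper being debated above can be sketched in a few lines. filterTransferable is a hypothetical helper name: it drops transfer-list entries the current browser cannot transfer, which is exactly what produces the no-exception behavior under discussion (at the cost of silently cloning those objects instead).

```javascript
// Hypothetical wrapper from the thread: keep only the entries this
// environment can actually transfer, so postMessage never throws.
function filterTransferable(list, supportedTypes) {
  return list.filter((obj) => supportedTypes.some((T) => obj instanceof T));
}

// Assumption for the example: this "browser" can only transfer ArrayBuffer.
const supported = [ArrayBuffer];

const buf = new ArrayBuffer(8);
const notTransferable = { plain: "object" };

// The plain object is silently dropped from the transfer list (it would be
// structured-cloned as usual); only buf remains.
const transferList = filterTransferable([buf, notTransferable], supported);
```

This makes the trade-off concrete: the wrapper avoids exceptions across browser versions, but it also hides the very misuse (listing an object for transfer and then continuing to use it) that a throwing API would surface during debugging.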
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Wed, Jun 22, 2011 at 2:31 AM, Glenn Maynard gl...@zewt.org wrote: On Wed, Jun 22, 2011 at 4:33 AM, David Levin le...@chromium.org wrote: Making people use a helper function like that is just making them jump an unnecessary hoop. It makes them jump through another hoop to potentially misuse the API. No, it's another hoop that *everyone* has to jump through to use the API at all, so code you write in browser N+1 would also work in browser N where fewer classes support transfer. Jumping that hoop is not the misuse; it's a direct requirement of the API. *Because* everyone would be doing that, the misuse will also be possible. The throwing aspect is a useful debugging tool to warn developers about misuse that they wouldn't be aware of otherwise. (Not everyone will read the spec as closely as you do -- many will just write code to get the job done and declare it done when it works -- a non-throwing API may still be fast but will be deceptive in allowing parameters and not acting on them, leading to the discussed misuse by accident -- it happens all the time, even to people who are trying to be careful.) Your given code isn't the only possible solution. However, it is one that imposes M choose N combinations for the code, resulting in lots of testing combinations. I could easily envision several alternatives to your snippet which would be better. For example, one could write code that either sends all parameters using the fast path or none. (That's only two test cases, and they could have a debug switch to make everything be passed in the fast method to ensure that it worked correctly -- note this isn't possible if the API doesn't throw.) I'm fairly certain that we won't agree. Due to my many experiences with people including myself misusing APIs and not realizing it, I strongly prefer an API that warns of misuse. You don't.
I don't see either of us saying anything new, so I don't plan to continue this discussion further because I believe that we have both laid out our points of view in vivid detail :). best wishes, dave
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Tue, Jun 21, 2011 at 9:27 PM, Glenn Maynard gl...@zewt.org wrote: What happens if an object is included in the second list that doesn't support transfer? Ian said that it would throw, but I'm not sure that's best. If it doesn't throw, doesn't that introduce the backwards compat issue when something new is supported that wasn't before? Suppose Firefox N supports transferring ArrayBuffer, and Firefox N+1 adds support for transferring ImageData. Developers working with Firefox N+1 write the following: postMessage(obj, [obj.anArrayBuffer, obj.anImageData]); On Firefox N+1, both objects will be transferred, mutating the sender's copy. In Firefox N, only the ArrayBuffer will be transferred, and the ImageData is cloned. Developers don't need to write wrappers to scan through the list and remove objects which don't support transfer. They don't have to test code with every version of browsers to make sure that their transfer lists are created correctly for every possible combination of supported object transfers. They just list the objects which they're prepared to have mutated and will discard after sending the message. postMessage([thisArrayBufferIsCopied, thisPortIsTransferred], [thisPortIsTransferred]); This also has the benefit of being backwards-compatible, at least if I keep an attribute on the event on the other side called ports that includes the transferred objects (maybe another attribute should be included that also returns the same array, since 'ports' would now be a confusing misnomer). I don't think the ports array should be expanded to include all transferred objects. This would turn the list into part of the user's messaging protocol, which I think is inherently less clear. Protocols using this API will be much cleaner when they're based on a single object graph. I'd recommend that the ports array contain only the transferred ports, for backwards-compatibility; anything else transferred should be accessed via the object graph. 
This also means that the transfer list has no visible effects to receivers. Senders can choose to add or not add objects for transfer based on their needs, without having to worry that the receiver of the message might depend on the transfer list having a particular format (with the exception of message ports). Having the transfer list both act as part of the messaging protocol *and* change the side-effects for the sender will create conflicts, where you'll want to clone (not transfer) an object but be forced to include it in the transfer list to satisfy the receiver. -- Glenn Maynard
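The transfer-list semantics argued for above can be observed directly today: structuredClone uses the same structured clone algorithm as postMessage, and its { transfer } option is the list form this thread converged on. A minimal sketch (runs in any modern engine that exposes the global structuredClone, e.g. Node 17+):

```javascript
// Demonstrating clone vs. transfer with the structured clone algorithm.
const buf = new ArrayBuffer(16);

// 1. Plain clone: the sender keeps its buffer intact.
const copy = structuredClone({ payload: buf });

// 2. Transfer: the buffer is listed in the transfer list, so ownership
//    moves to the clone and the sender's buffer is detached ("gutted").
const moved = structuredClone({ payload: buf }, { transfer: [buf] });

// After the transfer, buf.byteLength is 0 on the sending side, while
// moved.payload is a live 16-byte buffer on the receiving side.
```

This is the backwards-compat hazard discussed earlier in the thread made concrete: whether an entry in the transfer list is honored or silently cloned determines whether the sender's object is detached afterward, which is exactly why silently ignoring unsupported entries can mask bugs.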
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Tue, Jun 21, 2011 at 10:48 PM, Glenn Maynard gl...@zewt.org wrote: On Wed, Jun 22, 2011 at 1:25 AM, David Levin le...@chromium.org wrote: On Tue, Jun 21, 2011 at 9:27 PM, Glenn Maynard gl...@zewt.org wrote: What happens if an object is included in the second list that doesn't support transfer? Ian said that it would throw, but I'm not sure that's best. If it doesn't throw, doesn't that introduce the backwards compat issue when something new is supported that wasn't before? The backwards-compat issue that we've talked about before is when transfer happens without opting into it explicitly for each object or type. For example, transferEverythingPossible([A, B]) would cause this problem: if A supports transfer when you write the code and B does not, then B gaining support a year later might break your code. I can't think of backwards-compat issues with not throwing. Can you give an example? You just gave it! :) Let's say the call doesn't throw when given a type B that isn't transferrable. Let's also say someone later changes the JavaScript code and uses B after the postMessage call. Everything works. No throw is done and B isn't gutted because it isn't transferrable. Now let's say Firefox supports transferring B; the site breaks because it was never tested like this, so Firefox has to choose between compatibility and supporting transferring B. (Of course, this will be in some sample that everyone copies :).) dave
Re: FW: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Fri, Jun 10, 2011 at 12:50 PM, Travis Leithead travis.leith...@microsoft.com wrote: From: Kenneth Russell [mailto:k...@google.com], Sent: Thursday, June 09, 2011 11:15 PM On Thu, Jun 9, 2011 at 10:54 PM, Travis Leithead travis.leith...@microsoft.com wrote: Honestly, there's something about this whole discussion that just doesn't feel right. It looks like we're trying to graft-in this new concept of transfer of ownership into the existing postMessage semantics (i.e., object cloning). Any way I try to make it work, it just looks like peaches grafted into an apple tree. What happened to Jonas' other proposal about a new API? I'd like to direct some mental energy into that proposal. Complexity comes in many forms and shapes. I much more like the idea of explicit APIs that make it clear what happens and make it hard to shoot yourself in the foot. Yes, it can involve more typing, but if it results in more resilient code which contains fewer subtle bugs, then I think we have designed the API well. / Jonas Ex: void postMessageAndTransfer([in] any transferOwnershipToDestination...); We're only talking about a scenario that makes sense primarily for Web Workers and applies only to certain types like ArrayBuffer, CanvasPixelArray+ImageData, Blob, File, etc., that have large underlying memory buffers. We don't really need to support JavaScript objects, arrays, complex graphs, etc. at all with the new API (and since the current proposal requires the web developer to make an explicit list anyway for the 2nd param to postMessage, it's no _more_ work to do the same for a new API). We could even try to graft MessagePorts into this API, but why? MessagePorts are unique in function compared to the other objects we are discussing for transfer of ownership (e.g., they facilitate further messaging and can't be re-posted once they are cloned once), and they already have well-defined behavior in MessageEvents and SharedWorkers.
I propose keeping postMessage exactly as it is. Let's eliminate the potential compatibility issues. Let's not re-write the existing specs (that feels like going backwards, not forwards). For transfer of ownership, let's bring this capability on-line through a new API, for the specific scenario where it makes sense (Web Workers) and not pollute the current postMessage concepts (object graph cloning and port-passing). Travis, I disagree with your statement that MessagePorts are unique in function compared to the other objects we are discussing for transfer of ownership. Cloning a MessagePort per http://dev.w3.org/html5/postmsg/#clone-a-port is *exactly* transferring its ownership to the other side. The reason that a MessagePort object can only be cloned once is that its ownership has been transferred. There is no restriction in the current specification preventing the cloned port from being transferred to a new owner via postMessage. This looks like a mis-reading on my part of step 2 of the postMessage algorithm: 2. If the method was called with a second argument ports and that argument isn't null, then, if any of the entries in ports are null, if any MessagePort object is listed in ports more than once, if any of the MessagePort objects listed in ports have already been cloned once before, or if any of the entries in ports are either the source port or the target port (if any), then throw an INVALID_STATE_ERR exception. Depending on how you look at it, this could be referring to the underlying port object, or to the current port instance. Ian is on this thread--I assume you now meant purely transfer of ownership? The current proposal on the table is 100% backward compatible in signature and semantics, and is an elegant generalization of the slightly over-specialized MessagePort mechanism into the desired transfer of ownership mechanism. I guess I disagree on the elegant assertion, but that's neither here nor there when it comes to spec-ing a behavior.
In any other API I would personally want exactly postMessage's capability of sending full JavaScript object graphs over the wire, while still being able to transfer ownership of some of the objects contained within, to be able to add some structure to the messages being sent. I would not want to artificially restrict the API to only be able to send certain types of objects. If this [supporting the object graph and transfer of ownership in-context] is an absolute MUST requirement, then the real difference between what we are proposing today versus a new API is that the new API is an all-in transfer-of-ownership (all applicable objects will be transferred rather than cloned). The downside to an all-in API is that as existing objects gain transfer-of-ownership behavior, the new API breaks in a non-backwards-compatible way (e.g., API
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Wed, Jun 8, 2011 at 2:24 PM, Kenneth Russell k...@google.com wrote: My understanding is that we have reached a proposal which respecifies the ports argument to postMessage as an array of objects to transfer, in such a way that we: Array or object? (by object I mean: {transfer: [arrayBuffer1], ports: [port]}) - Maintain 100% backward compatibility - Enhance the ability to pass MessagePorts, so that the object graph can refer to them as well - Allow more object types to participate in transfer of ownership in the future To the best of my knowledge there are no active points of disagreement. I think we are only waiting for general consensus from all interested parties that this is the desired step to take. If it is, I would be happy to draft proposed edits to the associated specs; there are several, and the edits may be somewhat involved. I'd also be happy to share the work with Ian or anyone else. I don't know the various processes for web specs, but the Web Messaging spec will definitely need to be updated if we decide to move in this direction. -Ken On Wed, Jun 8, 2011 at 4:30 AM, Arthur Barstow art.bars...@nokia.com wrote: Now that the responses on this thread have slowed, I would appreciate if the participants would please summarize where they think we are on this issue, e.g. the points of agreement and disagreement, how to move forward, etc. Also, coming back to the question in the subject (and I apologize if my premature subject change caused any confusion or problems), since we have an open CfC (ends June 9 [1]) to publish a Candidate Recommendation of Web Messaging, is the Messaging spec going to need to change to address the issues raised in this thread? -Art Barstow [1] http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0797.html
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
ok. On Wed, Jun 8, 2011 at 4:27 PM, Kenneth Russell k...@google.com wrote: On Wed, Jun 8, 2011 at 2:39 PM, David Levin le...@chromium.org wrote: On Wed, Jun 8, 2011 at 2:33 PM, Kenneth Russell k...@google.com wrote: I prefer continuing to use an array for several reasons: simpler syntax, better type checking at the Web IDL level, and fewer ECMAScript-specific semantics. An array makes it harder to do future modifications. Possibly, but it makes the design of this modification cleaner. Also with the array, how does "Enhance the ability to pass MessagePorts, so that the object graph can refer to them as well" work? Specifically, consider an array that contains [arrayBuffer1, port1]. Is port1 something in the object graph or a port to be transferred as before? In order to maintain backward compatibility, the clone of port1 would show up in the ports attribute of the MessageEvent on the other side. Additionally, during the structured clone of the object graph, any references to port1 would be updated to point to the clone of port1. (The latter is new behavior, and brings MessagePorts in line with the desired transfer-of-ownership semantics.) All other objects in the array (which, as Ian originally proposed, would implement some interface like Transferable for better Web IDL type checking) would simply indicate objects in the graph to be transferred rather than copied. Note: it would still be possible to evolve the API to transfer all objects of a certain type. We would just need to change the type of the ports or transfer array from Transferable[] to any[] and spec what happens when a constructor function is placed in the array. -Ken dave -Ken On Wed, Jun 8, 2011 at 2:29 PM, David Levin le...@chromium.org wrote: On Wed, Jun 8, 2011 at 2:24 PM, Kenneth Russell k...@google.com wrote: My understanding is that we have reached a proposal which respecifies the ports argument to postMessage as an array of objects to transfer, in such a way that we: Array or object?
(by object I mean: {transfer: [arrayBuffer1], ports: [port]}) - Maintain 100% backward compatibility - Enhance the ability to pass MessagePorts, so that the object graph can refer to them as well - Allow more object types to participate in transfer of ownership in the future To the best of my knowledge there are no active points of disagreement. I think we are only waiting for general consensus from all interested parties that this is the desired step to take. If it is, I would be happy to draft proposed edits to the associated specs; there are several, and the edits may be somewhat involved. I'd also be happy to share the work with Ian or anyone else. I don't know the various processes for web specs, but the Web Messaging spec will definitely need to be updated if we decide to move in this direction. -Ken On Wed, Jun 8, 2011 at 4:30 AM, Arthur Barstow art.bars...@nokia.com wrote: Now that the responses on this thread have slowed, I would appreciate if the participants would please summarize where they think we are on this issue, e.g. the points of agreement and disagreement, how to move forward, etc. Also, coming back to the question in the subject (and I apologize if my premature subject change caused any confusion or problems), since we have an open CfC (ends June 9 [1]) to publish a Candidate Recommendation of Web Messaging, is the Messaging spec going to need to change to address the issues raised in this thread? -Art Barstow [1] http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0797.html
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Thu, Jun 2, 2011 at 10:17 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Jun 2, 2011 at 4:41 PM, David Levin le...@chromium.org wrote: None of the objects which allow transferring of ownership has children so this doesn't appear to be a problem at this time. If it indeed does turn into a problem, it would seem like a problem no matter what solution is used, no? Not if all objects are transferred. Define all objects. Consider something like: a = { x: myArrayBuffer1, y: myArrayBuffer2 }; worker.postMessage(a, { transfer: true }); In this case the 'a' object is obviously not transferred. Or are you proposing that it'd be transferred too somehow? Well the algorithm could empty 'a'. As far as what happens underneath the covers, that is up to the implementation. (I suspect most javascript engines today wouldn't allow for actually transferring the memory cross thread.) Here's a simple use case: suppose I create an array of arrays (a 2d array) which contains ArrayBuffers. Now I want to transfer this as fast as possible using postMessage. What does my code look like for each of these apis? Your proposal: w.postMessage(my2darray, {transfer: true}); vs. w.postMessage(my2darray, Array.concat.apply(Array, my2darray)); I thought this would be: w.postMessage(my2darray, {transfer: Array.concat.apply(Array, my2darray)}); Now show me the code needed to send a message which contains one big buffer from you that you want to transfer, along with some data that you got from some other piece of code and which you do not want to modify and which may or may not contain ArrayBuffers. Fair enough. Way more complicated for the transfer: true case. :) The thing that seemed odd to me about this extra array is that it seemed to make the code more complicated and harder to understand. However, I understand that folks want to support involved scenarios. 
*Let's go with the transfer list.* (I suspect that something like transfer: true or transfer: all would still be possible in the future if it proved desirable since bools/strings won't be valid there.) dave
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
In summary, there is a desire for a mechanism to transfer objects (to allow for potentially better perf) across a MessagePort. The mechanism: - needs to have an intuitive feel for developers, - must preserve backwards compatibility, - should ideally allow the port to function the same regardless of whether the message was cloned or transferred, - should be easy to use. There are three ideas for how to accomplish this: 1. Mixing the list of objects to be transferred in with the ports, and using that list to determine which objects in the message should be transferred rather than cloned. This allows a lot of flexibility. It feels odd mixing in a list of objects with the ports when the two are unrelated. It also feels complicated having to add objects in two places (the message and this extra array). 2. Adding another parameter to postMessage: clone/transfer or true/false, etc. It is less flexible than 1. It is very simple and easy to use. It may not be as noticeable when reading the code that this postMessage does a transfer of items. 3. Adding another method transferMessage with the same parameters as postMessage. It is less flexible than 1. It is very simple and easy to use. It may be a pain to keep this in sync with postMessage. It should be very noticeable when reading code. What do you think is the best way to expose this to web developers? dave
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Thu, Jun 2, 2011 at 1:13 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/2/11 3:53 PM, David Levin wrote: The mechanism: * needs to have an intuitive feel for developers, * must preserve backwards compatibility, * should ideally allow the port to function the same regardless of whether the message was cloned or transferred. I'm not sure what you mean by that third item... the obvious meaning, which is that clone vs transfer is not black-box observable to the code calling postMessage makes no sense. The receiver of the message is what I meant to say. My edits lost some of the context. dave There are three ideas for how to accomplish this: 4. Having separate arguments (in some order) for the ports and the list of objects to transfer. -Boris
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Thu, Jun 2, 2011 at 1:27 PM, Glenn Maynard gl...@zewt.org wrote: On Thu, Jun 2, 2011 at 4:16 PM, Kenneth Russell k...@google.com wrote: On Thu, Jun 2, 2011 at 12:53 PM, David Levin le...@chromium.org wrote: The desire would be for this change to apply not just to the postMessage method on MessagePort and Worker but also to that on Window. I agree--the postMessage interfaces shouldn't drift apart more than necessary. Adding another argument to window.postMessage would be unfortunate, though: that brings it up to four, which is hitting the limit of a sane, rememberable API. Alternatively--and this has come up repeatedly of late--allow the final argument to be an object. If it's an object, it looks like this: port.postMessage({frameBuffer: frame}, {transfer: [frame], ports: [port]}); where transfer and ports are both optional. If it's an array, it behaves as it does now. This also allows preserving MessagePort error checking: you can still throw INVALID_STATE_ERR if something other than a MessagePort is included in ports. It feels like this array of objects given to transfer may complicate (and slow down) both the implementation of this as well as the developer's use of it. It also raises questions when I see it. (When I list an object there, does it imply that all children are also transferred, or do I have to list each of them explicitly as well?) Then I wonder what is the use case for this complexity. Why not something simpler like this? port.postMessage({frameBuffer: frame}, {transfer: true, ports: [port]}); where you can just indicate that you want the message transferred. dave
Re: What changes to Web Messaging spec are proposed? [Was: Re: Using ArrayBuffer as payload for binary data to/from Web Workers]
On Thu, Jun 2, 2011 at 4:24 PM, Jonas Sicking jo...@sicking.cc wrote: On Thu, Jun 2, 2011 at 2:01 PM, David Levin le...@chromium.org wrote: On Thu, Jun 2, 2011 at 1:27 PM, Glenn Maynard gl...@zewt.org wrote: port.postMessage({frameBuffer: frame}, {transfer: [frame], ports: [port]}); There are two properties of this approach that I like: 1. It means that objects whose ownership you'd like to transfer are not second class citizens and can live as part of the normal object graph that is posted, together with metadata that goes with it (or even as metadata for other things). 2. The receiving side doesn't need to worry about the difference; all it gets is the graph of objects that was sent to it. Yep, I totally agree with this. All of the current solutions being discussed satisfy both of these in fact. It also raises questions when I see it. (When I list an object there, does it imply that all children are also transferred, or do I have to list each of them explicitly as well?) None of the objects which allow transferring of ownership has children so this doesn't appear to be a problem at this time. If it indeed does turn into a problem, it would seem like a problem no matter what solution is used, no? Not if all objects are transferred. Then I wonder what is the use case for this complexity. Why not something simpler like this? port.postMessage({frameBuffer: frame}, {transfer: true, ports: [port]}); where you can just indicate that you want the message transferred. This means that you have to choose between transferring all arrays and transferring none of them. Yep, why add the complication of picking individual items to transfer over? (I'm wary of second system syndrome, which I've seen happen many times in various designs.) It also makes it much less explicit which objects end up being mutated. It is all of them. 
Here's a simple use case: suppose I create an array of arrays (a 2d array) which contains ArrayBuffers. Now I want to transfer this as fast as possible using postMessage. What does my code look like for each of these apis? dave
Re: [XHR2] ArrayBuffer integration
fwiw, specifying up front is what FileReader appears to do: http://dev.w3.org/2006/webapi/FileAPI/#dfn-filereader Of course, there are different methods in that case. dave On Tue, Sep 28, 2010 at 3:12 PM, Chris Rogers crog...@google.com wrote: Based on these constraints, it sounds like we either have to live with the fact that we'll keep both a binary copy and a text copy around as we're receiving XHR bytes. Or, we need a way to specify up-front that we're interested in loading as binary (before calling send()) and not handle binary in the default case. Chris On Tue, Sep 28, 2010 at 12:05 PM, James Robinson jam...@google.com wrote: On Tue, Sep 28, 2010 at 9:39 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 9/28/10 10:32 AM, Chris Marrin wrote: I'd hate the idea of another flag in XHR. Why not just keep the raw bits and then convert when responseText is called? The only disadvantage of this is when the author makes multiple calls to responseText and I would not think that is a very common use case. It's actually reasonably common; Gecko had some performance bugs filed on us until we started caching the responseText (before that we did exactly what you just suggested). Oh, and some sites poll responseText from progress events for reasons I can't fathom. A number of sites check .responseText.length on every progress event in order to monitor how much data has been received. This came up as a performance hotspot when I was profiling WebKit's XHR implementation as well. - James -Boris
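For reference, the up-front approach is what XHR2 ultimately specified: a responseType set before send() tells the UA which single representation to keep. A browser-only sketch (the URL is illustrative):

```javascript
// XHR2's resolution of the double-copy problem: declare binary interest
// up front, so the UA never needs to retain a text copy of the response.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/data.bin');     // illustrative URL
xhr.responseType = 'arraybuffer'; // must be set before send()
xhr.onload = () => {
  const bytes = new Uint8Array(xhr.response);
  // ... use bytes; accessing xhr.responseText in this mode throws,
  // since only the binary representation was retained.
};
xhr.send();
```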
Re: Lifetime of Blob URL
On Tue, Jul 13, 2010 at 12:48 AM, Anne van Kesteren ann...@opera.com wrote: On Mon, 12 Jul 2010 23:30:54 +0200, Darin Fisher da...@chromium.org wrote: Right, it seems reasonable to say that ownership of the resource referenced by a Blob can be shared by an XHR, Image, or navigation once it is told to start loading the resource. Note that unless we make changes (separate thread please) to XMLHttpRequest this will not work. It is a cross-origin URL which cannot work with CORS. It seems the origin for blob.url is defined to make it not cross-origin: http://dev.w3.org/2006/webapi/FileAPI/#originOfBlob and here http://dev.w3.org/2006/webapi/FileAPI/#url it mentions using the URL for XHR again. dave -- Anne van Kesteren http://annevankesteren.nl/
Re: Lifetime of Blob URL
On Tue, Jul 13, 2010 at 6:50 AM, Adrian Bateman adria...@microsoft.com wrote: On Monday, July 12, 2010 2:31 PM, Darin Fisher wrote: On Mon, Jul 12, 2010 at 9:59 AM, David Levin le...@google.com wrote: On Mon, Jul 12, 2010 at 9:54 AM, Adrian Bateman adria...@microsoft.com wrote: I read point #5 to be only about surviving the start of a navigation. As a web developer, how can I tell when a load has started for an img? Isn't this similarly indeterminate? As soon as img.src is set. The spec could mention that the resource pointed to by the blob URL should be loaded successfully as long as the blob URL is valid at the time when the resource is starting to load. Should apply to XHR (after send is called), img, and navigation. Right, it seems reasonable to say that ownership of the resource referenced by a Blob can be shared by an XHR, Image, or navigation once it is told to start loading the resource. -Darin It sounds like you are saying the following is guaranteed to work: img.src = blob.url; window.revokeBlobUrl(blob); return; If that is the case then the user agent is already making the guarantees I was talking about, and so I still think having the lifetime mapped to the blob, not the document, is better. This means that in the general case I don't have to worry about lifetime management. Mapping lifetime to the blob exposes when the blob gets garbage collected, which is a very indeterminate point in time (and is very browser version dependent -- it will set you up for compatibility issues when you update your javascript engine -- and there are also the cross browser issues of course). Specifically, a blob could go out of scope (to use your earlier phrase) and then one could do img.src = blobUrl (the url that was exposed from the blob but not using the blob object). This will work sometimes but not others (depending on whether garbage collection collected the blob). 
This is much more indeterminate than the current spec, which maps the blob.url lifetime to the lifetime of the document where the blob was created. When thinking about blob.url lifetime, there are several problems to solve: 1. An AJAX style web application may never navigate the document, and this means that every blob for which a URL is created must be kept around in some form for the lifetime of the application. 2. A blob passed between documents would have its blob.url stop working as soon as the original document got closed. 3. Having a model that makes the url have a determinate lifetime which doesn't expose the web developer to indeterminate behavior issues like those we have discussed above. The current spec has issues #1 and #2. Binding the lifetime of blob.url to the blob has issue #3. dave
Re: Lifetime of Blob URL
On Mon, Jul 12, 2010 at 5:47 AM, Adrian Bateman adria...@microsoft.com wrote: Making the blob URL's lifetime identical to the lifetime of the blob itself would expose when garbage collection takes place and in general could lead to easy-to-make mistakes in which the developer had something that worked mostly but not always -- your situation below is just one of them. Check out Jian Li's alternate proposal (see his response to Re: [File API] Recent Updates To Specification + Co-Editor on July 1, I think) that addresses this in a way that addresses your concerns and the gc issue as well. The problem with an explicit revoke call is that people need to know to call it, need to actually call it, and need to know when it is appropriate to call. Many of the same timing issues that cause potential problems with GC also make it hard for web developers to know when to call revoke. When GC occurs is indeterminate and would vary greatly between browsers. Developing features which expose the gc behavior would lead developers into accidentally relying on browser specific behaviors (which may even break for the same browser during upgrades). As I read Jian's proposal, there is a create call (blob.url would go away), so there would clearly be a revoke (or destroy) call. With respect to timing issues, the behavior of revoke with respect to load is clearly defined in his proposal, which results in very deterministic behavior. dave
Re: Lifetime of Blob URL
On Mon, Jul 12, 2010 at 8:39 AM, Adrian Bateman adria...@microsoft.com wrote: On Monday, July 12, 2010 8:24 AM, David Levin wrote: On Mon, Jul 12, 2010 at 5:47 AM, Adrian Bateman adria...@microsoft.com wrote: Making the blob URL's lifetime identical to the lifetime of the blob itself would expose when garbage collection takes place and in general could lead to easy-to-make mistakes in which the developer had something that worked mostly but not always -- your situation below is just one of them. Check out Jian Li's alternate proposal (see his response to Re: [File API] Recent Updates To Specification + Co-Editor on July 1, I think) that addresses this in a way that addresses your concerns and the gc issue as well. The problem with an explicit revoke call is that people need to know to call it, need to actually call it, and need to know when it is appropriate to call. Many of the same timing issues that cause potential problems with GC also make it hard for web developers to know when to call revoke. When GC occurs is indeterminate and would vary greatly between browsers. Developing features which expose the gc behavior would lead developers into accidentally relying on browser specific behaviors (which may even break for the same browser during upgrades). The behaviour would have to be explicitly specified and not left to depend on indeterminate browser implementations. Yes. Unfortunately, another way of saying that the url lives as long as the Blob lives is that the url lives until the Blob is garbage collected. This exposes a very indeterminate behavior. As I read Jian's proposal, there is a create call (blob.url would go away), so there would clearly be a revoke (or destroy) call. With respect to timing issues, the behavior of revoke with respect to load is clearly defined in his proposal, which results in very deterministic behavior. My apologies. I think I missed this part - please can you provide a link to the full proposal? 
At what point after I assign the src of an img element can I call revoke? Can I do it immediately after the assignment or do I have to have an onload and onerror handler for every element that uses it? With XHR do I have to wait for readyState 4 or can I call revoke earlier in the process? http://lists.w3.org/Archives/Public/public-device-apis/2010Jul/.html See point #5: basically, once a load has started for a URL, that load should succeed and revoke may be called. Is there reference counting so that if I call create twice I have to call revoke twice? I don't think this has been specified, but a simple proposal would be that each create call would result in a unique url. Thanks, Adrian.
Re: Lifetime of Blob URL
On Mon, Jul 12, 2010 at 9:54 AM, Adrian Bateman adria...@microsoft.com wrote: On Monday, July 12, 2010 9:32 AM, David Levin wrote: On Mon, Jul 12, 2010 at 8:39 AM, Adrian Bateman adria...@microsoft.com wrote: The behaviour would have to be explicitly specified and not left to depend on indeterminate browser implementations. Yes. Unfortunately, another way of saying that the url lives as long as the Blob lives is that the url lives until the Blob is garbage collected. This exposes a very indeterminate behavior. Exactly. So what I'm saying is the spec needs to say more than just that. It needs to make further guarantees. http://lists.w3.org/Archives/Public/public-device-apis/2010Jul/.html See point #5: basically, once a load has started for a URL, that load should succeed and revoke may be called. I read point #5 to be only about surviving the start of a navigation. As a web developer, how can I tell when a load has started for an img? Isn't this similarly indeterminate? As soon as img.src is set. The spec could mention that the resource pointed to by the blob URL should be loaded successfully as long as the blob URL is valid at the time when the resource is starting to load. Should apply to XHR (after send is called), img, and navigation. Regards, Adrian.
Re: Lifetime of Blob URL
On Sun, Jul 11, 2010 at 10:05 PM, Adrian Bateman adria...@microsoft.com wrote: On Monday, June 28, 2010 2:47 PM, Arun Ranganathan wrote: On 6/23/10 9:50 AM, Jian Li wrote: I think encoding the security origin in the URL allows the UAs to do the security origin check in place, without routing through another authority to get the origin information, which might cause the check to take a long time to finish. If we worry about showing the double schemes in the URL, we can transform the origin encoded in the URL by using base64 or another escaping algorithm. Jian: the current URL scheme: http://dev.w3.org/2006/webapi/FileAPI/#url allows you to do that, without obliging other UAs to do that. Some UAs may elect to use smart caching to accomplish the same kinds of things, without tagging the URL with origin information. Others may see benefit in origin-tagging. I've reconsidered trying to architect a scheme that allows all use-case scenarios for blob: URIs. Hi Arun, I think the updated URL section reflects a good compromise. We might want to call out explicitly that the opaque string should not include recognisable metadata to avoid scripts from trying to parse the URL. User Agents that wish to include data such as origin should do so by encoding it in an opaque manner. Specifying the format of the contents of the URL is simply overspecification. Saying "User Agents that wish to include data such as origin should do so by encoding it in an opaque manner" is ambiguous. As soon as anyone publishes the format (which would be trivial to do given chromium's open source), the format would no longer be opaque. I have one other concern about the lifetime of the blob URL [1]. The spec currently says that blob: URLs should have the lifetime of the Document. I think this is too long. An AJAX style web application may never navigate the document and this means that every blob for which a URL is created must be kept around in some form for the lifetime of the application. 
In our discussions on this topic at Microsoft we've assumed that the lifetime for a blob URL will be the same as the lifetime of the blob itself. Making the blob URL's lifetime identical to the lifetime of the blob itself would expose when garbage collection takes place and in general could lead to easy-to-make mistakes in which the developer had something that worked mostly but not always -- your situation below is just one of them. Check out Jian Li's alternate proposal (see his response to Re: [File API] Recent Updates To Specification + Co-Editor on July 1, I think) that addresses this in a way that addresses your concerns and the gc issue as well. This does create something of a race condition. If I have a blob representing an image where I set the src of an img element and then let the blob go out of scope, might it be collected before the img loads the resource? We think we'll have to include some mechanism to ensure that the blob and URL don't get collected before pending network requests have been made. This does impose an additional burden on the web developer: if they later want to copy the source URL from one img to another then this will only work if they also kept the blob in scope somewhere. What do you think? Cheers, Adrian. [1] http://dev.w3.org/2006/webapi/FileAPI/#lifeTime
Re: Updates to File API
On Tue, Jun 22, 2010 at 8:56 PM, Adrian Bateman adria...@microsoft.com wrote: On Tuesday, June 22, 2010 8:40 PM, David Levin wrote: I agree with you Adrian that it makes sense to let the user agent figure out the optimal way of implementing origin and other checks. A logical step from that premise is that the choice/format of the namespace specific string should be left up to the UA, as embedding information in there may be the optimal way for some UAs of implementing said checks, and it sounds like other UAs may not want to do that. Robin outlined why that would be a problem [1]. My original feeling was that this should be left up to UAs, as you say, but I've been convinced that doing so is a race to the most complex URL scheme. Robin discussed something along those lines in http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0743.html. At the same time, there are implementors who gave specific reasons why encoding certain information (scheme, host, port) in the namespace specific string (NSS) is useful to various UAs. No other information has been requested, so theories about adding more information seem premature. If the format must be specified, it seems reasonable to take both the theoretical and practical issues into account. Encoding the security origin in the NSS isn't complex. If a proposal is needed about how that can be done in a simple way, I'm willing to supply one. Also, UAs that don't care about that information are free to ignore it and don't need to parse it. dave
Re: Updates to File API
On Tue, Jun 22, 2010 at 7:58 PM, Adrian Bateman adria...@microsoft.com wrote: On Tuesday, June 22, 2010 3:37 PM, Arun Ranganathan wrote: On 6/22/10 8:44 AM, Adrian Bateman wrote: I think it makes more sense for the URL to be opaque and let user agents figure out the optimal way of implementing origin and other checks. I think it may be important to define: * Format. I agree that this could be something simple, but it should be defined. By opaque, do you mean undefined? * Behavior with GET. For this, I propose using a subset of HTTP/1.1 responses. I think we agree. I actually meant well-defined but opaque to JavaScript consumers. In other words, script in a web page can't deduce any meaningful information from the string. If we're aiming for that property then it makes sense that the entire scheme be defined (something like filedata:----000). I agree with you Adrian that it makes sense to let the user agent figure out the optimal way of implementing origin and other checks. A logical step from that premise is that the choice/format of the namespace specific string should be left up to the UA, as embedding information in there may be the optimal way for some UAs of implementing said checks, and it sounds like other UAs may not want to do that. dave
Re: FormData with sliced Blob
What about using a filename that is unique with respect to files sent in that FormData (but it is up to the UA)? For example, a UA may choose to do Blob1, Blob2, etc. For the content-type, application/octet-stream seems most fitting. Here's the result applied to your example: --SomeBoundary... Content-Disposition: form-data; name=file; filename=Blob1 Content-Type: application/octet-stream dave On Fri, Mar 19, 2010 at 6:25 PM, Jian Li jia...@google.com wrote: Hi, I have questions regarding sending FormData with sliced files. When we send a FormData with a regular file, we send out the multipart data for this file, like the following: --SomeBoundary... Content-Disposition: form-data; name=file; filename=test.js Content-Type: application/x-javascript ... However, when it is sliced into a blob, it does not have the file name and type information any more. I am wondering what we should send. Should we just not provide the filename and Content-Type information? Thanks, Jian
Re: Notifications
On Fri, Feb 12, 2010 at 5:06 AM, Henri Sivonen hsivo...@iki.fi wrote: FWIW, Microsoft explicitly says notifications must be ignorable and don't persist. Notifications aren't modal and don't require user interaction, so users can freely ignore them. In Windows Vista® and later, notifications are displayed for a fixed duration of 9 seconds. http://msdn.microsoft.com/en-us/library/aa511497.aspx There are several references to Windows notifications, and how they work as a guideline (ignorable, 9 seconds, etc.) It is worth noting that these have been around for some time and applications (im clients, some rss readers, etc.) that care about notifications *don't* use them. So a very simple use case: an email web app wants to alert you, outside the frame, that you have new mail, and allow the user to click on that alert and be taken to the inbox page. This does not work on NotifyOSD, because they explicitly don't support that part of the D-bus notifications spec. However, Growl would support this. If acknowledgement support is super-important to Web apps, surely it should be to native apps, too. The use case seems to be about allowing the user to take quick action on a notification. Growl, many mobile notification systems (and the aforementioned Windows notification system) allow the user to click notifications and be taken to a more in-depth view of the information. It sounds like NotifyOSD is an outlier in this aspect. On Feb 11, 2010, at 00:10, Drew Wilson wrote: it seems like the utility of being able to put markup such as bold text, or graphics, or links in a notification should be self-evident, It's not self-evident. If it were, surely native apps would be bypassing NotifyOSD and Growl to get more bolded and linkified notifications. 
From reading the spec, it looks like Notify OSD supports two kinds of markup already, so it allows for bold, italic, links, underlines, images, and fonts (http://www.galago-project.org/specs/notification/0.9/x161.html, http://library.gnome.org/devel/pango/unstable/PangoMarkupFormat.html). So it makes headway in allowing applications a richer display, but of course, this is arbitrarily limited and would cut out several scenarios that Drew and John have already mentioned. Lastly, it sounds like any UA implementing the API being proposed could use these mechanisms (NotifyOSD, etc.) and just not provide as rich an experience (as one would get from an html version). dave
Re: Notifications
On Fri, Feb 5, 2010 at 6:52 AM, Anne van Kesteren ann...@opera.com wrote: On Thu, 04 Feb 2010 00:36:26 +0100, John Gregg john...@google.com wrote: In the case the first notification from an application is an important one, that app should be able to request permission for out-of-tab notifications beforehand; Aren't notifications by nature somewhat non-important? Yes, but only if they aren't done well. I have notifications to remind me to go to meetings, or when someone is trying to reach me in irc. These are very important to me. OTOH, I may also have a notification about new email that is addressed directly to me. It still may need my attention but is less important to me than the prior two. I don't want notifications about things that are unimportant to me.
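For reference, the click-a-new-mail-alert use case discussed in this thread maps directly onto the Notification API that later standardized. A browser-only sketch (the notification text and target URL are illustrative):

```javascript
// Ask once for permission, then surface a clickable notification that
// takes the user back to the app — the "quick action" use case above.
Notification.requestPermission().then((permission) => {
  if (permission !== 'granted') return; // the user may freely decline
  const n = new Notification('New mail', { body: 'You have 1 new message' });
  n.onclick = () => {
    window.focus();
    window.location.href = '/inbox'; // illustrative target page
  };
});
```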
Re: [XMLHttpRequest] withCredentials=false and returned cookies
It appears that both Safari and Firefox ignore returned cookies from a cross origin xhr when the credentials flag is set to false. This behavior seems very reasonable. Should the XMLHttpRequest level 2 spec indicate that this is the expected behavior? Dave On Thu, Jul 30, 2009 at 11:46 AM, David Levin le...@chromium.org wrote: In http://www.w3.org/TR/XMLHttpRequest2/#credentials, it says: The credentials flag ...indicates whether a non same origin request includes cookie and HTTP authentication data...during the send() algorithm. If withCredentials is false, it seems like the cookies returned from the request shouldn't be stored either, but I couldn't find mention of this. (Why should the cookies returned from this be stored and possibly interfere with same origin requests, especially if the cookies aren't being sent?) Is this true? thanks, dave
[XMLHttpRequest] withCredentials=false and returned cookies
In http://www.w3.org/TR/XMLHttpRequest2/#credentials, it says: The credentials flag ...indicates whether a non same origin request includes cookie and HTTP authentication data...during the send() algorithm. If withCredentials is false, it seems like the cookies returned from the request shouldn't be stored either, but I couldn't find mention of this. (Why should the cookies returned from this be stored and possibly interfere with same origin requests, especially if the cookies aren't being sent?) Is this true? thanks, dave
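The behavior being asked about is what was eventually specified: with the credentials flag false, cookies are neither sent with the cross-origin request nor stored from its response. A browser-only sketch (the URL is illustrative):

```javascript
// Anonymous cross-origin request: no cookies go out, and any Set-Cookie
// header on the response is ignored rather than stored.
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://other.example/api'); // illustrative cross-origin URL
xhr.withCredentials = false; // the default; shown here for emphasis
xhr.send();
```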
[xhr2] Redirect during send question
Regarding the http redirect security violation steps, the spec ( http://dev.w3.org/2006/webapi/XMLHttpRequest/) says "If async is set to false raise a NETWORK_ERR exception and terminate the overall algorithm." I tried out IE7, Firefox 3, and WebKit nightlies and none of them seem to throw an exception in this case. Won't implementing the spec introduce a significant incompatibility? Or did I miss something? Thanks, Dave PS I'm asking because I'm doing a change in WebKit in this area.
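The spec text in question amounts to this pattern for synchronous requests, where a redirect that violates the security checks would surface as an exception from send() (the URL is illustrative); the compatibility worry is that no shipping engine threw here at the time:

```javascript
// Synchronous XHR: per the draft, a redirect that violates the security
// checks raises NETWORK_ERR from send() rather than firing an error event.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/redirects-cross-origin', false); // async = false
try {
  xhr.send();
} catch (e) {
  // Per the draft: a NETWORK_ERR DOMException lands here. Per the
  // engines tested above (IE7, Firefox 3, WebKit nightlies), it didn't.
}
```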