Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Anne van Kesteren
On Wed, 07 Mar 2012 02:12:39 +0100, Feras Moussa fer...@microsoft.com  
wrote:

xhr.send(blob);
blob.close(); // method name TBD


In our implementation, this case would fail. We think this is reasonable
because the need for having a close() method is to allow deterministic
release of the resource.


Reasonable or not, "would fail" is not something we can put in a standard.
What happens exactly? What if a connection is established and data is  
being transmitted already?



--
Anne van Kesteren
http://annevankesteren.nl/



[CORS] Review of CORS and WebAppSec prior to LCWD

2012-03-07 Thread Arthur Barstow

[ + public-webappsec ]

Below is a comment about CORS.

Given the original CfC for LCWD was started months ago, perhaps this 
comment should be considered as an LC comment.


Re whether to use p-webapps or p-webappsec, I don't recall any agreement 
on that question. I do note the latest ED says to use webappsec but the 
latest version in /TR/ says to use webapps. Since WebAppSec has taken 
the lead on this spec and the ED now uses webappsec, that list does seem 
like the better choice.


-AB

On 3/6/12 1:48 PM, ext Adam Barth wrote:

This feedback is probably better addressed to public-webappsec, which
is the main forum for discussing these security-related specs.

Adam


On Tue, Mar 6, 2012 at 6:06 AM, Cameron Jones cmhjo...@gmail.com wrote:

Hello wg,

I've been contemplating various aspects of web app security in review
of CORS, UMP, CSP and resource protection and would like to bring some
feedback to the group as a set of concerns over the combined proposed
solution.

While the pre-CfC calls for positive responses, I can only provide the
review in the light in which it is seen. I hereby highlight concerns,
some of which have been previously raised, and some of which have led
to further specification resulting in a combined solution which, I
believe, has scope for consolidation into a more concise set of
recommendations for implementation.

This review and feedback is predominantly related to CORS as a
solution for resource sharing. To step back from this initially and
scope the problem which it proposes to address, this specification is
a solution to expanding XHR access control across origins due to the
implied requirement that resources must be proactively protected from
CSRF in combination with the exposure to 'ambient authority'.

This premise requires examination because it came into existence
through proprietary development and release, followed by wider
acceptance and adoption over a number of years. To look at the history
of XHR in this regard, the requirements for its mode of operation have
been driven by proprietary specification, and while there are no
concerns over independent development and product offerings, the
expansion toward global recommendation and deployment brings greater
scope and requirements for integration within a more permanent and
incumbent environment.

The principal concerns over XHR, and which are in part being addressed
within UMP, are in relation to the incurred requirement of CORS due to
the imposed security policy. This security policy is in response to
the otherwise unspecified functional requirement of 'ambient
authority' within HTTP agents. Cookies and HTTP authentication, which
comprise the 'authority', are shared within a single HTTP agent which
supports multiple HTTP communication interaction through distinct user
interface. This multi-faceted interface represents the 'ambient'
environment. The question this raises with regard to the premise of
XHR and CSRF protection is therefore: why does the HTTP agent
introduce the problem of 'ambient authority' which must subsequently
be solved?

To examine this further we can look at the consequent definition of
the 'origin' with respect to HTTP requests. This definition is
introduced to resolve the problem that HTTP 'authority' was only
intended to be granted to requests originating from same-origin
sources. This extension to subsequent requests may be withdrawn back
to the originating request and reframed by asking: why is HTTP
'authority' shared between cross-origin browsing contexts?

To highlight this with a specific example, a user initiates a new
browsing session with http://www.example.com whereby cookies are set
and the user logs in using HTTP authentication. In a separate
workflow, the user instructs the UA to open a new UI and initiate a
new browsing session with http://www.test.com which happens to include
resources (images, scripts etc) from the www.example.com domain. Where
in these two separate user tasks has either the server of
www.example.com, or the user, or even www.test.com, declared that they
want to interleave - or share - the independent browsing contexts
consisting of HTTP authority? This is the primary leak of security and
privacy, which is the root of all further breaches and vulnerabilities.
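
(For concreteness, a minimal sketch of the scenario above, using the same
hypothetical origins; the point is which requests carry the 'ambient
authority' by default:)

// A page served from http://www.test.com embeds a resource from
// www.example.com. The UA silently attaches the www.example.com cookies
// and HTTP authentication credentials - the 'ambient authority' - to the
// request, although neither site nor the user asked the two browsing
// contexts to share that authority.
var img = new Image();
img.src = 'http://www.example.com/profile/avatar.png';

// A CORS-enabled XHR, by contrast, only carries that authority when the
// requesting page explicitly opts in:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.example.com/api/data');
xhr.withCredentials = true; // opt in to sending cookies/auth cross-origin
xhr.send();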

When 'ambient authority' is removed from the definition of
cross-origin requests, as in UMP, the potential for CSRF is
eliminated, with the exception of public web services which are
postulated to be at risk due to an assumption that the public web
service was intended only for same-origin operation. This further
premise is flawed due to its incompatibility with the architecture of
the WWW and HTTP as a globally deployed and publicly accessible
system.

To summarize this review as a recommendation, it is viewed that the
problem of CSRF is better addressed through restricting the imposed
sharing of 'ambient authority', which amounts to a breach of trust
between the server and the UA, and between the UA and the user.


Re: [CORS] Review of CORS and WebAppSec prior to LCWD

2012-03-07 Thread Cameron Jones
Thanks for the attention and reposting to public-webappsec.

I agree the relevant scope is this list, but as this review/comments
was prompted by explicitly trying to keep within the process milestones
of CORS, I thought to keep the response on the list where notification
was sent. Glad it has found the right audience.

thanks,
Cameron Jones

On Wed, Mar 7, 2012 at 1:39 PM, Arthur Barstow art.bars...@nokia.com wrote:
 [ + public-webappsec ]

 Below is a comment about CORS.

 Given the original CfC for LCWD was started months ago, perhaps this comment
 should be considered as an LC comment.

 Re whether to use p-webapps or p-webappsec, I don't recall any agreement on
 that question. I do note the latest ED says to use webappsec but the latest
 version in /TR/ says to use webapps. Since WebAppSec has taken the lead on
 this spec and the ED now uses webappsec, that list does seem like the better
 choice.

 -AB


 On 3/6/12 1:48 PM, ext Adam Barth wrote:

 This feedback is probably better addressed to public-webappsec, which
 is the main forum for discussing these security-related specs.

 Adam


 On Tue, Mar 6, 2012 at 6:06 AM, Cameron Jones cmhjo...@gmail.com wrote:

 Hello wg,

 I've been contemplating various aspects of web app security in review
 of CORS, UMP, CSP and resource protection and would like to bring some
 feedback to the group as a set of concerns over the combined proposed
 solution.

 While the pre-CfC calls for positive responses, I can only provide the
 review in the light in which it is seen. I hereby highlight concerns,
 some of which have been previously raised, and some of which have led
 to further specification resulting in a combined solution which, I
 believe, has scope for consolidation into a more concise set of
 recommendations for implementation.

 This review and feedback is predominantly related to CORS as a
 solution for resource sharing. To step back from this initially and
 scope the problem which it proposes to address, this specification is
 a solution to expanding XHR access control across origins due to the
 implied requirement that resources must be proactively protected from
 CSRF in combination with the exposure to 'ambient authority'.

 This premise requires examination because it came into existence
 through proprietary development and release, followed by wider
 acceptance and adoption over a number of years. To look at the history
 of XHR in this regard, the requirements for its mode of operation have
 been driven by proprietary specification, and while there are no
 concerns over independent development and product offerings, the
 expansion toward global recommendation and deployment brings greater
 scope and requirements for integration within a more permanent and
 incumbent environment.

 The principal concerns over XHR, and which are in part being addressed
 within UMP, are in relation to the incurred requirement of CORS due to
 the imposed security policy. This security policy is in response to
 the otherwise unspecified functional requirement of 'ambient
 authority' within HTTP agents. Cookies and HTTP authentication, which
 comprise the 'authority', are shared within a single HTTP agent which
 supports multiple HTTP communication interaction through distinct user
 interface. This multi-faceted interface represents the 'ambient'
 environment. The question this raises with regard to the premise of
 XHR and CSRF protection is therefore: why does the HTTP agent
 introduce the problem of 'ambient authority' which must subsequently
 be solved?

 To examine this further we can look at the consequent definition of
 the 'origin' with respect to HTTP requests. This definition is
 introduced to resolve the problem that HTTP 'authority' was only
 intended to be granted to requests originating from same-origin
 sources. This extension to subsequent requests may be withdrawn back
 to the originating request and reframed by asking: why is HTTP
 'authority' shared between cross-origin browsing contexts?

 To highlight this with a specific example, a user initiates a new
 browsing session with http://www.example.com whereby cookies are set
 and the user logs in using HTTP authentication. In a separate
 workflow, the user instructs the UA to open a new UI and initiate a
 new browsing session with http://www.test.com which happens to include
 resources (images, scripts etc) from the www.example.com domain. Where
 in these two separate user tasks has either the server of
 www.example.com, or the user, or even www.test.com, declared that they
 want to interleave - or share - the independent browsing contexts
 consisting of HTTP authority? This is the primary leak of security and
 privacy, which is the root of all further breaches and vulnerabilities.

 When 'ambient authority' is removed from the definition of
 cross-origin requests, as in UMP, the potential for CSRF is
 eliminated, with the exception of public web services which are
 postulated to be at risk due to an assumption 

Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Kenneth Russell
On Tue, Mar 6, 2012 at 6:29 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Mar 6, 2012 at 4:24 PM, Michael Nordman micha...@google.com wrote:

  You can always call close() yourself, but Blob.close() should use the
  neuter mechanism already there, not make up a new one.

 Blobs aren't transferable; there is no existing mechanism that applies
 to them. Adding a blob.close() method is independent of making blobs
 transferable; the former does not require the latter.


 There is an existing mechanism for closing objects.  It's called
 neutering.  Blob.close should use the same terminology, whether or not the
 object is a Transferable.

 On Tue, Mar 6, 2012 at 4:25 PM, Kenneth Russell k...@google.com wrote:

 I would be hesitant to impose a close() method on all future
 Transferable types.


 Why?  All Transferable types must define how to neuter objects; all close()
 does is trigger it.

 I don't think adding one to ArrayBuffer would be a
 bad idea but I think that ideally it wouldn't be necessary. On memory
 constrained devices, it would still be more efficient to re-use large
 ArrayBuffers rather than close them and allocate new ones.


 That's often not possible, when the ArrayBuffer is returned to you from an
 API (eg. XHR2).

 This sounds like a good idea. As you pointed out offline, a key
 difference between Blobs and ArrayBuffers is that Blobs are always
 immutable. It isn't necessary to define Transferable semantics for
 Blobs in order to post them efficiently, but it was essential for
 ArrayBuffers.


 No new semantics need to be defined; the semantics of Transferable are
 defined by postMessage and are the same for all transferable objects.
 That's already done.  The only thing that needs to be defined is how to
 neuter an object, which is what Blob.close() has to define anyway.

 Using Transferable for Blob will allow Blobs, ArrayBuffers, and any future
 large, structured clonable objects to all be released with the same
 mechanisms: either pass them in the transfer argument to a postMessage
 call, or use the consistent, identical close() method inherited from
 Transferable.  This allows developers to think of the transfer list as a
 list of objects which won't be needed after the postMessage call.  It
 doesn't matter that the underlying optimizations are different; the visible
 side-effects are identical (the object can no longer be accessed).
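
 (To make the two mechanisms concrete, a minimal sketch; it assumes the
 proposed Blob.close() method and, hypothetically, Blob implementing
 Transferable:)

 var worker = new Worker('task.js');
 var buffer = new ArrayBuffer(8 * 1024 * 1024);
 var blob = new Blob(['some large payload']);

 // 1. Release by transfer: objects listed in the transfer argument are
 //    neutered in this context as soon as postMessage returns.
 worker.postMessage(buffer, [buffer]);

 // 2. Release explicitly: the close() method discussed here would trigger
 //    the same neutering without any postMessage being involved.
 blob.close();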

Closing an object, and neutering it because it was transferred to a
different owner, are different concepts. It's already been
demonstrated that Blobs, being read-only, do not need to be
transferred in order to send them efficiently from one owner to
another. It's also been demonstrated that Blobs can be resource
intensive and that an explicit closing mechanism is needed.

I believe that we should fix the immediate problem and add a close()
method to Blob. I'm not in favor of adding a similar method to
ArrayBuffer at this time and therefore not to Transferable. There is a
high-level goal to keep the typed array specification as minimal as
possible, and having Transferable support leak in to the public
methods of the interfaces contradicts that goal.

-Ken



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Eric U
On Wed, Mar 7, 2012 at 11:38 AM, Kenneth Russell k...@google.com wrote:
 On Tue, Mar 6, 2012 at 6:29 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Mar 6, 2012 at 4:24 PM, Michael Nordman micha...@google.com wrote:

  You can always call close() yourself, but Blob.close() should use the
  neuter mechanism already there, not make up a new one.

 Blobs aren't transferable; there is no existing mechanism that applies
 to them. Adding a blob.close() method is independent of making blobs
 transferable; the former does not require the latter.


 There is an existing mechanism for closing objects.  It's called
 neutering.  Blob.close should use the same terminology, whether or not the
 object is a Transferable.

 On Tue, Mar 6, 2012 at 4:25 PM, Kenneth Russell k...@google.com wrote:

 I would be hesitant to impose a close() method on all future
 Transferable types.


 Why?  All Transferable types must define how to neuter objects; all close()
 does is trigger it.

 I don't think adding one to ArrayBuffer would be a
 bad idea but I think that ideally it wouldn't be necessary. On memory
 constrained devices, it would still be more efficient to re-use large
 ArrayBuffers rather than close them and allocate new ones.


 That's often not possible, when the ArrayBuffer is returned to you from an
 API (eg. XHR2).

 This sounds like a good idea. As you pointed out offline, a key
 difference between Blobs and ArrayBuffers is that Blobs are always
 immutable. It isn't necessary to define Transferable semantics for
 Blobs in order to post them efficiently, but it was essential for
 ArrayBuffers.


 No new semantics need to be defined; the semantics of Transferable are
 defined by postMessage and are the same for all transferable objects.
 That's already done.  The only thing that needs to be defined is how to
 neuter an object, which is what Blob.close() has to define anyway.

 Using Transferable for Blob will allow Blobs, ArrayBuffers, and any future
 large, structured clonable objects to all be released with the same
 mechanisms: either pass them in the transfer argument to a postMessage
 call, or use the consistent, identical close() method inherited from
 Transferable.  This allows developers to think of the transfer list as a
 list of objects which won't be needed after the postMessage call.  It
 doesn't matter that the underlying optimizations are different; the visible
 side-effects are identical (the object can no longer be accessed).

 Closing an object, and neutering it because it was transferred to a
 different owner, are different concepts. It's already been
 demonstrated that Blobs, being read-only, do not need to be
 transferred in order to send them efficiently from one owner to
 another. It's also been demonstrated that Blobs can be resource
 intensive and that an explicit closing mechanism is needed.

 I believe that we should fix the immediate problem and add a close()
 method to Blob. I'm not in favor of adding a similar method to
 ArrayBuffer at this time and therefore not to Transferable. There is a
 high-level goal to keep the typed array specification as minimal as
 possible, and having Transferable support leak in to the public
 methods of the interfaces contradicts that goal.

This makes sense to me.  Blob needs close independent of whether it's
in Transferable, and Blob has no need to be Transferable, so let's not
mix the two.



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On Mar 7, 2012, at 11:38 AM, Kenneth Russell k...@google.com wrote:

 On Tue, Mar 6, 2012 at 6:29 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Mar 6, 2012 at 4:24 PM, Michael Nordman micha...@google.com wrote:
 
 You can always call close() yourself, but Blob.close() should use the
 neuter mechanism already there, not make up a new one.
 
 Blobs aren't transferable; there is no existing mechanism that applies
 to them. Adding a blob.close() method is independent of making blobs
 transferable; the former does not require the latter.
 
 
 There is an existing mechanism for closing objects.  It's called
 neutering.  Blob.close should use the same terminology, whether or not the
 object is a Transferable.
 
 On Tue, Mar 6, 2012 at 4:25 PM, Kenneth Russell k...@google.com wrote:
 
 I would be hesitant to impose a close() method on all future
 Transferable types.
 
 
 Why?  All Transferable types must define how to neuter objects; all close()
 does is trigger it.
 
 I don't think adding one to ArrayBuffer would be a
 bad idea but I think that ideally it wouldn't be necessary. On memory
 constrained devices, it would still be more efficient to re-use large
 ArrayBuffers rather than close them and allocate new ones.
 
 
 That's often not possible, when the ArrayBuffer is returned to you from an
 API (eg. XHR2).
 
 This sounds like a good idea. As you pointed out offline, a key
 difference between Blobs and ArrayBuffers is that Blobs are always
 immutable. It isn't necessary to define Transferable semantics for
 Blobs in order to post them efficiently, but it was essential for
 ArrayBuffers.
 
 
 No new semantics need to be defined; the semantics of Transferable are
 defined by postMessage and are the same for all transferable objects.
 That's already done.  The only thing that needs to be defined is how to
 neuter an object, which is what Blob.close() has to define anyway.
 
 Using Transferable for Blob will allow Blobs, ArrayBuffers, and any future
 large, structured clonable objects to all be released with the same
 mechanisms: either pass them in the transfer argument to a postMessage
 call, or use the consistent, identical close() method inherited from
 Transferable.  This allows developers to think of the transfer list as a
 list of objects which won't be needed after the postMessage call.  It
 doesn't matter that the underlying optimizations are different; the visible
 side-effects are identical (the object can no longer be accessed).
 
 Closing an object, and neutering it because it was transferred to a
 different owner, are different concepts. It's already been
 demonstrated that Blobs, being read-only, do not need to be
 transferred in order to send them efficiently from one owner to
 another. It's also been demonstrated that Blobs can be resource
 intensive and that an explicit closing mechanism is needed.
 
 I believe that we should fix the immediate problem and add a close()
 method to Blob. I'm not in favor of adding a similar method to
 ArrayBuffer at this time and therefore not to Transferable. There is a
 high-level goal to keep the typed array specification as minimal as
 possible, and having Transferable support leak in to the public
 methods of the interfaces contradicts that goal.

I think there's broad enough consensus amongst vendors to table the discussion 
about adding close to Transferable.

Would you please let me know why you believe ArrayBuffer should not have a 
close method?

I would like some clarity here. The Typed Array spec would not be cluttered by 
the addition of a simple close method.

I work much more with ArrayBuffer than Blob. I suspect others will too as they 
progress with more advanced and resource intensive applications.

What is the use-case distinction between close of immutable blob and close of a 
mutable buffer?

-Charles


Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Greg Billock
On Tue, Mar 6, 2012 at 1:18 PM, Kenneth Russell k...@google.com wrote:
 On Tue, Mar 6, 2012 at 12:04 PM, Greg Billock gbill...@google.com wrote:
 On Mon, Mar 5, 2012 at 6:46 PM, Charles Pritchard ch...@jumis.com wrote:
 On 3/5/2012 5:56 PM, Glenn Maynard wrote:

 On Mon, Mar 5, 2012 at 7:04 PM, Charles Pritchard ch...@jumis.com wrote:

 Do you see old behavior working something like the following?


 var blob = new Blob(["my new big blob"]);
 var keepBlob = blob.slice(); destination.postMessage(blob, '*', [blob]);
 // is try/catch needed here?


 You don't need to do that.  If you don't want postMessage to transfer the
 blob, then simply don't include it in the transfer parameter, and it'll
 perform a normal structured clone.  postMessage behaves this way in part for
 backwards-compatibility: so exactly in cases like this, we can make Blob
 implement Transferable without breaking existing code.

 See http://dev.w3.org/html5/postmsg/#posting-messages and similar
 postMessage APIs.


 Web Intents won't have a transfer map argument.
 http://dvcs.w3.org/hg/web-intents/raw-file/tip/spec/Overview.html#widl-Intent-data

 For the Web Intents structured cloning algorithm, Web Intents would be
 inserting into step 3:
     If input is a Transferable object, add it to the transfer map.
 http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#internal-structured-cloning-algorithm

 Then Web Intents would move the first section of the structured cloning
 algorithm to follow the internal cloning algorithm section, swapping their
 order.
 http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#safe-passing-of-structured-data

 That's my understanding.

 We've been discussing the merits of this approach vs using a transfer
 array argument. There's a lot to like about this alternative -- it
 conserves arguments and looks simpler than the transfer map, as well
 as not having the headaches of whether you can do (null, [port]) or
 (port, [port]) and concerns like that.

 The advantage of using the transfer map param is that it is more
 consistent with existing practice. We'd kind of hoped that this
 particular debate was finalized before we got to the point of needing
 to make a decision, so we bluffed and left it out of the web intents
 spec draft. :-) At this point, I'm leaning toward needing to add a
 transfer map parameter, and then dealing with that alongside other
 uses, given the state of thinking on Transferables support and the
 need to make this pretty consistent across structure clone
 invocations.

 I do think that complexity might be better solved by the type system
 (i.e. a new Transferable(ArrayBuffer)), which would require a
 different developer mechanic to set up clone vs transfer, but would
 relieve complexity in the invocation of structured clone itself:
 transferables could just always transfer transparently. I don't know
 if, given current practice with MessagePort, that kind of solution is
 available.

 A change like this would be feasible as long as it doesn't break
 compatibility. In other words, the current Transferable array would
 still need to be supported, but Transferable instances (or perhaps
 instances of some other type) wrapping another Transferable object
 would also express the intent.

 The current API for Transferable and postMessage was informed by the
 realization that the previous sequence<MessagePort> argument to
 postMessage was essentially already expressing the Transferable
 concept.

 I'm not familiar with the Web Intents API, but at first glance it
 seems feasible to overload the constructor, postResult and postFailure
 methods to support passing a sequence<Transferable> as the last
 argument. This would make the API look more like postMessage and avoid
 adding more transfer semantics. Is that possible?

Yes. That's our current plan.
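
(For illustration only, a rough sketch of that overload; the Intent
constructor and startActivity shapes here follow the editor's draft only
approximately, and the trailing transfer array is the proposed addition:)

var buffer = new ArrayBuffer(4 * 1024 * 1024);
// Constructor overload: the last argument is the transfer list, mirroring
// postMessage, so 'buffer' is transferred rather than structure-cloned.
var intent = new Intent('http://webintents.org/share',
                        'application/octet-stream',
                        buffer,
                        [buffer]);
navigator.startActivity(intent, function (data) { /* handle the result */ });

// On the service side, postResult/postFailure would take the same trailing
// argument, e.g. window.intent.postResult(resultBuffer, [resultBuffer]);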

-Greg



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Kenneth Russell
On Wed, Mar 7, 2012 at 12:02 PM, Charles Pritchard ch...@jumis.com wrote:

 On Mar 7, 2012, at 11:38 AM, Kenneth Russell k...@google.com wrote:

 On Tue, Mar 6, 2012 at 6:29 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Mar 6, 2012 at 4:24 PM, Michael Nordman micha...@google.com wrote:

 You can always call close() yourself, but Blob.close() should use the
 neuter mechanism already there, not make up a new one.

 Blobs aren't transferable; there is no existing mechanism that applies
 to them. Adding a blob.close() method is independent of making blobs
 transferable; the former does not require the latter.


 There is an existing mechanism for closing objects.  It's called
 neutering.  Blob.close should use the same terminology, whether or not the
 object is a Transferable.

 On Tue, Mar 6, 2012 at 4:25 PM, Kenneth Russell k...@google.com wrote:

 I would be hesitant to impose a close() method on all future
 Transferable types.


 Why?  All Transferable types must define how to neuter objects; all close()
 does is trigger it.

 I don't think adding one to ArrayBuffer would be a
 bad idea but I think that ideally it wouldn't be necessary. On memory
 constrained devices, it would still be more efficient to re-use large
 ArrayBuffers rather than close them and allocate new ones.


 That's often not possible, when the ArrayBuffer is returned to you from an
 API (eg. XHR2).

 This sounds like a good idea. As you pointed out offline, a key
 difference between Blobs and ArrayBuffers is that Blobs are always
 immutable. It isn't necessary to define Transferable semantics for
 Blobs in order to post them efficiently, but it was essential for
 ArrayBuffers.


 No new semantics need to be defined; the semantics of Transferable are
 defined by postMessage and are the same for all transferable objects.
 That's already done.  The only thing that needs to be defined is how to
 neuter an object, which is what Blob.close() has to define anyway.

 Using Transferable for Blob will allow Blobs, ArrayBuffers, and any future
 large, structured clonable objects to all be released with the same
 mechanisms: either pass them in the transfer argument to a postMessage
 call, or use the consistent, identical close() method inherited from
 Transferable.  This allows developers to think of the transfer list as a
 list of objects which won't be needed after the postMessage call.  It
 doesn't matter that the underlying optimizations are different; the visible
 side-effects are identical (the object can no longer be accessed).

 Closing an object, and neutering it because it was transferred to a
 different owner, are different concepts. It's already been
 demonstrated that Blobs, being read-only, do not need to be
 transferred in order to send them efficiently from one owner to
 another. It's also been demonstrated that Blobs can be resource
 intensive and that an explicit closing mechanism is needed.

 I believe that we should fix the immediate problem and add a close()
 method to Blob. I'm not in favor of adding a similar method to
 ArrayBuffer at this time and therefore not to Transferable. There is a
 high-level goal to keep the typed array specification as minimal as
 possible, and having Transferable support leak in to the public
 methods of the interfaces contradicts that goal.

 I think there's broad enough consensus amongst vendors to table the 
 discussion about adding close to Transferable.

 Would you please let me know why you believe ArrayBuffer should not have a 
 close method?

 I would like some clarity here. The Typed Array spec would not be cluttered 
 by the addition of a simple close method.

It's certainly a matter of opinion -- but while it's only the addition
of one method, it changes typed arrays' semantics to be much closer to
manual memory allocation than they currently are. It would be a
further divergence in behavior from ordinary ECMAScript arrays.

The TC39 working group, I have heard, is incorporating typed arrays
into the language specification, and for this reason I believe extreme
care is warranted when adding more functionality to the typed array
spec. The spec can certainly move forward, but personally I'd like to
check with TC39 on semantic changes like this one. That's the
rationale behind my statement above about preferring not to add this
method at this time.

-Ken


 I work much more with ArrayBuffer than Blob. I suspect others will too as 
 they progress with more advanced and resource intensive applications.

 What is the use-case distinction between close of immutable blob and close of 
 a mutable buffer?

 -Charles



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On 3/7/12 12:34 PM, Kenneth Russell wrote:

On Wed, Mar 7, 2012 at 12:02 PM, Charles Pritchard ch...@jumis.com wrote:

On Mar 7, 2012, at 11:38 AM, Kenneth Russell k...@google.com wrote:


I believe that we should fix the immediate problem and add a close()
method to Blob. I'm not in favor of adding a similar method to
ArrayBuffer at this time and therefore not to Transferable. There is a
high-level goal to keep the typed array specification as minimal as
possible, and having Transferable support leak in to the public
methods of the interfaces contradicts that goal.

I think there's broad enough consensus amongst vendors to table the discussion 
about adding close to Transferable.

Would you please let me know why you believe ArrayBuffer should not have a 
close method?

I would like some clarity here. The Typed Array spec would not be cluttered by 
the addition of a simple close method.

It's certainly a matter of opinion -- but while it's only the addition
of one method, it changes typed arrays' semantics to be much closer to
manual memory allocation than they currently are. It would be a
further divergence in behavior from ordinary ECMAScript arrays.

The TC39 working group, I have heard, is incorporating typed arrays
into the language specification, and for this reason I believe extreme
care is warranted when adding more functionality to the typed array
spec. The spec can certainly move forward, but personally I'd like to
check with TC39 on semantic changes like this one. That's the
rationale behind my statement above about preferring not to add this
method at this time.


Searching through the net tells me that this has been a rumor for years.

I agree with taking extreme care -- so let's isolate one more bit of 
information:


Is ArrayBuffer being proposed for TC39 incorporation, or is it only the 
Typed Arrays? The idea here is to alter ArrayBuffer, an object which can 
be neutered via transfer map. It seems a waste to have to create a 
Worker to close down buffer views.


Will TC39 have anything to say about the neuter concept and/or Web 
Messaging?



Again, I'm bringing this up out of the same practical experience that 
prompted Blob.close(). I do appreciate that read/write allocation 
is a separate semantic from write-once/read-many allocation.


I certainly don't want to derail the introduction of Typed Array into 
TC39. I don't want to sit back for two years either, while the 
ArrayBuffer object is in limbo.


If necessary, I'll do some of the nasty test work of creating a worker 
simply to destroy buffers, and report back on it.

var worker = new Worker('trash.js');
worker.postMessage(null, [bufferToClose]); // transferring neuters bufferToClose here
worker.terminate();
vs.
bufferToClose.close();



-Charles



Re: Transferable and structured clones, was: Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Kenneth Russell
On Wed, Mar 7, 2012 at 1:00 PM, Charles Pritchard ch...@jumis.com wrote:
 On 3/7/12 12:34 PM, Kenneth Russell wrote:

 On Wed, Mar 7, 2012 at 12:02 PM, Charles Pritchard ch...@jumis.com
  wrote:

 On Mar 7, 2012, at 11:38 AM, Kenneth Russell k...@google.com wrote:

 I believe that we should fix the immediate problem and add a close()
 method to Blob. I'm not in favor of adding a similar method to
 ArrayBuffer at this time and therefore not to Transferable. There is a
 high-level goal to keep the typed array specification as minimal as
 possible, and having Transferable support leak in to the public
 methods of the interfaces contradicts that goal.

 I think there's broad enough consensus amongst vendors to table the
 discussion about adding close to Transferable.

 Would you please let me know why you believe ArrayBuffer should not have
 a close method?

 I would like some clarity here. The Typed Array spec would not be
 cluttered by the addition of a simple close method.

 It's certainly a matter of opinion -- but while it's only the addition
 of one method, it changes typed arrays' semantics to be much closer to
 manual memory allocation than they currently are. It would be a
 further divergence in behavior from ordinary ECMAScript arrays.

 The TC39 working group, I have heard, is incorporating typed arrays
 into the language specification, and for this reason I believe extreme
 care is warranted when adding more functionality to the typed array
 spec. The spec can certainly move forward, but personally I'd like to
 check with TC39 on semantic changes like this one. That's the
 rationale behind my statement above about preferring not to add this
 method at this time.


 Searching through the net tells me that this has been a rumor for years.

Regardless of rumors, I have talked to multiple members of TC39 who
have clearly stated that it is being incorporated into ES6 Harmony.

 I agree with taking extreme care -- so let's isolate one more bit of
 information:

 Is ArrayBuffer being proposed for TC39 incorporation, or is it only the
 Typed Arrays? The idea here is to alter ArrayBuffer, an object which can be
 neutered via transfer map. It seems a waste to have to create a Worker to
 close down buffer views.

Both ArrayBuffer and the typed array views will be incorporated.

 Will TC39 have anything to say about the neuter concept and/or Web
 Messaging?

This is an excellent question and one which I've also posed to TC39. I
don't see how the language spec could reference these concepts. I'm
guessing that this is an area that TC39 hasn't yet figured out,
either.

 Again, I'm bringing this up out of the same practical experience that
 prompted Blob.close(). I do appreciate that read/write allocation is a
 separate semantic from write-once/read-many allocation.

 I certainly don't want to derail the introduction of Typed Array into TC39.
 I don't want to sit back for two years either, while the ArrayBuffer object
 is in limbo.

Understood and appreciated.

 If necessary, I'll do some of the nasty test work of creating a worker
 simply to destroy buffers, and report back on it.
 var worker = new Worker('trash.js');
 worker.postMessage(null, [bufferToClose]); // transferring neuters bufferToClose here
 worker.terminate();
 vs.
 bufferToClose.close();

I doubt that that will work. Garbage collection will still need to run
in the worker's JavaScript context in order for the transferred
ArrayBuffer to be cleaned up, and I doubt that happens eagerly upon
shutdown of the worker. Would be happy to be proven wrong.

If you prototype adding ArrayBuffer.close() in your open source
browser of choice and report back on significant efficiency
improvements in a real-world use case, that would be valuable
feedback.

-Ken



Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Eric U
On Tue, Mar 6, 2012 at 5:12 PM, Feras Moussa fer...@microsoft.com wrote:
 From: Arun Ranganathan [mailto:aranganat...@mozilla.com]
 Sent: Tuesday, March 06, 2012 1:27 PM
 To: Feras Moussa
 Cc: Adrian Bateman; public-webapps@w3.org; Ian Hickson; Anne van Kesteren
 Subject: Re: [FileAPI] Deterministic release of Blob proposal

 Feras,

 In practice, I think this is important enough and manageable enough to 
 include in the spec., and I'm willing to slow the train down if necessary, 
 but I'd like to understand a few things first.  Below:
 
  At TPAC we discussed the ability to deterministically close blobs with a 
  few
  others.
   
  As we’ve discussed in the createObjectURL thread[1], a Blob may represent
  an expensive resource (eg. expensive in terms of memory, battery, or disk
  space). At present there is no way for an application to deterministically
  release the resource backing the Blob. Instead, an application must rely on
  the resource being cleaned up through a non-deterministic garbage collector
  once all references have been released. We have found that not having a way
  to deterministically release the resource causes a performance impact for a
  certain class of applications, and is especially important for mobile 
  applications
  or devices with more limited resources.
 
  In particular, we’ve seen this become a problem for media intensive 
  applications
  which interact with a large number of expensive blobs. For example, a 
  gallery
  application may want to cycle through displaying many large images 
  downloaded
  through websockets, and without a deterministic way to immediately release
  the reference to each image Blob, can easily begin to consume vast amounts 
  of
  resources before the garbage collector is executed.
   
  To address this issue, we propose that a close method be added to the Blob
  interface.
  When called, the close method should release the underlying resource of the
  Blob, and future operations on the Blob will return a new error, a 
  ClosedError.
  This allows an application to signal when it's finished using the Blob.
 

 Do you agree that Transferable
 (http://dev.w3.org/html5/spec/Overview.html#transferable-objects) seems to 
 be what
 we're looking for, and that Blob should implement Transferable?

 Transferable addresses the use case of copying across threads, and neuters 
 the source
 object (though honestly, the word neuter makes me wince -- naming is a 
 problem on the
 web).  We can have a more generic method on Transferable that serves our 
 purpose here,
 rather than *.close(), and Blob can avail of that.  This is something we can 
 work out with HTML,
 and might be the right thing to do for the platform (although this creates 
 something to think
 about for MessagePort and for ArrayBuffer, which also implement 
 Transferable).

 I agree with your changes, but am confused by some edge cases:
 To support this change, the following changes in the File API spec are 
 needed:
 
 * In section 6 (The Blob Interface)
  - Addition of a close method. When called, the close method releases the
 underlying resource of the Blob. Close renders the blob invalid, and further
 operations such as URL.createObjectURL or the FileReader read methods on
 the closed blob will fail and return a ClosedError.  If there are any 
 non-revoked
 URLs to the Blob, these URLs will continue to resolve until they have been
 revoked.
  - For the slice method, state that the returned Blob is a new Blob with
 its own lifetime semantics – calling close on the new Blob is independent
 of calling close on the original Blob.

 * In section 8 (The FileReader Interface)
 - State that the FileReader reads directly over the given Blob, and not a
 copy with an independent lifetime.

 * In section 10 (Errors and Exceptions)
 - Addition of a ClosedError. If the File or Blob has had the close method 
 called,
 then for asynchronous read methods the error attribute MUST return a
 “ClosedError” DOMError and synchronous read methods MUST throw a
 ClosedError exception.

 * In section 11.8 (Creating and Revoking a Blob URI)
 - For createObjectURL – If this method is called with a closed Blob 
 argument,
 then user agents must throw a ClosedError exception.

 Similarly to how slice() clones the initial Blob to return one with its own
 independent lifetime, the same notion will be needed in other APIs which
 conceptually clone the data – namely FormData, any place the Structured 
 Clone
 Algorithm is used, and BlobBuilder.
 Similarly to how FileReader must act directly on the Blob’s data, the same 
 notion
 will be needed in other APIs which must act on the data - namely XHR.send 
 and
 WebSocket. These APIs will need to throw an error if called on a Blob that 
 was
 closed and the resources are released.
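
 (A minimal sketch of the proposed behavior, using the ClosedError name and
 the slice() lifetime rules described above:)

 var blob = new Blob(['some large payload']);
 var part = blob.slice(0, 4);          // independent lifetime from 'blob'
 var url = URL.createObjectURL(blob);  // minted before close, keeps resolving
 blob.close();                         // releases the underlying resource

 new FileReader().readAsText(part);    // fine: 'part' was not closed
 new FileReader().readAsText(blob);    // proposal: fails with a ClosedError
 URL.createObjectURL(blob);            // proposal: throws a ClosedError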

 So Blob.slice() already presumes a new Blob, but I can certainly make this 
 clearer.
 And I agree with the changes above, including the addition of 

RE: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Feras Moussa
 Then let's try this again.

 var a = new Image();
 a.onerror = function() { console.log("Oh no, my parent was neutered!"); }; 
 a.src = URL.createObjectURL(blob); blob.close();

 Is that error going to hit?
I documented this in my proposal, but in this case the URI would have 
been minted prior to calling close. The Blob URI would still resolve 
until it has been revoked, so in your example onerror would not be hit 
due to calling close.

 var a = new Worker('#');
 a.postMessage(blob);
 blob.close();

 Is that blob going to make it to the worker?
The structured clone algorithm (SCA) runs synchronously (so that 
subsequent changes to mutable values in the object don't impact the 
message), so the blob will have been cloned prior to close. 
The above would work as expected.


RE: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Feras Moussa
 -Original Message-
 From: Anne van Kesteren [mailto:ann...@opera.com] 
 Sent: Wednesday, March 07, 2012 12:49 AM
 To: Arun Ranganathan; Feras Moussa
 Cc: Adrian Bateman; public-webapps@w3.org; Ian Hickson
 Subject: Re: [FileAPI] Deterministic release of Blob proposal

 On Wed, 07 Mar 2012 02:12:39 +0100, Feras Moussa fer...@microsoft.com
 wrote:
  xhr.send(blob);
  blob.close(); // method name TBD
 
  In our implementation, this case would fail. We think this is 
  reasonable because the need for having a close() method is to allow 
  deterministic release of the resource.

 Reasonable or not, "would fail" is not something we can put in a standard.  
 What happens exactly? What if a connection is established and data is being 
 transmitted already?
In the case where close was called on a Blob that is being used in a 
pending request, then the request should be canceled. The expected 
result is the same as if abort() was called.
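
(A minimal sketch of that behavior under this proposal; 'blob' is assumed
to exist already:)

var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload');
xhr.send(blob);
blob.close(); // proposal: the pending upload is canceled, with the same
              // observable result as calling xhr.abort()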


Re: [FileAPI] Deterministic release of Blob proposal

2012-03-07 Thread Charles Pritchard

On 3/7/12 3:56 PM, Feras Moussa wrote:

Then let's try this again.

var a = new Image();
a.onerror = function() { console.log("Oh no, my parent was neutered!"); }; 
a.src = URL.createObjectURL(blob); blob.close();

Is that error going to hit?

until it has been revoked, so in your example onerror would not be hit
due to calling close.

var a = new Worker('#');
a.postMessage(blob);
blob.close();

The above would work as expected.


Well that all makes sense; so speaking for myself, I'm still confused 
about this one thing:



 xhr.send(blob);
 blob.close(); // method name TBD



 In our implementation, this case would fail. We think this is reasonable 
because the



So you want this to be a situation where we monitor progress events of 
XHR before releasing the blob?
It seems feasible to monitor the upload progress, but it is a little 
awkward.
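
(A sketch of that monitoring approach, assuming the proposed close() method;
'blob' is assumed to exist already:)

var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload');
xhr.upload.onloadend = function() {
  blob.close(); // release only once the upload has finished or failed
};
xhr.send(blob);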


-Charles