On 22/01/2008, at 8:59 PM, Anne van Kesteren wrote:

On Tue, 22 Jan 2008 04:56:52 +0100, Mark Nottingham <[EMAIL PROTECTED]> wrote:
1) The Method-Check protocol has potential for bad interactions with the HTTP caching model.

Consider clients C and C', both using a caching intermediary I and accessing resources on an origin server S. If C does not support this extension, it will send a normal GET request for a resource on S, whose response may be cached by I. S may choose not to send an Access-Control header in that response, since it wasn't "asked for." If C' does support this extension, it will be served the original response (intended for C) from I, even though it appended the Method-Check header to its request, and it will be led to believe that the resource on S doesn't support cross-site requests.
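To make the failure concrete, here's a sketch of the exchange; the hostname, path, freshness lifetime and Method-Check value below are illustrative only, not taken from the spec:

    C -> S (via I), extension unsupported:

        GET /resource HTTP/1.1
        Host: s.example.com

    S -> C (via I); I caches the response:

        HTTP/1.1 200 OK
        Cache-Control: max-age=3600
        Content-Type: application/xml

        (no Access-Control header, since none was "asked for")

    C' -> I, extension-aware:

        GET /resource HTTP/1.1
        Host: s.example.com
        Method-Check: DELETE

I answers C' with the cached 200 above, which still carries no Access-Control header, so C' wrongly concludes that S forbids cross-site access.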

Three different solutions come to mind immediately:
a) require all responses to carry an Access-Control directive, whether or not the request contained a Method-Check header; or
b) require all responses to carry a Vary: Method-Check header, whether or not the request contained a Method-Check header; or
c) remove the Method-Check request header from the protocol, and require an Access-Control directive in all GET responses.
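Under (c), for example, every GET response would carry the directive up front, so a copy cached before any extension-aware client arrives stays safe to reuse. Header values below are hypothetical, and the allow <...> form is just my reading of the ED's ruleset syntax:

    HTTP/1.1 200 OK
    Cache-Control: max-age=3600
    Access-Control: allow <requester.example.com>
    Content-Type: application/xml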

My preference would be (c), because...

I don't understand this comment. The authorization request is the only request that uses the Method-Check HTTP header, and that request uses the OPTIONS HTTP method.

Ah, I missed this change in the ED (but now do remember people talking about it). Good, that obviates a lot of the issues.
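For the archives, here's roughly what the authorization exchange looks like with that change; the method being checked and the hostnames are made up for illustration:

    OPTIONS /resource HTTP/1.1
    Host: s.example.com
    Method-Check: DELETE

    HTTP/1.1 200 OK
    Access-Control: allow <requester.example.com>

Since only OPTIONS requests carry Method-Check, ordinary GET responses and their cached copies are no longer in the picture.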


[...] Separate from the server-side vs. client-side policy enforcement issue (which I'm not bringing up here explicitly, since it's an open issue AFAICT, although the WG doesn't link to its issues list from its home page), the Working Group needs to motivate the decision to have access control policy apply only on a per-resource basis, rather than per resource tree or site-wide.

It's not an open issue.

Let's have one, then. The W3C has already solved the problem of site-wide metadata once, and there should be *some* reason for taking a different path this time.


Overall, this approach doesn't seem well-integrated into the Web, or even friendly to it; it's more of a hack, which is puzzling, since it requires clients to change anyway.

I don't really understand this. Changing clients is cheap compared to changing all the servers out there.

Spoken like a true browser vendor. The thing is, it's not necessary to change all of the servers; anyone who's sufficiently motivated to publish cross-site data can get their server updated or modified, or can move to a new one, easily enough. OTOH, they have *no* power to update their users' browsers (unless they're in an especially iron-fisted enterprise IT environment, and even then...).


6) As far as I can tell, this mechanism only allows access control at the granularity of an entire referring site; e.g., if I allow example.com to access a particular resource, *any* reference from example.com is allowed to access it.
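I.e., if the access item grants a host (host hypothetical; syntax per my reading of the ED), every page on that host is admitted, regardless of who authored it:

    Access-Control: allow <example.com>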

If that's the case, this limitation should be explicitly mentioned, and the spec should highlight the security implications of allowing multi-user hosts (e.g., HTML mail sites, picture sharing sites, social networking sites, "mashup" sites) to refer to your data.

Also, section 4.1 contains "http://example.org/example" as a sample access item; at best this is misleading, and it doesn't appear to be allowed by the syntax either.

That's all for now,

Multi-user hosts already need filtering; otherwise one user could simply load a page from the same domain with a different path in an <iframe> or something and make the request from there. The security model of the Web is based around domains, however unfortunate or fortunate that may be.

Yes; it's still worth pointing this out for the uninitiated.
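For those who haven't internalized it: a page at a "disallowed" path simply embeds a page at an allowed path and, because same-domain frames can script each other, drives the request from inside the frame. URLs hypothetical; the embedded path is borrowed from the spec's section 4.1 sample:

    <!-- attacker-controlled page, same host, different path -->
    <iframe src="http://example.org/example"></iframe>
    <!-- same-domain scripting lets the parent now drive the frame -->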


--
Mark Nottingham       [EMAIL PROTECTED]


