Per ACTION-154, I'm supposed to elaborate on possible proposals for a "server-side" enforcement point. Much of what is in this message is based on earlier material from Mark Nottingham, Tyler Close, and Doug Crockford.

(This is to get some more clarity on ISSUE-20, Client and Server
model.)

There are two basic cases that we are concerned with: GET and not
GET.  In the case of GET, the goal is to control the information
flow from the data source to a Web application running from another
origin.  In the case of other requests, the goal is to put controls
on the control flow from the web app to a server-side application
running from another origin, and on the information flow back.


For GET, we're assuming that whatever other technology is "hosting"
the "access-control" mechanism imposes a same-origin-like
restriction on the data flow, and we assume that this restriction is
part of the design assumptions that existing Web applications make.
(In fact, this restriction is a critical part of current defense
techniques against XSRF.)

For non-GET methods, we're assuming that whatever other technology
is "hosting" the "access-control" mechanism imposses a same-origin
like restriction on applications' ability to send non-GET requests
over the network.  We assume that it's worthwhile to protect
server-side applications against unexpected cross-origin requests of
this kind.

In other words, if a server doesn't know about new cross-origin
authorization mechanisms, then its environment shouldn't be changed
by whatever mechanism we propose.

Here are some design sketches:

- Discover whether the server knows of cross-site request
 authorization mechanisms, through...

 * OPTIONS, [3] or
 * a metadata file at a well-known location (P3P-like)

 If the server is found to support the mechanism, use GET and/or
 POST with Referer-Root for cross-site requests, and let the server
 figure out whether to serve data, as Mark had sketched in [1]. For
 this scheme to work properly with HTTP caches, the server must set
 an appropriate Vary header on responses to requests that can be
 cached (GET), and the cache must know how to deal with it.

 In this model, the policy is never shared with the client, and
 remains a local affair on the server.

 The model does require the server to have a local convention for
 policy authoring and an engine to interpret these policies.

 Using metadata stored in a well-known location will reduce the
 per-request overhead.
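
 To make this concrete, here is a minimal sketch (Python, standard
 library only) of what such a server-local policy engine might look
 like.  Referer-Root is the header from Mark's sketch above; the
 policy table and everything else are illustrative assumptions, not
 part of any spec:

   # Sketch: server-side enforcement keyed on the Referer-Root
   # request header; the policy itself never leaves the server.
   from wsgiref.simple_server import make_server

   ALLOWED_ORIGINS = {"https://example.org"}  # local policy convention

   def app(environ, start_response):
       origin = environ.get("HTTP_REFERER_ROOT")  # absent: same-origin
       # Responses depend on Referer-Root, so caches must vary on it.
       headers = [("Content-Type", "text/plain"),
                  ("Vary", "Referer-Root")]
       if origin is None or origin in ALLOWED_ORIGINS:
           start_response("200 OK", headers)
           return [b"the data\n"]
       start_response("403 Forbidden", headers)
       return [b""]

   make_server("", 8000, app).serve_forever()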

- Design cross-site requests so legacy servers won't do anything
 interesting when they are hit with them.  Whatever information is
 required by the target host is then sent along with the cross-site
 requests.  Possibilities include:

 * use a strange content type for POST and for responses, and don't
   include any "ambient" authentication information; JSONRequest
   takes this approach; [2]
 * use new HTTP methods (CSGET, CSPOST, ...)

 For the server side, the same requirements as above apply (a local
 convention for policy authoring, and an engine to interpret it).
 In this model, no policy is shared with the client, and there is
 no overhead in terms of discovering what the server is capable of.
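
 To illustrate the server side of this model, a participating
 handler can be as blunt as refusing anything that doesn't carry
 the distinguished content type.  application/jsonrequest is the
 type JSONRequest uses [2]; the handler itself is an assumption:

   # Sketch: accept only requests marked with the "strange" content
   # type; no legacy mechanism sends cross-site requests with it,
   # and ambient credentials (cookies, HTTP auth) are simply ignored.
   def app(environ, start_response):
       if environ.get("CONTENT_TYPE") != "application/jsonrequest":
           start_response("400 Bad Request",
                          [("Content-Type", "text/plain")])
           return [b"expected application/jsonrequest\n"]
       start_response("200 OK",
                      [("Content-Type", "application/jsonrequest")])
       return [b'{"ok": true}\n']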

- Explicitly ask the server for authorization.  Tyler proposed a
 model like this in [4], using a design pattern based on a
 well-known location.  Using OPTIONS with a Referer-Root header is
 another possibility to the same end.

 Once more, the policy doesn't need to be shared with the client,
 and the complexity is isolated to the server side.
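
 The client half of such an exchange might look like the sketch
 below; the convention that a 2xx answer to OPTIONS with a
 Referer-Root header means "go ahead" is my assumption, not
 Tyler's text:

   # Sketch: explicitly ask the server for authorization before
   # sending the actual cross-site request.
   import http.client

   def cross_site_get(host, path, origin):
       conn = http.client.HTTPSConnection(host)
       conn.request("OPTIONS", path, headers={"Referer-Root": origin})
       resp = conn.getresponse()
       resp.read()  # drain, so the connection can be reused
       if resp.status // 100 != 2:
           conn.close()
           raise PermissionError("server did not authorize " + origin)
       conn.request("GET", path, headers={"Referer-Root": origin})
       return conn.getresponse().read()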


One point in common to almost all of these models is that some
rudimentary enforcement happens on the client side: the client
learns about the server's abilities or decisions, and will then
either stick to its old same-origin policy, or not.

In the "use new HTTP methods" model, that enforcement is replaced by
the client sending a distinct kind of requests.


The real distinction between these models and the one that is in
the current spec (and the decision that this group needs to make
and document!) is where the policy is *evaluated*.  Either that
happens on the client, in which case there needs to be an agreed
policy specification (which is what this document started out
being); or it happens on the server, in which case policy
authoring is a purely server-local affair.


In this context, it's worth noting (as Hixie pointed out, e.g., in
[5]) that it is possible to deploy the currently spec'ed technique
in a way that mostly imitates the "server-side" model: just send
"allow *" (and appropriate Vary headers), and leave the rest to the
server.


I'd suggest that, as we go forward with this issue, people start
elaborating on the benefits (and downsides) of the various models,
compared to what's currently in the spec, if possible in terms of
the use cases and requirements that we have now.

Also, if you think there are additional use cases and requirements
that are missing, it's probably worth calling these out explicitly.


1. http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0118.html
2. http://www.json.org/JSONRequest.html
3. http://www.w3.org/mid/[EMAIL PROTECTED]
4. http://www.w3.org/mid/[EMAIL PROTECTED]
5. http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0186.html

--
Thomas Roessler, W3C   <[EMAIL PROTECTED]>