Hi Ian,

Ian Hickson wrote:
> On Mon, 31 Dec 2007, Close, Tyler J. wrote:
> >
> > 1. Browser detects a cross-domain request
> > 2. Browser sends GET request to /1234567890/Referer-Root
> > 3. If server responds with a 200:
> >     - let through the cross-domain request, but include a Referer-Root
> >       header. The value of the Referer-Root header is the relative URL /,
> >       resolved against the URL of the page that produced the request.
> >       HTTP caching mechanisms should be used on the GET request of step 2.
> > 4. If the server does not respond with a 200, reject the cross-domain
> >    request.
>
> This is a very dangerous design. It requires authors to be able to
> guarantee that every resource across their entire server is capable of
> handling cross-domain requests safely. Security features with the
> potential damage of cross-site attacks need to default to a safe state
> on a per-resource basis, IMHO.

Sure, but the question is: "Whose responsibility is it?" In my opinion, it is 
the server's responsibility to ensure a safe default for each resource. You 
seem to have the perspective that it's the client's responsibility.

With the "OPTIONS *" request, or "GET /special/URL" request, I am just trying 
to establish that the client and server know what the other is saying. We can 
then leave each to protect its own interests. This division of labour requires 
the least amount of coordination between client and server and puts the 
enforcement task with the party who most wants a policy enforced. In my 
opinion, these are the two high-order bits when judging a solution to this 
problem.
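
To make the handshake concrete, here is a rough sketch of the client side in 
Python, using the "OPTIONS *" variant. The probe logic, the success check, 
and the function names are illustrative only; the real signal would be 
whatever the spec pins down:

    import http.client
    from urllib.parse import urljoin

    def server_understands_referer_root(host):
        # Illustrative probe: ask the server, via "OPTIONS *", whether it
        # understands the Referer-Root header. A 200 alone is too weak a
        # signal (see below); a real design would check for an unmistakable
        # token in the response.
        conn = http.client.HTTPConnection(host)
        conn.request("OPTIONS", "*")
        resp = conn.getresponse()
        resp.read()
        conn.close()
        return resp.status == 200

    def send_cross_domain_request(host, path, requesting_page_url):
        # Only let the request through once the server has opted in, and
        # label it with the root of the page that produced it.
        if not server_understands_referer_root(host):
            raise PermissionError("server did not opt in; request rejected")
        conn = http.client.HTTPConnection(host)
        conn.request("GET", path, headers={
            # The relative URL "/" resolved against the requesting page's
            # URL, as in the proposal quoted above.
            "Referer-Root": urljoin(requesting_page_url, "/"),
        })
        return conn.getresponse()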

By giving the thumbs up to the client's query about the Referer-Root header, 
the server is saying: "Yes, I've got a filter in place to control access to 
individual resources. Go ahead and send the requests through, but label them 
with their originator information. I've got it covered from there." I imagine 
the sysadmin for the server would set up this filtering and then provide some 
guidelines to web page authors on how to activate cross-domain requests. These 
guidelines may very well look much like the ones this WG has designed. The 
difference is that they are an agreement between web page authors and their 
own server, not between web page authors and all client software for the Web. 
The former means Web content publishers can develop their own policy and 
enforcement mechanisms, and have confidence in how they are being used, 
regardless of which client software any user may have.
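
As a sketch of the kind of filter I have in mind, here is a small WSGI 
middleware in Python. The policy table and the default-deny rule are 
hypothetical stand-ins for whatever guidelines a sysadmin would actually 
publish:

    # Hypothetical per-resource policy: path prefix -> origin roots allowed
    # to reach it. "*" means any origin may send cross-domain requests.
    ALLOWED = {
        "/public/": {"*"},
        "/api/feed": {"http://partner.example.com/"},
    }

    def _permitted(path, origin):
        for prefix, origins in ALLOWED.items():
            if path.startswith(prefix):
                return "*" in origins or origin in origins
        # Default deny: resources not listed reject all cross-domain use.
        return False

    def referer_root_filter(app):
        # Wrap an existing WSGI app; requests carrying a Referer-Root
        # header are checked against the policy before they reach it.
        def filtered(environ, start_response):
            origin = environ.get("HTTP_REFERER_ROOT")
            if origin is not None and not _permitted(environ["PATH_INFO"], origin):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"cross-domain access denied for this resource\n"]
            return app(environ, start_response)
        return filtered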

We can twiddle with the details of how client and server establish that both 
understand the meaning of the new Referer-Root header. I've offered 2 
possibilities: one that fits web-arch better, and one that fits the WG's 
self-imposed rule of no modifications to server software. I only offered these 
possibilities to show that the problem can be solved in this way. If the WG 
really needs me to design the exact details of this handshake, I'll do so, but 
I imagine WG members are equally capable of this task. I suspect this handshake 
can also be designed so as not to require modification of the server's 
software. I don't think this is a high priority, but I understand that this WG 
does. For example,

> Furthermore, relying on a 200 OK response would immediately expose all 
> the sites that return 200 OK for all URIs to cross-site-scripting attacks.

Fine, so have the server respond with something unmistakable. In the extreme, 
the server could be required to respond with a text entity containing a large 
random number whose value is specified in the rec.
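
A sketch of what that check might look like, with a made-up token standing in 
for the rec-specified value; the point is that a server which blindly returns 
200 for every URI would still fail, since it would not echo the token:

    # Hypothetical constant a rec could fix; any value works so long as it
    # cannot plausibly appear in an accidental 200 response.
    HANDSHAKE_TOKEN = "referer-root-ok-8c6b2f0e4a915d37"

    def handshake_ok(status, body):
        # Require both the status and the unmistakable body.
        return status == 200 and body.strip() == HANDSHAKE_TOKEN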

> (There is also the problem that any fixed URI has -- /robots.txt, 
> /w3c/p3p.xml, etc. are all considered very bad design from a URI point of 
> view, as they require an entire domain to always be under the control of 
> the same person, whereas here we might well have cases where partitions of 
> a domain are under the control of particular users, with the different 
> partitions having different policies.)

In which case it is only necessary that whoever has control of the special URL 
has coordinated with the other users of the host. These are all arrangements 
that can be made server side, without exposing the details to clients.

Again, I think the "OPTIONS *" request is the more acceptable design, since it 
doesn't use a well-known URL, but I also don't think this issue is that big a 
deal.

> Furthermore, there is a desire for a design that can be applied to purely 
> static data where the user has no server-side control whatsoever. With 
> your proposal, even publishing a single text file or XML file with some 
> data would require scripting, which seems like a large onus to put on 
> authors who are quite likely inexperienced in security matters.

Again, this is server-side setup. Particular servers may well choose to deploy 
technology much like what this WG has created. We just don't have to say that 
everyone has to do it that way. We don't need that broad an agreement. These 
technology choices can be confined to the server side. We only need a way for 
client and server to signal the presence of such a mechanism, in particular, 
declaring that each understands the meaning of the Referer-Root header.

--Tyler
