FYI, below is the start of a thread by Tyler in the rest-discuss YahooGroup regarding the AC4CSR spec. The message is also available via:

 <http://tech.groups.yahoo.com/group/rest-discuss/message/10330>

Regards, Art Barstow
---


Begin forwarded message:

From: [EMAIL PROTECTED] [mailto:rest- [EMAIL PROTECTED] On Behalf Of Tyler Close
Sent: Monday, January 28, 2008 12:21 PM
To: Rest List
Subject: [rest-discuss] W3C Working Group on Cross-Domain requests needs your feedback

Hi all,

The Web Applications Format Working Group is currently finishing up
design work on a specification for Cross-Domain requests in the
browser that could have a significant impact on the design of future
web applications. I believe the WG's current proposal puts REST
designs at a significant disadvantage. The Working Group will be
taking up this issue in its weekly teleconference this Wednesday and I
hope feedback from the members of this mailing list might help
communicate the importance of this issue for web developers.

Currently, the Same Origin Policy limits the ways in which a web page
served by one site, Site A, can communicate with resources hosted on a
separate site, Site B. The WG aims to provide a way for Site B and the
browser to agree on a loosening of these restrictions. For example,
given the consent of Site B, the browser would allow a page from Site
A to receive the response to a GET request on a resource hosted by
Site B. Similarly, the page from Site A would be allowed to send an
arbitrary POST to a resource hosted by Site B and receive the
response.

Essentially, the WG's current proposal is for the server to express a
cross-domain request policy (XDRP) for each hosted resource. Before
sending a cross-domain, non-GET request to a particular resource, the
web browser must query that resource's XDRP and enforce any expressed
constraints. For more detail, please refer to the WG's current
draft proposal <http://dev.w3.org/2006/waf/access-control/>.
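Concretely, the preflight step looks something like the exchange
below. (The exact header names varied across drafts of the spec and
later evolved into CORS, so treat the spellings here as illustrative
assumptions, not what the draft mandates.)

```http
Client (a page from Site A wants to PUT to Site B):

  OPTIONS /resource HTTP/1.1          <- extra round trip to fetch the XDRP
  Host: site-b.example
  Access-Control-Request-Method: PUT

Server (Site B expresses its policy for this resource):

  HTTP/1.1 200 OK
  Access-Control-Allow-Origin: http://site-a.example
  Access-Control-Allow-Methods: PUT

Only after this exchange may the browser send the actual PUT.
```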

One of the problems with this design is that there is a mandatory
network round-trip to fetch the XDRP for a resource before the browser
can begin communicating with that resource. In effect, there is a
penalty of one extra network round-trip for each distinct URI a web
application uses. For example, consider the effect of this design on a
mashup that uses the Atom Publishing Protocol to communicate with
another site. When creating
a new member resource, the server sends the client a distinct URI
identifying the resource. In essence, the server is saying: "Send your
updates here". But before sending a PUT request to the specified URL,
the mashup application must first send a request for the XDRP, in
effect asking permission to use the resource it was just told to use.
The same happens for each distinct URL the client uses. The result is
a tragically comical network protocol in which the client repeatedly
asks permission to do what the server just told the client it could
do.
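To make the cost concrete, here is a small Python sketch (my own
illustration, not code from the spec) that counts round trips for a
client following the per-resource preflight rule over an Atom-style
session:

```python
# Illustrative model of the per-resource preflight rule: before the first
# non-GET request to each distinct URI, the browser must fetch that
# resource's cross-domain request policy (one extra round trip per URI).

def round_trips_per_resource(requests):
    """requests: list of (method, uri) pairs the application sends."""
    preflighted = set()   # URIs whose policy the browser already fetched
    trips = 0
    for method, uri in requests:
        if method != "GET" and uri not in preflighted:
            trips += 1    # the mandatory policy-fetch round trip
            preflighted.add(uri)
        trips += 1        # the actual request
    return trips

# An Atom-style session: create three entries, then update each one at
# the distinct member URI the server handed back.
session = [
    ("POST", "/collection"),
    ("PUT", "/collection/entry1"),
    ("PUT", "/collection/entry2"),
    ("PUT", "/collection/entry3"),
]

print(round_trips_per_resource(session))   # 8: every distinct URI pays the toll
```

Four application-level requests cost eight round trips, because each
distinct URI triggers its own permission check.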

I've pointed out this problem to the WG and the response from one of
the main contributors to the specification was:

Ian Hickson wrote:
> I do not believe that we should change the API to optimise for the one
> case of an API that involves a lot of non-GET requests to unique
> resources. It is trivial to optimise the server's protocol in
> this case
> anyway (just use one URI and put the parameters in the body), and this
> more closely matches what most people will be doing anyway.

See: <http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0297.html>

Members of this mailing list may see a similarity between this
proposed design advice and the long-ago discussions about the design
of SOAP where requests are all sent to a single URL and the actual
target resource is identified by arguments in the message body.

I also pointed out that this design advice discourages use of URIs,
whereas webarch encourages use of URIs. The reply was *only*:

Ian Hickson wrote:
> I disagree with much of Web Arch.

See: <http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0299.html>

Going forward, it seems likely that an important measure of the design
quality of a web application will be how well it works in a mashup.
Many designs that a REST proponent may favour involve heavy use of
URIs. Under the WG's current proposal, these designs will be penalized
with a network round-trip for each distinct URI. The proposal's main
designer dismisses this design style. The WG's editor sees this issue
as only a "quibble":

See: <http://lists.w3.org/Archives/Public/public-appformats/2008Jan/0306.html>

Mark Nottingham and I have proposed an alternate design for
cross-domain requests that does not have these performance problems
and negative design pressures. Essentially, the alternate proposal
asks the host, rather than each individual resource, whether it is
willing to accept cross-domain requests. If so, these requests are
labeled as such, just as the WG currently proposes, and the task of
enforcing any per-resource policy is managed solely by the host. Under such a
design, there is one network round-trip before requests begin to flow,
but it is one round-trip per host, not per resource. This way there is
no penalty for heavy use of URIs. To date, the WG has rejected
pursuing such a design.
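Assuming the same toy model as above, the per-host variant amortizes
the policy fetch across every resource on the host (again, an
illustrative sketch of the idea, not code from either proposal):

```python
from urllib.parse import urlsplit

# Illustrative model of the per-host alternative: the browser asks each
# *host* once whether it accepts labeled cross-domain requests;
# individual resources never trigger their own policy-fetch round trip.

def round_trips_per_host(requests):
    """requests: list of (method, url) pairs the application sends."""
    asked_hosts = set()   # hosts that already answered the one-time question
    trips = 0
    for method, url in requests:
        host = urlsplit(url).netloc
        if method != "GET" and host not in asked_hosts:
            trips += 1    # one policy round trip per host
            asked_hosts.add(host)
        trips += 1        # the actual request
    return trips

session = [
    ("POST", "http://site-b.example/collection"),
    ("PUT",  "http://site-b.example/collection/entry1"),
    ("PUT",  "http://site-b.example/collection/entry2"),
    ("PUT",  "http://site-b.example/collection/entry3"),
]

print(round_trips_per_host(session))   # 5: one policy fetch, then 4 requests
```

The same four-request session costs five round trips instead of
eight, and adding more distinct URIs on the same host adds no further
overhead.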

This design discussion is also taking place under the time pressure of
the Firefox 3 release, which plans to implement the WG's current
proposal. Should this release happen, this topic may be a done deal.
Jonas Sicking seems to be the Mozilla representative participating on
this WG.

In my opinion, the WG's current proposal would have a negative impact
on the viability of REST design on the Web. I think members of this
mailing list should consider these issues and make their voices heard
on the WG's mailing list for public feedback. Before posting to that
list, you must first subscribe by sending an email to:

mailto:[EMAIL PROTECTED]

Hopefully, significant feedback from this community will positively
influence the discussion of these issues in Wednesday's telecon.

--Tyler


