Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-14 Thread Anne van Kesteren

On Sun, 14 Mar 2010 02:45:26 +0100, Brett Zamir  wrote:

Servers are already free to obtain and mix in content from other
sites, so why can't client-side HTML JavaScript be similarly empowered?


Because you would also have access to e.g. IP-authenticated servers.


As suggested above, could compliant browsers be required to send a
header along with their request indicating the originating server's
domain?


No, existing servers would still be vulnerable.


--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-13 Thread Michal Zalewski
> As suggested above, could compliant browsers be required to send a header
> along with their request indicating the originating server's domain?

Yes, but it's generally bad practice to release new features that
undermine the security of existing systems and require everybody to
change their code to account for the newly introduced vectors.

Theoretically, GET or OPTIONS should have no side effects, so DoS
potential aside, they could be permitted with no special security
checks. In practice, much of the Internet uses GET for state-changing
actions, or nominally uses POST but does not differentiate between
the two in any meaningful way; plus, the problem of IP auth / Intranet
probing remains.

Bottom line is, opt-in is offered in several other places, and an
opt-out solution seems unlikely at this point, I would think.
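
For illustration, roughly what that opt-in looks like from a page's
point of view under the CORS draft (the host names below are made up,
and this is just my reading of the current draft):

  // Hypothetical: a page on http://example.org reading data from
  // http://other.example. The browser attaches "Origin: http://example.org";
  // since DELETE is not a "simple" method, it first sends an OPTIONS
  // preflight carrying Access-Control-Request-Method: DELETE.
  var xhr = new XMLHttpRequest();
  xhr.open("DELETE", "http://other.example/items/5", true);
  xhr.onreadystatechange = function () {
    // A readable response only shows up here if other.example opted in by
    // answering the preflight with something like
    //   Access-Control-Allow-Origin: http://example.org
    //   Access-Control-Allow-Methods: DELETE
    // Otherwise the browser never issues the DELETE at all, so servers
    // that know nothing about CORS are left untouched.
    if (xhr.readyState === 4 && xhr.status === 200) {
      alert(xhr.responseText);
    }
  };
  xhr.send();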

/mz


Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-13 Thread Brett Zamir

On 3/12/2010 3:41 PM, Anne van Kesteren wrote:
On Fri, 12 Mar 2010 08:35:48 +0100, Brett Zamir  
wrote:
My apologies if this has been covered before, or if my asking this is 
a bit dense, but I don't understand why there are restrictions on 
obtaining data via XMLHttpRequest from other domains, if the request 
could be sandboxed to avoid passing along sensitive user data like 
cookies (or if the user could be asked for permission, as when 
installing browser extensions that offer similar privileges).


Did you see

  http://dev.w3.org/2006/webapi/XMLHttpRequest-2/
  http://dev.w3.org/2006/waf/access-control/

?


I have now, thanks. :) Though I regrettably don't have a lot of time
now to study them as deeply as I'd like (nor Michal Zalewski's reference
to UMP), and I can't speak to the technical challenges for browsers (and
their plug-ins) of implementing the type of sandboxing that would be
necessary here if they don't already, I was just hoping to articulate
interest in finding a way to overcome the restrictions if possible, and
to ask whether the security challenges could be worked around at least
in a subset of cases.


While I can appreciate such goals as trying "to prevent 
dictionary-based, distributed, brute-force attacks that try to get login 
accounts to 3rd party servers" mentioned in the CORS spec and 
preventing spam or opening accounts on behalf of users and the like, I 
would think that at least GET/HEAD/OPTIONS requests should not be quite 
as important an issue.


As far as the issue Michal brought up about the client's IP being sent, 
I might think this problem could be mitigated by a client header being 
added to indicate the domain of origin behind the request. It's hard to 
lay the blame on the client for a DoS if it is known which server
initiated the requests. (Maybe this raises some privacy issues, as the
system would
make known who was visiting the initiating site, but I'd think A) this 
info could be forged anyways, and B) any site could publish its visitors 
anyways.) I'll admit this might make things more interesting legally 
though, e.g., whether the client shared some or all responsibility, for 
DoS or copyright violations, especially if interface interaction 
controlled the number of requests. But as far as the burden on the user goes, if 
the user is annoyed that their browser is being slowed as a result of 
requests made on their behalf (though I'm not sure how much work it 
would save given that the server still has to maintain a connection), 
they can close the tab/window, or maybe the browser could offer to 
selectively disable such requests or request permission.


I would think that the ability for clients to help a server crawl the 
internet might even potentially be a feature rather than a bug, allowing 
a different kind of proxy opportunity for server hosts which are in 
countries with blocked access. Besides this kind of "reverse proxy" (to 
alter the phrase), I wouldn't think it would be that compelling for 
sites to outsource their crawling (except maybe as a very insecure and 
unpredictably accessible backup or caching service!), since they'd have 
to retrieve the information anyways, but again I can't see what harm 
there would really be in it, except that anti-DoS measures would need 
to take the additional header into account.


I apologize for not being able to research this more carefully, but I 
was just hoping to see if there might be some way to allow at least a 
safer subset of requests like GET and HEAD by default. Akin to the 
rationales behind my proposal for browser support of client-side XQuery, 
including as a content type (at 
http://brett-zamir.me/webmets/index.php?title=DrumbeatDescription ), it 
seems to me that users could really benefit from such a capability in 
client-side JavaScript, not only for the sake of greater developer 
options, but also for encouraging greater experimentation with mash-ups, 
as the mash-up server is not taxed with having to obtain the data 
sources (nor tempted to store stale copies of the source data, nor 
perhaps as concerned with the need to obtain republishing permissions).


Servers are already free to obtain and mix in content from other 
sites, so why can't client-side HTML JavaScript be similarly empowered?


Because you would also have access to e.g. IP-authenticated servers.



As suggested above, could compliant browsers be required to send a 
header along with their request indicating the originating server's 
domain?
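
(For concreteness, something along these lines is roughly what I have
in mind; the header names are borrowed from the CORS draft Anne pointed
to, whose semantics may of course differ from my sketch:)

  // Hypothetical: a page on http://mashup.example fetching a feed from
  // http://data.example. The browser would automatically attach a header
  // identifying the originating site, e.g.
  //
  //   GET /feed.xml HTTP/1.1
  //   Host: data.example
  //   Origin: http://mashup.example      <-- the identifying header
  //
  // In the CORS draft this pair is used for opt-in: the response is only
  // exposed to script if data.example answers with
  //   Access-Control-Allow-Origin: http://mashup.example
  // My question is whether the identifying request header alone could
  // justify a more permissive default for GET/HEAD.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://data.example/feed.xml", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      alert(xhr.responseText);
    }
  };
  xhr.send();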


best wishes,
Brett



Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-12 Thread Ashley Sheridan
On Thu, 2010-03-11 at 23:50 -0800, Michal Zalewski wrote:

> > Servers are already free to obtain and mix in content from other sites, so
> > why can't client-side HTML JavaScript be similarly empowered?
> 
> I can see two reasons:
> 
> 1) Users may not be happy about the ability for web applications to
> implement an unprecedented level of automation through their client
> (and using their IP) - for example, crawling the Intranet, opening new
> accounts on social sites and webmail systems, sending out spam.
> 
> While there is always some ability for JS to blindly interact with
> third-party content, meaningful automation typically requires the
> ability to see responses, read back XSRF tokens, etc; and while
> servers may be used as SOP proxies, the origin of these requests is
> that specific server, rather than an assortment of non-consenting
> clients.
> 
> The solution you propose - opt-out - kinda disregards the status quo, and
> requires millions of websites to immediately deploy workarounds, or
> face additional exposure to attacks. For opt-in, you may want to look
> at UMP: http://www.w3.org/TR/2010/WD-UMP-20100126/ (or CORS, if you do
> not specifically want anonymous requests).
> 
> 2) It was probably fairly difficult to "sandbox" requests fully so
> that they are not only stripped of cookies and cached HTTP
> authentication, but also completely bypass caching mechanisms
> (although UMP aims to achieve this).
> 
> /mz


Potentially you're entering a whole world of problems. Not only would
all the browsers have to sandbox such requests, but so would every
single plugin that a browser uses. Think of the way Flash has its own
method of storing potentially sensitive cookie-like data on the client's
machine, which the browser has no control over. You're looking at a
massive task just there.

Thanks,
Ash
http://www.ashleysheridan.co.uk




Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-11 Thread Michal Zalewski
> Servers are already free to obtain and mix in content from other sites, so
> why can't client-side HTML JavaScript be similarly empowered?

I can see two reasons:

1) Users may not be happy about the ability for web applications to
implement an unprecedented level of automation through their client
(and using their IP) - for example, crawling the Intranet, opening new
accounts on social sites and webmail systems, sending out spam.

While there is always some ability for JS to blindly interact with
third-party content, meaningful automation typically requires the
ability to see responses, read back XSRF tokens, etc; and while
servers may be used as SOP proxies, the origin of these requests is
that specific server, rather than an assortment of non-consenting
clients.
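
To make that distinction concrete, a rough sketch (nothing here is
specific to any particular proposal; host names are made up):

  // A page can already fire cross-origin requests "blindly" today:
  var img = new Image();
  img.src = "http://other.example/track?x=1";  // sent, but response unreadable

  var f = document.createElement("form");
  f.action = "http://other.example/signup";
  f.method = "POST";
  document.body.appendChild(f);
  f.submit();  // likewise: the request goes out, but the result stays opaque
               // (in practice this would target a hidden iframe)

  // What it cannot do is read the response -- e.g. scrape the page for the
  // XSRF token needed to make a follow-up request succeed. Lifting the XHR
  // restriction without opt-in would hand over exactly that ability.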

The solution you propose - opt-out - kinda disregards the status quo, and
requires millions of websites to immediately deploy workarounds, or
face additional exposure to attacks. For opt-in, you may want to look
at UMP: http://www.w3.org/TR/2010/WD-UMP-20100126/ (or CORS, if you do
not specifically want anonymous requests).

2) It was probably fairly difficult to "sandbox" requests fully so
that they are not only stripped of cookies and cached HTTP
authentication, but also completely bypass caching mechanisms
(although UMP aims to achieve this).
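
Roughly, the XHR2 / CORS drafts split this into two modes (this is just
my reading of the current drafts; UMP additionally aims to bypass
caches, which the flag below does not address):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://other.example/data", true);

  // Default: cookies and cached HTTP auth are NOT attached to the
  // cross-origin request -- the "stripped" case above.
  xhr.withCredentials = false;

  // Opting into credentials instead: cookies/auth are sent, but the server
  // must then also answer with Access-Control-Allow-Credentials: true (and
  // a non-wildcard Access-Control-Allow-Origin) before script may read the
  // response.
  // xhr.withCredentials = true;

  xhr.send();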

/mz


Re: [whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-11 Thread Anne van Kesteren

On Fri, 12 Mar 2010 08:35:48 +0100, Brett Zamir  wrote:
My apologies if this has been covered before, or if my asking this is a  
bit dense, but I don't understand why there are restrictions on  
obtaining data via XMLHttpRequest from other domains, if the request  
could be sandboxed to avoid passing along sensitive user data like  
cookies (or if the user could be asked for permission, as when  
installing browser extensions that offer similar privileges).


Did you see

  http://dev.w3.org/2006/webapi/XMLHttpRequest-2/
  http://dev.w3.org/2006/waf/access-control/

?


Servers are already free to obtain and mix in content from other sites,  
so why can't client-side HTML JavaScript be similarly empowered?


Because you would also have access to e.g. IP-authenticated servers.
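
For example, with the restriction simply lifted, any page the visitor
opens could do something like this (hypothetical intranet host):

  var xhr = new XMLHttpRequest();
  // Reachable only from inside the visitor's network, or from an IP range
  // the server trusts -- no cookies involved at all:
  xhr.open("GET", "http://intranet.corp.example/reports", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // The page can now read content it could never fetch itself and
      // forward it anywhere it likes.
      alert(xhr.responseText);
    }
  };
  xhr.send();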


--
Anne van Kesteren
http://annevankesteren.nl/


[whatwg] Lifting cross-origin XMLHttpRequest restrictions?

2010-03-11 Thread Brett Zamir

Hi,

My apologies if this has been covered before, or if my asking this is a 
bit dense, but I don't understand why there are restrictions on 
obtaining data via XMLHttpRequest from other domains, if the request 
could be sandboxed to avoid passing along sensitive user data like 
cookies (or if the user could be asked for permission, as when 
installing browser extensions that offer similar privileges).


Servers are already free to obtain and mix in content from other sites, 
so why can't client-side HTML JavaScript be similarly empowered?


If the concern is simply to give servers more control and avoid 
denial-of-service effects, why not at least make the blocking opt-in 
(like robots.txt)? There are a great many uses for being able to mash 
up data
from other sites, including from the client, and it seems to me to be 
unnecessarily restrictive to require explicit permissions. Despite my 
suggesting opt-in blocking as an alternative, I wouldn't even think 
there should be such an option at all, since servers are technically 
free to grab such content unhindered, and I believe everyone should 
have the freedom and convenience to design and enjoy applications which 
"just work"--mixing in content from other pages without extra effort, 
unless they are legally prohibited from doing so.


If the concern is copyright infringement, the same concern holds true 
for servers which can already obtain such content unrestricted, and I do 
not believe overly cautious preemptive policing is a valid pretext for 
constraining technology and its opportunities for sites and users.


thanks,
Brett