> Servers are already free to obtain and mix in content from other sites,
> so why can't client-side HTML JavaScript be similarly empowered?
I can see two reasons:

1) Users may not be happy about web applications gaining an unprecedented
level of automation through their client (and using their IP) - for
example, crawling the intranet, opening new accounts on social sites and
webmail systems, or sending out spam. While there is always some ability
for JS to blindly interact with third-party content, meaningful automation
typically requires the ability to see responses, read back XSRF tokens,
etc. (the first sketch below illustrates the distinction); and while
servers may be used as SOP proxies, the origin of those requests is that
specific server, rather than an assortment of non-consenting clients.

The solution you propose - opt-out - somewhat disregards the status quo,
and would require millions of websites to immediately deploy workarounds,
or face additional exposure to attacks. For opt-in, you may want to look
at UMP: http://www.w3.org/TR/2010/WD-UMP-20100126/ (or CORS, if you do
not specifically want anonymous requests).

2) It would probably be fairly difficult to "sandbox" such requests fully,
so that they are not only stripped of cookies and cached HTTP
authentication, but also completely bypass caching mechanisms - although
UMP aims to achieve this (the second sketch below shows how little of it
today's XMLHttpRequest can express).
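To make the blind-vs-readable distinction in (1) concrete, here is a
minimal sketch (victim.example is a made-up host). The browser will
happily fire the cross-origin request, but without an opt-in from the
target, the same-origin policy keeps the response - and any XSRF token
in it - out of the caller's reach; depending on browser vintage, the
call either throws outright or completes with an empty, unreadable body:

  var xhr = new XMLHttpRequest();
  // The request itself goes out - blind interaction has always been
  // possible (e.g. via cross-site form submissions)...
  xhr.open("GET", "http://victim.example/account/settings", true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
      // ...but without a CORS/UMP opt-in, the response stays opaque:
      // responseText is empty (or access to it throws), so the XSRF
      // token needed to forge a meaningful state change never leaks.
      alert("status: " + xhr.status + ", body: " + xhr.responseText);
    }
  };
  xhr.send(null);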
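And to show why (2) is hard, here is a rough sketch of how close the XHR
of the day gets to a fully "sandboxed" request (api.example is again
made up, and assumed to opt in via CORS). Cookies and cached HTTP auth
can be withheld, but there is no first-class switch for bypassing the
cache:

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://api.example/public-data", true);
  // withCredentials defaults to false, so no cookies or cached HTTP
  // auth ride along with the cross-origin request...
  xhr.withCredentials = false;
  // ...but there is no real "bypass the cache" knob; a request header
  // is merely a revalidation hint, not the full isolation UMP calls
  // for.
  xhr.setRequestHeader("Cache-Control", "no-cache");
  xhr.send(null);

/mz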