Mark Baker wrote:
> On 2/20/08, Anne van Kesteren <[EMAIL PROTECTED]> wrote:
>> On Wed, 20 Feb 2008 15:15:39 +0100, Mark Baker <[EMAIL PROTECTED]> wrote:
>>> Your premise seems to be that in the future, the community might rally
>>> around and widely deploy, brain-dead extensions which attempt to
>>> violate the fundamental semantics of HTTP, in this case the safety of
>>> GET messages.  IMO, that's not a realistic concern.
>> I'm not talking about communities, or brain-dead extensions. I'm talking
>> about the theoretical possibility that this might already be deployed on
>> some servers around the world (or something of equivalent nature) and that
>> therefore allowing such cross-domain GET requests with custom headers
>> introduces a new attack vector. And introducing a new attack vector is
>> something we should avoid, regardless of whether being vulnerable to that
>> attack vector relies on violating the fundamental semantics of HTTP.

> It's not a new attack vector, because I can already use curl to send a
> GET message which causes the harm you're worried about.  AFAICT, all
> that changes in a cross-site scenario is that the attacker uses the
> client as an anonymizer, something that can already be done with open
> proxies (of various flavours).  Is that worth crippling the spec in
> such a fundamental way?  Not IMO.

When you use curl, you will not be able to include the auth headers or cookies of other users. You will also not be able to use curl to connect to websites inside other people's firewalls.
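To make the distinction concrete, here is a minimal sketch (all names hypothetical, not from any real server) of an endpoint that gates access on a custom header plus a session cookie. An attacker running curl can set the header but has no victim cookie; a cross-site request issued from the victim's browser would carry the victim's own cookie automatically, which is exactly the new capability being discussed.

```python
def server_handles_get(headers, cookies):
    """Hypothetical endpoint (illustrative only) that trusts a custom
    header plus a session cookie as proof the GET came from the
    logged-in user's own page."""
    if headers.get("X-Custom") == "payload" and cookies.get("session") == "secret":
        return "sensitive data"
    return "denied"

# Attacker using curl: can set any custom header, but holds no victim cookie.
assert server_handles_get({"X-Custom": "payload"}, {}) == "denied"

# The same GET issued cross-site from the victim's browser: the browser
# attaches the victim's session cookie automatically, so the request succeeds.
assert server_handles_get({"X-Custom": "payload"},
                          {"session": "secret"}) == "sensitive data"
```

The same asymmetry applies to network position: curl runs from the attacker's machine, while a browser-issued request originates inside whatever firewall the victim sits behind.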

> Also, I have no pity for any Web admin who suffers harm as a direct
> result of permitting badly designed Web apps to be deployed on their
> servers.

I guess that is where we differ. I try, as best I can, to protect the people who are deploying websites today, not just the people who perfectly follow all specs and know all the latest and greatest security recommendations.

/ Jonas
