Re: [whatwg] Clickjacking and CSRF

2009-07-22 Thread Bil Corry
Aryeh Gregor wrote on 7/21/2009 5:34 PM: 
 If we could do reports only, then we would probably publish the data
 live in some form, yes.

If it's desirable to add a 'report only' feature to CSP, I'd prefer to see a 
second CSP-related header (X-Content-Security-Policy-ReportOnly???) that 
implements it rather than adding it to the CSP header.  The presence of both 
headers (CSP and CSPReportOnly) would mean both would be acted upon.
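
For illustration (hypothetical values; the 'allow self' syntax follows the 
examples later in this thread), a site could keep enforcing its current, 
looser policy while trialing a tighter one in report-only mode:

X-Content-Security-Policy: allow self *.example.net
X-Content-Security-Policy-ReportOnly: allow self

The first header would be enforced as usual; the second would only generate 
violation reports.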

There's already been some discussion that authors would iteratively relax CSP 
until their site worked.  I can see where an author enables ReportOnly, their 
site suddenly works and they mistakenly believe it's properly configured and 
actively protecting their site.


- Bil





Re: [whatwg] Clickjacking and CSRF

2009-07-22 Thread Aryeh Gregor
On Wed, Jul 22, 2009 at 1:20 PM, Bil Corry b...@corry.biz wrote:
 If it's desirable to add a 'report only' feature to CSP, I'd prefer to see a 
 second CSP-related header (X-Content-Security-Policy-ReportOnly???) that 
 implements it rather than adding it to the CSP header.  The presence of both 
 headers (CSP and CSPReportOnly) would mean both would be acted upon.

I can't see how that makes a difference either way for any purpose,
really.  It just seems like it would make it slightly more annoying
for authors to deploy, and somewhat more confusing (since the presence
of one header would drastically change the semantics of another).

 There's already been some discussion that authors would iteratively relax CSP 
 until their site worked.  I can see where an author enables ReportOnly, their 
 site suddenly works and they mistakenly believe it's properly configured and 
 actively protecting their site.

They might also make a typo in the policy file that causes Firefox to
ignore the whole thing, and mistakenly believe they're being
protected.  Or they might enable CSP, then allow inline script and
import from arbitrary foreign sites because that's what it took for
their ads and Analytics to start working again, and think they're
protected.

You can't really do much to stop people from having a sense of false
security if they neither understand nor test their security system.  I
don't think it's valuable to try.


Re: [whatwg] Clickjacking and CSRF

2009-07-22 Thread Bil Corry
Aryeh Gregor wrote on 7/22/2009 12:38 PM: 
 On Wed, Jul 22, 2009 at 1:20 PM, Bil Corry b...@corry.biz wrote:
 If it's desirable to add a 'report only' feature to CSP, I'd prefer to see a 
 second CSP-related header (X-Content-Security-Policy-ReportOnly???) that 
 implements it rather than adding it to the CSP header.  The presence of both 
 headers (CSP and CSPReportOnly) would mean both would be acted upon.
 
 I can't see how that makes a difference either way for any purpose,
 really.  It just seems like it would make it slightly more annoying
 for authors to deploy, and somewhat more confusing (since the presence
 of one header would drastically change the semantics of another).

The idea here is 'when in doubt, favor the more restrictive option.'  There 
shouldn't be both headers, but if there are, then CSP wins.


 There's already been some discussion that authors would iteratively relax 
 CSP until their site worked.  I can see where an author enables ReportOnly, 
 their site suddenly works and they mistakenly believe it's properly 
 configured and actively protecting their site.
 
 They might also make a typo in the policy file that causes Firefox to
 ignore the whole thing, and mistakenly believe they're being
 protected.

This won't happen, as CSP explicitly enforces a 'fail closed' policy:

https://wiki.mozilla.org/Security/CSP/Spec#Handling_Parse_Errors
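
For instance (a contrived example), a mistyped policy like:

X-Content-Security-Policy: alow self

wouldn't silently disable CSP -- per the parse-error handling above, the 
browser falls back to a restrictive policy instead of ignoring the header.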


 Or they might enable CSP, then allow inline script and
 import from arbitrary foreign sites because that's what it took for
 their ads and Analytics to start working again, and think they're
 protected.

Allowing content from their advertising and analytics providers is far less 
serious than mistakenly turning on ReportOnly, which allows content from any 
source.

 
 You can't really do much to stop people from having a sense of false
 security if they neither understand nor test their security system.  I
 don't think it's valuable to try.

It's valuable to set them up for as much success as possible.


- Bil





Re: [whatwg] Clickjacking and CSRF

2009-07-22 Thread Aryeh Gregor
On Wed, Jul 22, 2009 at 1:56 PM, Bil Corry b...@corry.biz wrote:
 The idea here is 'when in doubt, favor the more restrictive option.'  There 
 shouldn't be both headers, but if there are, then CSP wins.

Ah, I see, you'd only send one header.  Well, it still seems like it
might be a little more confusing to have essential data split across
multiple places (e.g., policy file vs. header name).

 It's valuable to set them up for as much success as possible.

It's a detail that I don't think is really a big deal in any event, so
I have no strong opinion.  I do think that some report-only mode would
be almost essential for safe deployment in complicated preexisting
apps.


Re: [whatwg] Clickjacking and CSRF

2009-07-22 Thread Bil Corry
Aryeh Gregor wrote on 7/22/2009 5:47 PM: 
 On Wed, Jul 22, 2009 at 1:56 PM, Bil Corry b...@corry.biz wrote:
 The idea here is 'when in doubt, favor the more restrictive option.'  There 
 shouldn't be both headers, but if there are, then CSP wins.
 
 Ah, I see, you'd only send one header.  Well, it still seems like it
 might be a little more confusing to have essential data split across
 multiple places (e.g., policy file vs. header name).

To clarify, I was thinking this would run CSP in report-only mode:

X-Content-Security-Policy-ReportOnly: allow self

Then when you're satisfied with the ruleset, you merely rename the header to 
actually kick it on:

X-Content-Security-Policy: allow self
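
If it helps, here's a minimal sketch (my own pseudocode, not from the spec) of 
how a framework could wire that up, so flipping to enforcement really is a 
one-line change:

def csp_headers(policy='allow self', enforce=False):
    # One flag picks which draft CSP header carries the policy: the
    # ReportOnly variant only generates reports, the plain one enforces.
    name = ('X-Content-Security-Policy' if enforce
            else 'X-Content-Security-Policy-ReportOnly')
    return [(name, policy)]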



- Bil



Re: [whatwg] Clickjacking and CSRF

2009-07-21 Thread Aryeh Gregor
I'm CCing wikitech-l here for broader input, since I do think
Wikipedia would be interested in adopting this but I can't really
speak for Wikipedia myself.  The history of this discussion can be
found in the archives:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/021133.html

I think both whatwg and wikitech-l are configured to bounce messages
by unregistered users.  For wikitech-l members who want to comment,
the registration link for whatwg is:

http://lists.whatwg.org/listinfo.cgi/whatwg-whatwg.org

I'd suggest the discussion be continued on whatwg and people not post
replies to wikitech-l, to avoid confusion.

On Tue, Jul 21, 2009 at 6:34 PM, Brandon Sterne bste...@mozilla.com wrote:
 I have two competing instincts in response to this proposal.  In
 general, I am opposed to adding a mechanism which effectively
 disables all the security protections offered by CSP.

Why?  It would most likely only be enabled temporarily.  Even if it is
enabled permanently (like all those sites with eternal SPF soft fail .
. .), it would be no worse than no CSP at all.

 Would it be possible for Wikipedia to leverage its community to help
 convert pages to support CSP?

Sure, but we have to have a list of things that are broken first.  I
think it would be much easier to get such a list if we could get it
from the clients themselves.  Then we would be enabling a totally
harmless feature (reporting only), verifying that no more errors are
being generated, and then enabling another feature that's already been
verified to not cause a significant number of errors.  We could
confidently enable reporting as soon as the first Firefox dev builds
shipped with CSP support.  I would be happy to personally commit that.
Then we could enable real protection as soon as the reports got down
to an acceptable level.
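
The collector on our end could be trivial -- something along these lines (a 
sketch of my own, assuming violation reports arrive as POSTs to the policy's 
report-uri; the exact report format is whatever the spec ends up saying):

from http.server import BaseHTTPRequestHandler, HTTPServer

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Append each raw violation report to a log for later triage.
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        with open('csp-violations.log', 'ab') as log:
            log.write(body + b'\n')
        self.send_response(204)  # nothing for the browser to render
        self.end_headers()

HTTPServer(('', 8080), ReportHandler).serve_forever()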

If we had to just hope we had caught everything important and wouldn't
break anything, I think deployment would have to be a *lot* more
cautious.  The only thing we'd know is that we'd break lots of stuff,
but we wouldn't know what.  I certainly would never commit any code
like that without approval from the sysadmins, and I strongly suspect
they'd be hesitant to grant it.  It's not that the site would go down
or anything, but some scripts a lot of people are relying on would
probably break; and worse, we'd have lots of little separate
complaints from lots of different people going on for a long time as
more minor broken scripts get noticed.

I might be overestimating the potential risk here -- Wikipedia doesn't
really depend on JavaScript for almost anything -- but I'm *quite*
sure that it would greatly simplify deployment if we could be sure we
wouldn't be breaking anything.  It's not my decision, though.
Hopefully some higher-ups can comment here.

 It seems, based on the above, that you'll
 need to serve users custom CSP policy based on which scripts they have
 enabled, so it will probably be necessary to distribute the testing of
 CSP across the community.  Is it reasonable to expect the site admins to
 process all of the CSP violation reports on behalf of all the users?
 Wouldn't it be more scalable to have community members fixing the CSP
 violations and in the process have users protected from true XSS attacks?

If we could do reports only, then we would probably publish the data
live in some form, yes.  Then community members could fix it to the
extent possible.  Community admins could fix the problems on the
wikis, and developers could fix the problems in the software.  But we
need a list of what to fix.  The software is hundreds of thousands of
lines, and we have an enormous amount of user JavaScript:

mysql> SELECT SUM(page_len) FROM page WHERE page_namespace=2 AND
    -> page_title LIKE '%.js';
+---------------+
| SUM(page_len) |
+---------------+
|     103877387 |
+---------------+
1 row in set (6.61 sec)

That's almost 100 MB of (presumptive) user JavaScript on the English
Wikipedia alone.  We can't possibly review all of that thoroughly
enough to be reasonably certain in advance that we won't get
significant breakage.

 I am not totally opposed to your proposal, though I would like to
 exhaust other possibilities before we defang CSP to such an extent.

I don't understand why this would be defanging anything.  It would be
entirely optional.  Am I missing something?


Re: [whatwg] Clickjacking and CSRF

2009-07-17 Thread Aryeh Gregor
On Fri, Jul 17, 2009 at 6:21 PM, Brandon Sterne bste...@mozilla.com wrote:
 No, that feature is not part of the current design, though nothing is
 set in stone.  Couldn't you achieve the same effect (verifying your
 policy isn't blocking wanted things) by simply testing the pages in a
 CSP-supporting browser and watching for violations (in the client or on
 the report-uri server)?

Only if I visit every single page in the whole complicated app that
users might visit.  With all their settings.  Which might or might not
be practical.

On Wikipedia, for instance, users are allowed to specify custom
scripts to load, kind of like server-side Greasemonkey.  That will
certainly break if we prohibit external script loads, but we can't
really assess the extent of the problem in the current spec without
actually enabling the feature and (possibly) breaking everything.
(How we'd handle this at all, I'm not sure.  We might need to allow
registered users to opt out, or use a more lenient policy.  But that's
a separate issue.)

More generally, it would be a pain to do a full audit of the code and
all extensions to find all instances of inline script, and all the
other assorted possible violations that might be occurring.  We could
do it with a test browser, yes, but we'd only catch the most common
cases that way.  The code running on Wikipedia is somewhere over
700,000 LOC; it wouldn't be trivial at all to manually find all the
places where violations could possibly be generated.

I think the ability to have violations reported without actually
preventing them would be very useful to ease deployment in existing
apps.


Re: [whatwg] Clickjacking and CSRF

2009-07-16 Thread Charles McCathieNevile
On Thu, 16 Jul 2009 03:48:41 +0200, Aryeh Gregor simetrical+...@gmail.com wrote:



On Wed, Jul 15, 2009 at 9:24 PM, Jonas Sicking jo...@sicking.cc wrote:

Note that Content Security Policies[1] can be used to deal with
clickjacking. So far we've gotten a lot of positive feedback on CSP
and are in the process of implementing it in Firefox. So it's a possible
solution to this.


Is Mozilla planning to run CSP through a usual standards body like the
W3C, either before or after implementation?  If you plan to
standardize it after implementation, why not before instead?  CSP
looks really exciting, but I'm not clear on whether or when it will be
standardized -- I've heard talk of implementing it, but not of
standardizing it.


Opera has been actively following up on this problem with various browser  
vendors (in particular) in the hopes of at least getting us all together  
in a useful forum. If you're curious, Sigbjørn is our lead for this effort.


cheers

Chaals

--
Charles McCathieNevile  Opera Software, Standards Group
je parle français -- hablo español -- jeg lærer norsk
http://my.opera.com/chaals   Try Opera: http://www.opera.com


Re: [whatwg] Clickjacking and CSRF

2009-07-16 Thread Jonas Sicking
On Wed, Jul 15, 2009 at 6:48 PM, Aryeh Gregor simetrical+...@gmail.com wrote:
 On Wed, Jul 15, 2009 at 9:24 PM, Jonas Sicking jo...@sicking.cc wrote:
 Note that Content Security Policies[1] can be used to deal with
 clickjacking. So far we've gotten a lot of positive feedback on CSP
 and are in the process of implementing it in Firefox. So it's a possible
 solution to this.

 Is Mozilla planning to run CSP through a usual standards body like the
 W3C, either before or after implementation?  If you plan to
 standardize it after implementation, why not before instead?  CSP
 looks really exciting, but I'm not clear on whether or when it will be
 standardized -- I've heard talk of implementing it, but not of
 standardizing it.

We've actually proposed it to the webapps list, but got little to no
response. I'm not sure if we at this time have anyone that would have
the resources to offer to be editor for a W3C CSP spec, if any of the
WGs there are interested in hosting it.

So in short, yes, we'd love to have it standardized, but so far
haven't found a path to make that practically happen.

But, as Mike said, we'd love to get feedback, and we'd love to get it
now. So far most of the feedback we've gotten has been "looks
interesting", which we take as a pretty good sign, but a little lacking
in detail :)

/ Jonas


Re: [whatwg] Clickjacking and CSRF

2009-07-16 Thread Aryeh Gregor
On Thu, Jul 16, 2009 at 4:25 PM, Jonas Sicking jo...@sicking.cc wrote:
 We've actually proposed it to the webapps list, but got little to no
 response. I'm not sure if we at this time have anyone that would have
 the resources to offer to be editor for a W3C CSP spec, if any of the
 WGs there are interested in hosting it.

 So in short, yes, we'd love to have it standardized, but so far
 haven't found a path to make that practically happen.

 But, as Mike said, we'd love to get feedback, and we'd love to get it
 now. So far most of the feedback we've gotten has been "looks
 interesting", which we take as a pretty good sign, but a little lacking
 in detail :)

As a web developer, I'd say it looks awesome.  It could allow at least
major web apps and big sites (i.e., those willing to put in the
effort) to become almost immune to XSS, while XSS in complicated web
apps seems to be as inevitable as death and taxes right now.  XSS is
to web apps right now kind of like what buffer overflows are to C:
probably there are some people or institutions that are careful enough
to *always* get it right, but there sure aren't many.

Of course, if only Mozilla implements it, it will be of limited value.
I was concerned that none of the announcements said anything about
standardizing it or working with other browser vendors, just about
what Mozilla was doing.  I'm glad to hear that it's not intended to be
Mozilla-specific, and hope other browsers pick up on it.

report-uri would still be really useful even if only Mozilla
implemented the spec, as long as Firefox has good market share.
That's a particularly cool feature; I wouldn't have thought of it.

I guess this approach has pitfalls.  Every admin will have to manually
specify that they accept scripts from Analytics/their ad
provider/etc., etc.  I guess for web apps, they could still ship with
CSP enabled by default, and just require admins to add new script
links through some interface that automatically updates the policy.
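
Something as simple as this hypothetical helper would do, rebuilding the 
policy whenever an admin approves a new script source (directive syntax per 
the draft's 'allow' examples):

def build_policy(approved_origins):
    # 'self' plus whatever external script sources have been approved.
    return 'allow ' + ' '.join(['self'] + sorted(set(approved_origins)))

# build_policy({'www.google-analytics.com'})
#   -> 'allow self www.google-analytics.com'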

What I'd really love to see is if all major web apps could at some
point ship with full CSP enabled by default.  I'm not clear yet on
whether it would work in practice, or if it would break too many
things and we'd realistically have to leave it opt-in.  I'm hoping
that report-uri would be a good solution to this: the app could have a
page that would automatically mail the admin with enough instructions
that they could fix the problem easily whenever one occurs.

Is there support in the spec for pinging the report-uri on violations,
but still allowing the violation to go through?  That could allow much
easier deployment, so that you could verify that your policy wasn't
blocking anything legitimate.  I don't see it anywhere, but I didn't
look very hard.

So those are my comments.  In short, I think the idea is great.  I can
pretty much guarantee that Wikimedia will be interested in trying it
out as soon as there are dev builds of Firefox that support it,
especially if we can have it report-only initially.


Re: [whatwg] Clickjacking and CSRF

2009-07-16 Thread Jonas Sicking
On Thu, Jul 16, 2009 at 2:25 PM, Aryeh Gregor simetrical+...@gmail.com wrote:
 Is there support in the spec for pinging the report-uri on violations,
 but still allowing the violation to go through?  That could allow much
 easier deployment, so that you could verify that your policy wasn't
 blocking anything legitimate.  I don't see it anywhere, but I didn't
 look very hard.

I don't think so. I've cc'ed the relevant people that can answer.

/ Jonas


Re: [whatwg] Clickjacking and CSRF

2009-07-15 Thread Jonas Sicking
On Wed, Jul 15, 2009 at 5:26 PM, Ian Hickson i...@hixie.ch wrote:

 There have been a number of discussions about clickjacking,
 X-Frame-Options, and other proposals.

 Nobody I've spoken to seems especially happy with X-Frame-Options, and
 none of the other proposals have yet gotten serious traction.

 I have therefore not added anything of this nature to the HTML5 spec yet.
 I propose that from a standardisation perspective, we continue to wait to
 get more implementation experience and document the end result once we
 are more confident that a long-term solution has been found.

 I recommend that people interested in this field work with browser vendors
 to get experimental implementations of their proposals, so that we can
 study their effects on Web content.

Note that Content Security Policies[1] can be used to deal with
clickjacking. So far we've gotten a lot of positive feedback on CSP
and are in the process of implementing it in Firefox. So it's a possible
solution to this.

/ Jonas

[1] http://blog.mozilla.com/security/2009/06/19/shutting-down-xss-with-content-security-policy/


Re: [whatwg] Clickjacking and CSRF

2009-07-15 Thread Aryeh Gregor
On Wed, Jul 15, 2009 at 9:24 PM, Jonas Sicking jo...@sicking.cc wrote:
 Note that Content Security Policies[1] can be used to deal with
 clickjacking. So far we've gotten a lot of positive feedback on CSP
 and are in the process of implementing it in Firefox. So it's a possible
 solution to this.

Is Mozilla planning to run CSP through a usual standards body like the
W3C, either before or after implementation?  If you plan to
standardize it after implementation, why not before instead?  CSP
looks really exciting, but I'm not clear on whether or when it will be
standardized -- I've heard talk of implementing it, but not of
standardizing it.


Re: [whatwg] Clickjacking and CSRF

2009-07-15 Thread Aryeh Gregor
On Wed, Jul 15, 2009 at 9:53 PM, Jeremy Orlow jor...@chromium.org wrote:
 Didn't Ian, 2 messages back, suggest that vendors experiment and bring their
 results back to the table at a later date?  Or has CSP never been discussed
 here?

I haven't seen it discussed here, but maybe it has been and I didn't
see or don't remember.  Although Ian might not want to consider it for
HTML 5 without vendor agreement, I'd think that a separate working
group could be set up (or an existing one appropriated) to work it out
with input from multiple vendors.  Implement-then-document surely
isn't an ideal procedure for large, complicated things like CSP.
There would be a lot of wasted effort if other vendors decide they
don't like the approach, and Mozilla might be more reluctant to invest
in other solutions after they've put a lot of work into CSP.

I might be overestimating the difficulty of implementing CSP, but the
spec page is more than 6000 words, and it's not even particularly
precise (at least not as precise as HTML 5 is).  X-Frame-Options is
about one paragraph to fully specify, and can't have been too hard to
implement -- vendors making up things like that independently (or
HttpOnly cookies, etc.) is a lot more reasonable.


Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Sigbjørn Vik

On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:


Sigbjørn Vik wrote on 2/20/2009 8:46 AM:

One proposed way of doing this would be a single header, of the form:
x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
allow=*.opera.com,example.net;
This incorporates the idea from the IE team, and extends on it.


Have you taken a look at ABE?

http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf


I am not quite certain what you are referring to; the document is a ruleset for 
how to express what is allowed and disallowed. Do you mean that clients should 
be using a URL list, or that servers should be using this particular grammar to 
decide which headers to send with their URLs?
For a domain-wide policy file, a document like this might work well, though.


For cross-domain resources, this means that a browser would first have
to make a request with GET and without authentication tokens to get the
x-cross-domain-options settings from the resource. If the settings
allow, a second request may be made, if the second request would be
different. The result of the last request is handed over to the document.


Have you considered using OPTIONS for the pre-flight request, similar to  
how Access Control for Cross-Site Requests does it?


http://www.w3.org/TR/access-control/#cross-site2


Good point. Trying to use OPTIONS for existing servers might break them; a GET 
might be safer. Then again, OPTIONS shouldn't break anything, GETs might have 
side-effects where OPTIONS don't, and an OPTIONS reply typically has a much 
smaller payload than a GET reply. In the case of a reply to a pre-flight 
request where the user agent has cookies but the server replies that contents 
are the same with or without cookies, an OPTIONS request would require two 
requests, a GET only one. OPTIONS is probably more in the spirit of HTTP though.

Either could work; the idea is the same. Which would be better would have to be 
researched empirically, but OPTIONS might be the better candidate.
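
To make the comparison concrete, a hypothetical OPTIONS pre-flight under the 
proposed header (values from the example earlier in this thread; the Origin 
header borrowed from the Access Control draft):

OPTIONS /widget HTTP/1.1
Host: example.net
Origin: http://www.opera.com

HTTP/1.1 200 OK
x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin; allow=*.opera.com,example.net

Only if those settings permit it would the real, credentialed request follow.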

--
Sigbjørn Vik
Quality Assurance
Opera Software




Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Giorgio Maone

On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:


Sigbjørn Vik wrote on 2/20/2009 8:46 AM:

One proposed way of doing this would be a single header, of the form:
x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
allow=*.opera.com,example.net;
This incorporates the idea from the IE team, and extends on it.


Have you taken a look at ABE?

http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf


I am not quite certain what you are referring to; the document is a 
ruleset for how to express what is allowed and disallowed. Do you mean 
that clients should be using a URL list, or that servers should be 
using this particular grammar to decide which headers to send with 
their URLs?
For a domain-wide policy file, a document like this might work well, 
though.

ABE is meant to be configured in 3 ways:

  1. With user-provided rules, deployed directly client-side
  2. With community-provided rules, downloaded periodically from a
     trusted repository
  3. As a site-wide policy deployed on the server side in a single
     file, much like crossdomain.xml

See http://hackademix.net/2008/12/20/introducing-abe/ and especially this 
comment about site-provided rules and merging:
http://hackademix.net/2008/12/20/introducing-abe/#comment-10165

--
Giorgio


Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Sigbjørn Vik

On Mon, 23 Feb 2009 14:23:40 +0100, Giorgio Maone g.ma...@informaction.com wrote:


On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:


Sigbjørn Vik wrote on 2/20/2009 8:46 AM:

One proposed way of doing this would be a single header, of the form:
x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
allow=*.opera.com,example.net;
This incorporates the idea from the IE team, and extends on it.


Have you taken a look at ABE?

http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf


I am not quite certain what you are referring to; the document is a
ruleset for how to express what is allowed and disallowed. Do you mean
that clients should be using a URL list, or that servers should be
using this particular grammar to decide which headers to send with
their URLs?
For a domain-wide policy file, a document like this might work well,
though.

ABE is meant to be configured in 3 ways:

  1. With user-provided rules, deployed directly client-side
  2. With community-provided rules, downloaded periodically from a
     trusted repository
  3. As a site-wide policy deployed on the server side in a single
     file, much like crossdomain.xml

See http://hackademix.net/2008/12/20/introducing-abe/ and especially this
comment about site-provided rules and merging:
http://hackademix.net/2008/12/20/introducing-abe/#comment-10165


Yes, a domain-wide policy file might be good to have, but it could not entirely 
replace having a header settable for a single resource; not all web authors 
have access to the root, so it would have to come as an addition, an optional 
replacement.

If a domain-wide policy file is used, it would make sense to have it in a 
format which can be distributed and applied locally, so users can patch web 
sites that don't do it themselves. ABE looks like a good candidate for all of 
this. A good candidate might also have to be implementable by the server, so 
that a server can look at the policy file, and determine which headers to send 
for any particular resource, including which resources to send no headers for 
at all. Presumably ABE would work for that too.
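
As a rough sketch of that last idea (file format and names invented for 
illustration), the server side could be as simple as:

# Map path prefixes from a site-wide policy file to header values;
# None means the resource gets no x-cross-domain-options header at all.
POLICY = {
    '/public/':  None,
    '/account/': 'deny=frame,post,auth; AllowSameOrigin',
}

def headers_for(path):
    for prefix, rule in POLICY.items():
        if path.startswith(prefix):
            return [] if rule is None else [('x-cross-domain-options', rule)]
    return []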

--
Sigbjørn Vik
Quality Assurance
Opera Software




Re: [whatwg] Clickjacking and CSRF

2009-02-20 Thread Giorgio Maone

Sigbjørn Vik wrote, On 20/02/2009 15.46:
There is currently little protection against clickjacking; 
x-frame-options is the first attempt.

Nope, it's the second and weakest:
http://hackademix.net/2008/10/08/hello-clearclick-goodbye-clickjacking/
http://noscript.net/faq#clearclick
--
Giorgio Maone
http://hackademix.net


Re: [whatwg] Clickjacking and CSRF

2009-02-20 Thread Sigbjørn Vik

On Fri, 20 Feb 2009 16:00:09 +0100, Giorgio Maone g.ma...@informaction.com wrote:


Sigbjørn Vik wrote, On 20/02/2009 15.46:
There is currently little protection against clickjacking;  
x-frame-options is the first attempt.

Nope, it's the second and weakest:
http://hackademix.net/2008/10/08/hello-clearclick-goodbye-clickjacking/
http://noscript.net/faq#clearclick


I stand corrected. I was thinking too narrow-mindedly, from a browser vendor 
perspective. Frame busting is another existing alternative.

--
Sigbjørn Vik
Quality Assurance
Opera Software




Re: [whatwg] Clickjacking and CSRF

2009-02-20 Thread Bil Corry
Sigbjørn Vik wrote on 2/20/2009 8:46 AM: 
 One proposed way of doing this would be a single header, of the form:
 x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
 allow=*.opera.com,example.net;
 This incorporates the idea from the IE team, and extends on it.

Have you taken a look at ABE?

http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf


 For cross-domain resources, this means that a browser would first have
 to make a request with GET and without authentication tokens to get the
 x-cross-domain-options settings from the resource. If the settings
 allow, a second request may be made, if the second request would be
 different. The result of the last request is handed over to the document.

Have you considered using OPTIONS for the pre-flight request, similar to how 
Access Control for Cross-Site Requests does it?

http://www.w3.org/TR/access-control/#cross-site2



- Bil