Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Mon, Oct 26, 2009 at 6:11 PM, Daniel Veditz dved...@mozilla.com wrote:
> They have already opted in by adding the CSP header. Once they've
> opted-in to our web-as-we-wish-it-were they have to opt-out of the
> restrictions that are too onerous for their site.

I understand the seductive power of secure-by-default here.  It's
important to understand what we're giving up in terms of complexity
and extensibility.

> We feel
> extraordinarily strongly that sites should have to explicitly say they
> want to run inline-script, like signing a waiver that you're going
> against medical advice. The only thing that is likely to deter us is
> releasing a test implementation and then crashing and burning while
> trying to implement a reasonable test site like AMO or MDC or the
> experiences of other web developers doing the same.

This statement basically forecloses further discussion because it does
not advance a technical argument that I can respond to.  In this
forum, you are the king and I am but a guest.

My technical argument is as follows.  I think that CSP would be better
off with a policy language where each directive was purely subtractive
because that design would have a number of simplifying effects:

1) Forward and backward compatibility.  As long as sites did not use
the features blocked by their CSP directives, their sites would
function correctly in partial / future implementations of CSP.

2) Modularity.  We would be free to group the directives into whatever
modules we liked because there would be no technical interdependence.

3) Trivial Combination.  Instead of the current elaborate algorithm
for combining policies, we could simply concatenate the directives.
An attacker who could inject a Content-Security-Policy header could
then only further reduce his/her privileges.

4) Syntactic Simplicity.  Instead of two combination operators, ';'
for union and ',' for intersection, we could simply use ',' and match
standard HTTP header syntax.
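To make points 3 and 4 concrete, here is a minimal sketch (in Python, using
hypothetical directive names from this thread, not any shipped syntax) of how
purely subtractive policies could be combined by simple concatenation:

```python
def effective_restrictions(*headers):
    """Combine policy headers by concatenation: the effective policy
    is simply the union of every directive seen in every header."""
    restrictions = set()
    for header in headers:
        for directive in header.split(","):
            if directive.strip():
                restrictions.add(directive.strip())
    return restrictions

# Hypothetical subtractive directives, purely for illustration.
site = effective_restrictions("block-xss, block-eval")

# An attacker who injects an extra header can only ADD restrictions,
# never remove one the site already set.
attacked = effective_restrictions("block-xss, block-eval", "block-frames")
assert site.issubset(attacked)
```

The point of the sketch is the monotonicity property: concatenating headers can
only shrink privileges, so no combination algorithm is needed.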

Balancing against these pros, the con seems to be that we hope the
additive, opt-out syntax will prod web developers into realizing that
adding 'script-src inline' to the tutorial code they copy-and-paste is
more dangerous than removing 'block-xss'.

Adam
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Devdatta
Hi

There are two threads running in parallel here:

1) Should blocking XSS be the default behaviour of adding an
X-Content-Security-Policy header? (instead of the straw-man proposal
where an additional 'block-xss' directive would be required)
2) Should the result of blocking XSS also cause eval and inline
scripts to be disabled?

If 1 is the case, then blocking eval and inline scripts by default is,
imho, unacceptable.  The reasons are the same as Adam succinctly pointed
out in his 'Forward and backward compatibility' bullet in the previous
mail.

But if, to enable XSS protection, the user types in block-xss, then I
think Brandon's argument makes sense: block-xss should block XSS, which
requires us to disable eval and inline scripts.  But if for
compatibility the user wants to continue supporting them, he should
explicitly add support for them with, say, 'allow-eval'.  With a
block-eval directive, the correct policy would always be 'block-xss
block-eval', which doesn't make sense to me if we are hoping that eval
support would just be a stopgap while the web admins figure out how
to get by without it.
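For concreteness, the two straw-man policies under discussion would look
something like this (the directive names are hypothetical ones from this
thread, not from any shipped implementation):

```
# Straw man A: opting in to XSS protection disables eval, so the site
# must explicitly allow it back:
X-Content-Security-Policy: block-xss; allow-eval

# Straw man B: eval blocking is a separate directive, so a complete
# policy must name both:
X-Content-Security-Policy: block-xss block-eval
```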


Regards
Devdatta



Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Daniel Veditz
On 10/27/09 2:33 AM, Adam Barth wrote:
> I understand the seductive power of secure-by-default here.

If only she loved me back.

> This statement basically forecloses further discussion because it does
> not advance a technical argument that I can respond to.  In this
> forum, you are the king and I am but a guest.

I don't think we're having a technical argument, and we're not getting
the feedback we need to break the impasse in this limited forum. Either
syntax can be made to express the same set of current restrictions.
You're arguing for extensible syntax, and I'm arguing for what will best
encourage the most web authors to do the right thing.

An argument about whether your syntax is or is not more extensible can
at least be made on technical merits, but what I really want is feedback
from potential web app authors about which approach is more intuitive
and useful to them. Those folks aren't here, and I don't know how to
reach them.

At a technical level your approach appears to be a blacklist. If I'm
understanding you correctly, if there's an empty CSP header then there's
no restriction whatsoever on the page. In our version it'd be a
locked-down page with a default inability to load source from anywhere.
If the web author has left something out they will know because the page
will not work. I'd rather have that than a web author thinking they're
safe when CSP isn't actually turned on for their page.
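The difference Dan describes can be stated in a few lines (an illustrative
sketch of the two models, not either proposal's actual semantics):

```python
def blacklist_allows(resource, blocked):
    # Opt-out model: everything is allowed unless a directive blocks it.
    return resource not in blocked

def whitelist_allows(resource, allowed):
    # Opt-in model: nothing is allowed unless the policy grants it.
    return resource in allowed

# With an empty policy the two models give opposite answers:
empty = set()
blacklist_allows("http://evil.example/x.js", empty)   # no restriction at all
whitelist_allows("http://evil.example/x.js", empty)   # fully locked down
```

In the whitelist model a missing or empty policy fails closed, which is the
"page will not work" behaviour Dan prefers over silently unprotected pages.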

The bottom line, though, is I'm in favor of anything that gets more web
sites and more browsers to support the concept.

-Dan


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Tue, Oct 27, 2009 at 12:39 PM, Daniel Veditz dved...@mozilla.com wrote:
> I don't think we're having a technical argument, and we're not getting
> the feedback we need to break the impasse in this limited forum.

I agree that we're not making progress in this discussion.

At a high level, the approach of letting sites restrict the
privileges of their own content is a rich space for security
mechanisms.  My opinion is that the current CSP design is overly
complex for the use cases it supports and insufficiently flexible as a
platform for addressing future use cases.  If I find the time, I'll
send along a full design that tries to improve these aspects along the
lines I've suggested in the foregoing discussion.

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Brandon Sterne

On 10/27/2009 02:33 AM, Adam Barth wrote:

> My technical argument is as follows.  I think that CSP would be better
> off with a policy language where each directive was purely subtractive
> because that design would have a number of simplifying effects:


I couldn't find a comment that summarizes the model you are proposing so 
I'll try to recreate your position from memory of our last phone 
conversation.  Please correct me where I'm wrong.


I believe you advocate a model where a site specifies the directives it
knows/cares about, and everything else is allowed.  This model would
make the default allow directive unnecessary.  The main idea is to
allow sites to restrict the things they know about and not have to
worry about inadvertently blocking things they don't consider a risk.


My main objection to this approach is that it turns the whitelist
approach we started with into a hybrid whitelist/blacklist.  The
proposal doesn't support the simple use case of a site saying:
"I only want the following things (e.g. script and images from myself).
Disallow everything else."


Under your proposal, this site needs to explicitly opt-out of every 
directive, including any new directives that get added in the future. 
We're essentially forcing sites to maintain an exhaustive blacklist for 
all time in order to avoid us (browsers) accidentally blocking things in 
the future that the site forgot to whitelist.



> 1) Forward and backward compatibility.  As long as sites did not use
> the features blocked by their CSP directives, their sites would
> function correctly in partial / future implementations of CSP.


Under your proposed model, a site will continue to function correctly
only in the sense that nothing will be blocked in newer implementations
of CSP that wouldn't also have been blocked in a legacy implementation.
From my perspective, the blocking occurs when something unexpected by
the site was included in the page.  In our model, the newer
implementation, while potentially creating an inconsistency with the
older version, has also potentially blocked an attack.


Are you suggesting that a blocked resource is more likely to have come 
from a web developer who forgot to update the CSP when s/he added new 
content than it is to have been injected by an attacker?  This seems 
like a dangerous assumption.  All we are getting, in this case, is 
better consistency in behavior from CSP 
implementation-to-implementation, but not better security.



> 2) Modularity.  We would be free to group the directives into whatever
> modules we liked because there would be no technical interdependence.


I actually don't see how opt-in vs. opt-out has any bearing at all on 
module interdependence.  Maybe you can provide an example?


Let's also not forget that CSP modularity really only helps browser 
vendors.  From the perspective of websites, CSP modules are just one 
more thing that they have to keep track of in terms of which browsers 
support which modules.  I support the idea of making it easier for other 
browser vendors to implement CSP piecemeal, but our primary motivation 
should remain making the lives of websites and their users better.



> 3) Trivial Combination.  Instead of the current elaborate algorithm
> for combining policies, we could simply concatenate the directives.
> An attacker who could inject a Content-Security-Policy header could
> then only further reduce his/her privileges.


In the case of an injected header, this is already the case now.  We 
intersect both policy sets, resulting in a combined policy more 
restrictive than either of the two separate policies.


If we are talking about an attacker who can inject an additional 
directive into an existing CSP header then, yes, the attacker could 
relax the policy intended to be set by the site.  I'm not sure how 
much I care about this case.
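The intersection behaviour Brandon describes for a second injected header
might be sketched like this (hypothetical source lists; illustrative only, not
the spec's actual algorithm):

```python
def intersect_policies(a, b):
    """Combine two whitelist policies: a source is allowed only if
    both policies allow it, so the result is at least as restrictive
    as either input."""
    return {d: a.get(d, set()) & b.get(d, set())
            for d in a.keys() | b.keys()}

site     = {"script-src": {"'self'", "cdn.example.com"}}
injected = {"script-src": {"'self'", "evil.example.com"}}

combined = intersect_policies(site, injected)
# The injected policy cannot add evil.example.com; only "'self'"
# survives in both whitelists.
```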



> 4) Syntactic Simplicity.  Instead of two combination operators, ';'
> for union and ',' for intersection, we could simply use ',' and match
> standard HTTP header syntax.


Okay, sure.


> Balancing against these pros, the con seems to be that we hope the
> additive, opt-out syntax will prod web developers into realizing that
> adding 'script-src inline' to the tutorial code they copy-and-paste is
> more dangerous than removing 'block-xss'.


Those seem equivalent to me, so I'm not sure which model your example 
favors.


In general, I'm slightly skeptical of the view that we need to base our 
design around the fact that admins will copy-paste from tutorials. 
Sure, this will happen in practice, but what is the probability that 
such a site is a high value target for an attacker, and by extension how 
important is it that such a site gets CSP right?  Remember, a site 
cannot make their security profile any worse with CSP than without it.


I do want CSP to be easy to get right.  I should do some homework and 
collect some stats on real world websites to support the following 
claim, but I still maintain that a HUGE number of sites will be 

Opt-in versus opt-out (was Re: CSRF Module)

2009-10-27 Thread Adam Barth
On Tue, Oct 27, 2009 at 3:54 PM, Brandon Sterne bste...@mozilla.com wrote:
> I couldn't find a comment that summarizes the model you are proposing so
> I'll try to recreate your position from memory of our last phone
> conversation.

I'll try to find the time to write a complete specification.

> I believe you advocate a model where a site specifies the directives it
> knows/cares about, and everything else is allowed.

The design I suggest is simpler than this.  The site just lists which
restrictions it would like applied to its content.  Each directive is
purely subtractive: adding more directives only further restricts what
the site can do.

> This model would make
> the default allow directive unnecessary.  The main idea is to allow sites
> to restrict the things they know about and not have to worry about
> inadvertently blocking things they don't consider a risk.

That's correct.  I also think it makes sense to package the
restrictions into meaningful directives that address specific
threats.

> My main objection to this approach is that it turns the whitelist approach
> we started with into a hybrid whitelist/blacklist.

The design is a pure blacklist.  Just like turning off unused
operating system services, content restrictions should let web
developers turn off features they aren't using.

> The proposal doesn't
> support the simple use case of a site saying:
> "I only want the following things (e.g. script and images from myself).
> Disallow everything else."

The problem is that "everything else" is ill-defined.  Should we turn
off canvas?  That's a "thing" that's not a script or an image from
myself.  CSP, as currently designed, has a hard-coded universe of
things it cares about, which limits its use as a platform for
addressing future use cases.  It is a poor protocol that doesn't plan
for future extensibility.

> Under your proposal, this site needs to explicitly opt-out of every
> directive, including any new directives that get added in the future.

Not really.  When we invent new directives, sites can opt in to them
by adding them to their policy.  Just like you can opt in to new HTML5
features by adding new HTML tags to your document.

> We're
> essentially forcing sites to maintain an exhaustive blacklist for all time
> in order to avoid us (browsers) accidentally blocking things in the future
> that the site forgot to whitelist.

Web developers are free to ignore CSP directives that mitigate threats
they don't care about.  There is no need for web developers to
maintain an exhaustive list of anything.

> Under your proposed model, a site will continue to function correctly only
> in the sense that nothing will be blocked in newer implementations of CSP
> that wouldn't also have been blocked in a legacy implementation.

That's correct.  The semantics of a given CSP policy does not change
as new directives are invented and added to the language, just as the
semantics of an old HTML document doesn't change just because we
invented the canvas tag.

> From my
> perspective, the blocking occurs when something unexpected by the site was
> included in the page.  In our model, the newer implementation, while
> potentially creating an inconsistency with the older version, has also
> potentially blocked an attack.

You're extremely focused on load resources and missing the bigger picture.

> Are you suggesting that a blocked resource is more likely to have come from
> a web developer who forgot to update the CSP when s/he added new content
> than it is to have been injected by an attacker?

I'm not suggesting this at all.  Nothing in my argument has to do with
probabilities.

> This seems like a
> dangerous assumption.  All we are getting, in this case, is better
> consistency in behavior from CSP implementation-to-implementation, but not
> better security.

Consistency between implementations is essential.  Mitigating important
threats is also essential.  Neither is more important than the other.

>> 2) Modularity.  We would be free to group the directives into whatever
>> modules we liked because there would be no technical interdependence.
>
> I actually don't see how opt-in vs. opt-out has any bearing at all on module
> interdependence.  Maybe you can provide an example?

Sure.  Suppose I want to implement enough of CSP to let web developers
protect themselves from Type-I and Type-II XSS (e.g., because I view
that as the lion's share of the benefit).  How can I do that in the
current CSP design without affecting the targets of XMLHttpRequest?
Surely you agree that restricting the targets of XMLHttpRequests has
little (if anything) to do with mitigating Type-I or Type-II XSS, yet
these parts of CSP are so interdependent that I'm forced to implement
all of them or none of them.

> Let's also not forget that CSP modularity really only helps browser vendors.

Complexity hurts everyone.  The current monolithic CSP design is
overly complex for the security it provides.  There are much simpler
designs that provide the same security benefits.

> From the

Re: Opt-in versus opt-out (was Re: CSRF Module)

2009-10-27 Thread Brandon Sterne
On 10/27/09 4:32 PM, Adam Barth wrote:
> On Tue, Oct 27, 2009 at 3:54 PM, Brandon Sterne bste...@mozilla.com wrote:
>> My main objection to this approach is that it turns the whitelist approach
>> we started with into a hybrid whitelist/blacklist.
>
> The design is a pure blacklist.  Just like turning off unused
> operating system services, content restrictions should let web
> developers turn off features they aren't using.

I find it rather surreal that we are arguing over whether to implement a
whitelist or a blacklist in CSP.  I am strongly in the whitelist camp
and I have seen no strong evidence that reversing the approach is the
right way to go.  Are there others who honestly feel a blacklist is a
wise approach?

>> The proposal doesn't
>> support the simple use case of a site saying:
>> "I only want the following things (e.g. script and images from myself).
>> Disallow everything else."
>
> The problem is that "everything else" is ill-defined.

I disagree completely.  It's the things I haven't explicitly approved.

> Should we turn
> off canvas?  That's a "thing" that's not a script or an image from
> myself.

So are objects, stylesheets and every other type of content we have
enumerated a policy directive for.  We can add other directives if we
think there is value in doing so for specific browser capabilities.

> CSP, as currently designed, has a hard-coded universe of
> things it cares about, which limits its use as a platform for
> addressing future use cases.  It is a poor protocol that doesn't plan
> for future extensibility.

The list of things needs to be hard-coded whether sites opt in to or
opt out of using them.

Do you have any support for your claim that we don't plan for future
extensibility?  Our proposal is clear that browsers should skip over
directives they don't understand which allows for new directives to be
added in the future.
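That skip-unknown rule is easy to sketch (a hypothetical directive set and a
simplified grammar, not Mozilla's actual parser):

```python
KNOWN_DIRECTIVES = {"allow", "script-src", "img-src"}  # hypothetical subset

def parse_policy(header):
    """Parse a policy header, silently skipping directives this
    implementation does not recognize, so policies written for newer
    browsers still parse in older ones."""
    policy = {}
    for clause in header.split(";"):
        parts = clause.split()
        if parts and parts[0] in KNOWN_DIRECTIVES:
            policy[parts[0]] = parts[1:]
    return policy

p = parse_policy("allow 'self'; shiny-new-directive 'none'; img-src *")
# "shiny-new-directive" is skipped; "allow" and "img-src" are kept.
```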

>> Under your proposal, this site needs to explicitly opt-out of every
>> directive, including any new directives that get added in the future.
>
> Not really.  When we invent new directives, sites can opt in to them
> by adding them to their policy.  Just like you can opt in to new HTML5
> features by adding new HTML tags to your document.

Remember the use case I gave as an example.  Site wants X and Y and
nothing more.  In your model, not only _can_ sites add new policy as we
add new directives, they _have to_ if they want to restrict themselves
to X and Y.

>> We're
>> essentially forcing sites to maintain an exhaustive blacklist for all time
>> in order to avoid us (browsers) accidentally blocking things in the future
>> that the site forgot to whitelist.
>
> Web developers are free to ignore CSP directives that mitigate threats
> they don't care about.  There is no need for web developers to
> maintain an exhaustive list of anything.

Again, they do if they want to strictly whitelist the types of content
in their site.

>> Under your proposed model, a site will continue to function correctly only
>> in the sense that nothing will be blocked in newer implementations of CSP
>> that wouldn't also have been blocked in a legacy implementation.
>
> That's correct.  The semantics of a given CSP policy does not change
> as new directives are invented and added to the language, just as the
> semantics of an old HTML document doesn't change just because we
> invented the canvas tag.

We're talking about _unintended_ content being injected in the pages.
If browsers add some risky new feature (and I'm not saying canvas is
that) then a site which doesn't use the feature shouldn't have to update
their policy to stay opted out.  They never opted in in the first place.
Think Principle of Least Surprise.

>> From my
>> perspective, the blocking occurs when something unexpected by the site was
>> included in the page.  In our model, the newer implementation, while
>> potentially creating an inconsistency with the older version, has also
>> potentially blocked an attack.
>
> You're extremely focused on load resources and missing the bigger picture.

You did not address my point which was one example of how opting-in to
features provides better security.

>> Are you suggesting that a blocked resource is more likely to have come from
>> a web developer who forgot to update the CSP when s/he added new content
>> than it is to have been injected by an attacker?
>
> I'm not suggesting this at all.  Nothing in my argument has to do with
> probabilities.

Okay, I'll pose the same question a different way: do you think it is
more important to avoid false negatives (allow harmful content through)
than it is to avoid false positives (block benign content) in the
absence of an explicit policy?

>> This seems like a
>> dangerous assumption.  All we are getting, in this case, is better
>> consistency in behavior from CSP implementation-to-implementation, but not
>> better security.
>
> Consistency between implementations is essential.  Mitigating important
> threats is also essential.  Neither is more important than the other.

I disagree.  I