Re: Firefox Security Issue

2011-07-25 Thread Brandon Sterne
On 07/25/2011 10:18 AM, Fluit (CU) wrote:
 Thanx for Mozilla team's hard work and fine add-ons
 
 It came to my attention that certain sites are storing Internet
 Explorer cookies via Firefox.
 I am sure of it, but don't know how to prove it.
 
 Is it possible that the Firefox or add-ons team can put a stop to this?
 I NEVER use IE, and somehow cookies are adding up there.
 
 I did make certain firewall adjustments to counter that
 
 Your help in this regard would be appreciated
 
 END
 *AN ADD-ON OF FIREFOX THAT DISABLE IE COOKIE FUNCTION COMPLETELY*
 
 Thanx Team
 
 
 Fluit
 

Hi Fluit,

I'm not sure how you came to the conclusion that Firefox or an add-on is
responsible for putting IE cookies on your system.  It is very likely
that one of the many other Windows applications that use IE's rendering
engine made a request to the Web that resulted in cookies for that web
site being stored.  I hope that helps.

Regards,

Brandon Sterne
Mozilla Security Group

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSP and contents of script tag

2011-03-22 Thread Brandon Sterne
+CC public-web-security

Hi Dave,

Thanks for the question.  I'm CCing the group that is in the process of
standardizing CSP.

I have two points to make in response:

1. I agree that the behavior for how the resulting script node is
created should be specified.  I personally don't see any harm in
allowing the text inside the script node to be created.

2. There are other ways you can provide an "island of data," as you say,
without using the text section of a script element.  Script will always
have access to arbitrary DOM nodes, so JSON-encoded data can be placed
there and parsed with JSON.parse, which Browserscope reports is supported
by essentially every major browser [1].
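A minimal sketch of point 2 above, runnable outside the browser for illustration. The markup, id, and data here are my assumptions, not anything from the spec; in a real page the string would come from the node's text content:

```javascript
// Assumed data island markup (hypothetical example):
//   <script id="config" type="application/json">{"user":"alice","admin":false}</script>
// Page script reads the node's text and parses it with JSON.parse.
const islandText = '{"user":"alice","admin":false}'; // stands in for node.textContent
const config = JSON.parse(islandText);
console.log(config.user); // "alice"
```

Because the element's type is application/json, no browser executes it as script; it is only ever read as inert text.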

Cheers,
Brandon

[1] http://www.browserscope.org/?category=security

On 03/22/2011 05:33 AM, Dave wrote:
 The CSP seems like it is going to be really useful.  I have been
 looking into specifying an approach to javascript that is compatible
 with the CSP but also takes into account other things that people are
 concerned about when designing web pages as well.  One of those things
 is giving javascript access to a 'data island' in the HTML.
 Originally I was thinking of a hidden div element with JSON encoded
 data that javascript could read.  Further research revealed some scope
 for this in HTML5, by means of specifying an in-line script tag with a
 type of application/json, see 
 http://dev.w3.org/html5/spec/Overview.html#script.
 
 The CSP specification quite clearly states that a User Agent is
 supposed to not execute any in-line script.  But what exactly should
 it do with the in-line script content?  Should it make it available to
 the DOM?  Should it make it available to the DOM dependent on its type
 attribute?  What does blocking an in-line script tag even mean when
 the type of the tag isn't something the User Agent would execute
 anyway?
 
 It would be nice to see this called out in the CSP so we don't end up
 with different User Agents doing different things with in-line script
 (and other tag) content.  Clearly the security implications of what
 should happen to the contents of script tags also need to be
 considered.
 
 Dave


Re: CSP: JSON or XML for report-uri?

2010-06-14 Thread Brandon Sterne
Yes, this was updated in the spec but I forgot to update this document
as well.  Will do so shortly.

Thanks,
Brandon


On 06/11/2010 11:38 PM, Bil Corry wrote:
 I noticed that the details page located here:
 
   
 http://people.mozilla.org/~bsterne/content-security-policy/details.html#report-uri
 
 states that the violation report is an XML document -- e.g.:
 
   Sample report:
 
   <csp-report>
     <request>GET /index.html HTTP/1.1</request>
     <headers>Host: example.com
      User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9) Gecko/2008061015 Firefox/3.0
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
     </headers>
     <blocked>http://evil.com/some_image.png</blocked>
   </csp-report>
 
 But the spec itself states that it's JSON data.  I'm guessing JSON was 
 selected over XML?
 
 
 - Bil


CSP - Cookie leakage via report-uri

2010-06-08 Thread Brandon Sterne
Hello all,

I want to bring up an issue that was raised regarding the proposed
report-uri feature of Content Security Policy.

If you assume the following two flaws are present on a legacy server:
1. Attacker controls the value of the CSP header
2. A request-processing script on the server which doesn't validate
  POST requests it receives but simply places the POST data in a
  location accessible to the attacker

Then CSP introduces a new attack surface that can be used to steal
cookies or other authentication headers.  #2 above seems rather
contrived at first blush, but think of a Pastebin-type application that
blindly processes POSTs into publicly available content.  (Pastebin
itself is not vulnerable to this attack, since it validates the format
of the POSTs).

(Note that #1 doesn't require arbitrary HTTP response header injection
or HTTP response splitting.  The attacker must control only the value of
the policy header.)

One way we can address this is to suppress the value of any auth headers
that were present in the violation-generating request from the report
POST body.  This of course reduces the utility of the reports for server
debugging, but does guarantee that Cookie and related information won't
ever be leaked to attackers through the reports.
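The suppression step could look something like the sketch below. The helper name, the header list, and the report shape are all my illustrative assumptions, not spec text:

```javascript
// Hypothetical helper: before a violation report is POSTed to report-uri,
// drop Cookie and related auth-bearing headers from the request headers
// that would otherwise be echoed into the report body.
const SUPPRESSED = new Set(['cookie', 'set-cookie', 'authorization', 'proxy-authorization']);

function sanitizeReportHeaders(headers) {
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    // Header names are case-insensitive, so compare lowercased.
    if (!SUPPRESSED.has(name.toLowerCase())) out[name] = value;
  }
  return out;
}
```

An attacker-readable report would then still carry Host, User-Agent, and similar debugging context, but never the credentials themselves.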

Does this sound like the right approach?

Cheers,
Brandon



Re: CSP: What does "allow *" mean?

2010-03-12 Thread Brandon Sterne
On 03/12/2010 04:38 PM, Nick Kralevich wrote:
 While reading through the Formal Policy Syntax of the CSP, it occurred to me
 that the meaning of "allow *" might be confusing.  The wildcard seems to
 correspond to a hostname only, and not to a scheme or port.

Another great question.  I've made a change to the policy syntax that I
hope will clarify things.

source ::= 'self'
         | '*'
         | [scheme://]host[:port]

What this means is that '*' by itself implies inherited-scheme://*:*, but
'*' can still be used as a wildcard for hostname, port, or both.  We
didn't think it was wise to allow sites to wildcard schemes.  It doesn't
seem like too much to ask sites to enumerate the schemes they want to use.
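The semantics can be sketched as a small matcher. The function, parameter shapes, and string representation are my assumptions for illustration, not the spec's algorithm:

```javascript
// Hedged sketch of the revised wildcard matching: '*' may stand in for
// host, port, or both, but the scheme is never wildcarded; a bare '*'
// expands to inherited-scheme://*:*.
function matchesSource(source, request) {
  // source and request: { scheme, host, port } as plain strings
  const schemeOk = source.scheme === request.scheme;            // no scheme wildcard
  const hostOk = source.host === '*' || source.host === request.host;
  const portOk = source.port === '*' || source.port === request.port;
  return schemeOk && hostOk && portOk;
}
```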

   X-Content-Security-Policy: allow 'self'; img-src *; ...
 
 will allow an image from anywhere.  However, my reading of the syntax is
 that it will only allow images from the same scheme and default port.  (for
 example, an HTTP page couldn't include an image from an HTTPS source)
 
 1) Is my reading of "allow *" correct?

With this change to the spec, the above policy would now allow images
from the same scheme, any host, and any port.

 2) How does one specify a wildcard for any protocol?

I don't think we should allow that.  Do you have a reason to believe we
should?

Thanks very much for all the detailed feedback.  It's very much appreciated.

Cheers,
Brandon


A basis for comparing CSP Models

2009-10-29 Thread Brandon Sterne
People generally agree that content restrictions are a good idea and
will be a useful tool for websites.  Various designs have emerged with
different approaches as to how restrictions should be defined by sites
and applied by browsers.  I would like to propose a framework with which
to evaluate and compare the designs to help guide us to a common solution.

The following can be used to determine the costs and benefits of any
particular model for content restrictions:

1. How flexible is the model?  How many different use cases does the
model support?  Does the model allow sites to keep their baseline
functionality intact?

2. How easy is the model to implement for web sites? How much
specialized knowledge is required by admins?

3. What will the process of developing an appropriate policy look like
for a given model?

4. How easy does the model make it for an organization to reason about
the correctness or optimality of their policy?

5. How will the model fit into organizations' existing workflows?  For
example, how easily will organizations who currently perform positive or
negative testing incorporate the model?

6. How extensible is the model? How will the model handle future changes
such as the addition of a new directive, changes in the semantics of an
existing directive (e.g. script-src now restricts plugins'
scriptability), or a change in default behavior (inline style now
blocked by default)?

Please feel free to add any additional criteria that seem appropriate.

Cheers,
Brandon



Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Brandon Sterne

On 10/27/2009 02:33 AM, Adam Barth wrote:

My technical argument is as follows.  I think that CSP would be better
off with a policy language where each directive was purely subtractive
because that design would have a number of simplifying effects:


I couldn't find a comment that summarizes the model you are proposing so 
I'll try to recreate your position from memory of our last phone 
conversation.  Please correct me where I'm wrong.


I believe you advocate a model where a site specifies the directives it 
knows/cares about, and everything else is allowed.  This model would 
make the default "allow" directive unnecessary.  The main idea is to 
allow sites to restrict the things they know about and not have to worry 
about inadvertently blocking things they don't consider a risk.


My main objection to this approach is that it turns the whitelist 
approach we started with into a hybrid whitelist/blacklist.  The 
proposal doesn't support the simple use case of a site saying:
"I only want the following things (e.g. script and images from myself).
Disallow everything else."


Under your proposal, this site needs to explicitly opt-out of every 
directive, including any new directives that get added in the future. 
We're essentially forcing sites to maintain an exhaustive blacklist for 
all time in order to avoid us (browsers) accidentally blocking things in 
the future that the site forgot to whitelist.



1) Forward and backward compatibility.  As long as sites did not use
the features blocked by their CSP directives, their sites would
function correctly in partial / future implementations of CSP.


Under your proposed model, a site will continue to function correctly 
only in the sense that nothing will be blocked in newer implementations 
of CSP that wouldn't also have been blocked in a legacy implementation. 
 From my perspective, the blocking occurs when something unexpected by 
the site was included in the page.  In our model, the newer 
implementation, while potentially creating an inconsistency with the 
older version, has also potentially blocked an attack.


Are you suggesting that a blocked resource is more likely to have come 
from a web developer who forgot to update the CSP when s/he added new 
content than it is to have been injected by an attacker?  This seems 
like a dangerous assumption.  All we are getting, in this case, is 
better consistency in behavior from CSP 
implementation-to-implementation, but not better security.



2) Modularity.  We would be free to group the directives into whatever
modules we liked because there would be no technical interdependence.


I actually don't see how opt-in vs. opt-out has any bearing at all on 
module interdependence.  Maybe you can provide an example?


Let's also not forget that CSP modularity really only helps browser 
vendors.  From the perspective of websites, CSP modules are just one 
more thing that they have to keep track of in terms of which browsers 
support which modules.  I support the idea of making it easier for other 
browser vendors to implement CSP piecemeal, but our primary motivation 
should remain making the lives of websites and their users better.



3) Trivial Combination.  Instead of the current elaborate algorithm
for combining policies, we could simply concatenate the directives.
An attacker who could inject a Content-Security-Policy header could
then only further reduce his/her privileges.


In the case of an injected header, this is already the case now.  We 
intersect the two policy sets, resulting in a combined policy more 
restrictive than either of the two separate policies.
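The intersection point can be sketched per directive. This is an illustrative toy (the source-list representation and helper name are mine, not the spec's combination algorithm):

```javascript
// Why intersecting two policies can only tighten them: for each directive,
// keep only the sources that appear in BOTH policies' allowed lists.
function intersectSources(a, b) {
  const allowedInB = new Set(b);
  return a.filter((src) => allowedInB.has(src));
}
```

An injected second policy can therefore remove sources from the effective allow list, but never add one.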


If we are talking about an attacker who can inject an additional 
directive into an existing CSP header then, yes, the attacker could 
relax the policy intended to be set by the site.  I'm not sure how 
much I care about this case.



4) Syntactic Simplicity.  Instead of two combination operators, ;
for union and , for intersection, we could simply use , and match
standard HTTP header syntax.


Okay, sure.


Balancing against these pros, the con seems to be that we hope the
additive, opt-out syntax will prod web developers into realizing that
adding "script-src inline" to the tutorial code they copy-and-paste is
more dangerous than removing "block-xss".


Those seem equivalent to me, so I'm not sure which model your example 
favors.


In general, I'm slightly skeptical of the view that we need to base our 
design around the fact that admins will copy-paste from tutorials. 
Sure, this will happen in practice, but what is the probability that 
such a site is a high value target for an attacker, and by extension how 
important is it that such a site gets CSP right?  Remember, a site 
cannot make their security profile any worse with CSP than without it.


I do want CSP to be easy to get right.  I should do some homework and 
collect some stats on real world websites to support the following 
claim, but I still maintain that a HUGE number of sites will be 

Re: Opt-in versus opt-out (was Re: CSRF Module)

2009-10-27 Thread Brandon Sterne
On 10/27/09 4:32 PM, Adam Barth wrote:
 On Tue, Oct 27, 2009 at 3:54 PM, Brandon Sterne bste...@mozilla.com wrote:
 My main objection to this approach is that it turns the whitelist approach
 we started with into a hybrid whitelist/blacklist.
 
 The design is a pure blacklist.  Just like turning off unused
 operating system services, content restrictions should let web
 developers turn off features they aren't using.

I find it rather surreal that we are arguing over whether to implement a
whitelist or a blacklist in CSP.  I am strongly in the whitelist camp
and I have seen no strong evidence that reversing the approach is the
right way to go.  Are there others who honestly feel a blacklist is a
wise approach?

  The proposal doesn't
 support the simple use case of a site saying:
 "I only want the following things (e.g. script and images from myself).
  Disallow everything else."
 
 The problem is that "everything else" is ill-defined.

I disagree completely.  It's the things I haven't explicitly approved.

  Should we turn
 off canvas?  That's a thing that's not a script or an image from
 myself. 

So are objects, stylesheets and every other type of content we have
enumerated a policy directive for.  We can add other directives if we
think there is value in doing so for specific browser capabilities.

 CSP, as currently designed, has a hard-coded universe of
 things it cares about, which limits its use as a platform for
 addressing future use cases.  It is a poor protocol that doesn't plan
 for future extensibility.

The list of things needs to be hard coded whether or not we allow
sites to opt-in or opt-out of using them.

Do you have any support for your claim that we don't plan for future
extensibility?  Our proposal is clear that browsers should skip over
directives they don't understand which allows for new directives to be
added in the future.

 Under your proposal, this site needs to explicitly opt-out of every
 directive, including any new directives that get added in the future.
 
 Not really.  When we invent new directives, sites can opt in to them
 by adding them to their policy.  Just like you can opt in to new HTML5
 features by adding new HTML tags to your document.

Remember the use case I gave as an example.  Site wants X and Y and
nothing more.  In your model, not only _can_ sites add new policy as we
add new directives, they _have to_ if they want to restrict themselves
to X and Y.

 We're
 essentially forcing sites to maintain an exhaustive blacklist for all time
 in order to avoid us (browsers) accidentally blocking things in the future
 that the site forgot to whitelist.
 
 Web developers are free to ignore CSP directives that mitigate threats
 they don't care about.  There is no need for web developers to
 maintain an exhaustive list of anything.

Again, they do if they want to strictly whitelist the types of content
in their site.

 Under your proposed model, a site will continue to function correctly only
 in the sense that nothing will be blocked in newer implementations of CSP
 that wouldn't also have been blocked in a legacy implementation.
 
 That's correct.  The semantics of a given CSP policy does not change
 as new directives are invented and added to the language, just as the
 semantics of an old HTML document doesn't change just because we
 invented the canvas tag.

We're talking about _unintended_ content being injected in the pages.
If browsers add some risky new feature (and I'm not saying canvas is
that) then a site which doesn't use the feature shouldn't have to update
their policy to stay opted-out.  They never opted-in in the first place.
 Think Principle of Least Surprise.

  From my
 perspective, the blocking occurs when something unexpected by the site was
 included in the page.  In our model, the newer implementation, while
 potentially creating an inconsistency with the older version, has also
 potentially blocked an attack.
 
 You're extremely focused on resource loads and missing the bigger picture.

You did not address my point which was one example of how opting-in to
features provides better security.

 Are you suggesting that a blocked resource is more likely to have come from
 a web developer who forgot to update the CSP when s/he added new content
 than it is to have been injected by an attacker?
 
 I'm not suggesting this at all.  Nothing in my argument has to do with
 probabilities.

Okay, I'll pose the same question a different way: do you think it is
more important to avoid false positives (allow harmful content through)
than it is to avoid false negatives (block benign content) in the
absence of an explicit policy?

  This seems like a
 dangerous assumption.  All we are getting, in this case, is better
 consistency in behavior from CSP implementation-to-implementation, but not
 better security.
 
 Consistency between implementations is essential.  Mitigating important
 threats is also essential.  Neither is more important than the other.

I disagree.  I

Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Brandon Sterne
I'd like to take a quick step back before we proceed further with the 
modularization discussion.  I think it is fine to split CSP into 
modules, but with the following caveats:


1. Splitting the modules based upon different threat models doesn't seem 
to be the right approach.  There are many areas where the threats we 
want to mitigate overlap in terms of browser functionality.  A better 
approach, IMHO, is to create the modules based upon browser 
capabilities.  With those capability building blocks, sites can then 
construct policy sets to address any given threat model (including ones 
we haven't thought of yet).


2. The original goal of CSP was to mitigate XSS attacks.  The scope of 
the proposal has grown substantially, which is fine, but I'm not at all 
comfortable with a product that does not require the XSS protections as 
the fundamental core of the model.  I think if we go with the module 
approach, the XSS protection needs to be required, and any additional 
modules can be optionally implemented.  I propose that the default 
behavior for CSP (no optional modules implemented) is to block all 
inline scripts (opt-in still possible) and to use a white list for all 
sources of external script files.  The script-src directive under the 
current model serves this function perfectly and doesn't need to be 
modified.  (We can discuss how plugin content and CSS, which can be 
vectors for script, should be governed by this core XSS module.)


As a straw man, the optional modules could be:
  * content loading (e.g. img-src, media-src, etc.)
  * framing (e.g. frame-src, frame-ancestors)
  * form action restriction
  * reporting (e.g. report-uri)
  * others?

I'm definitely not opposed to splitting apart the spec into modules, 
especially if it helps other browser implementers move forward with CSP. 
 I REALLY think, though, that the XSS protections need to be part of 
the base module.


Thoughts?

-Brandon


On 10/22/2009 09:37 AM, Adam Barth wrote:

On Thu, Oct 22, 2009 at 8:58 AM, Mike Ter Louw mter...@uic.edu wrote:

I've added a CSRF straw-man:

https://wiki.mozilla.org/Security/CSP/CSRFModule

This page borrows liberally from XSSModule.  Comments are welcome!


Two comments:

1) The attacker goal is very syntactic.  It would be better to explain
what the attacker is trying to achieve instead of how we imagine the
attack taking place.

2) It seems like an attacker can easily circumvent this module by
submitting a form to attacker.com and then generating the forged
request (which will be sent with cookies because attacker.com doesn't
enable the anti-csrf directive).

Adam



Re: Comments on the Content Security Policy specification

2009-10-15 Thread Brandon Sterne
On 07/30/2009 07:06 AM, Gervase Markham wrote:
 On 29/07/09 23:23, Ian Hickson wrote:
   * Combine style-src and font-src
 
 That makes sense.

I agree.  @font-face has to come from CSS which is already subject to
style-src restrictions.  I don't think there are any practical attacks
we are preventing by allowing a site to say "style can come from foo
but not fonts".  I propose we combine the two directives and will do so
if there aren't objections.

Separately, there is another style-src related problem with the current
model [1]:

style-src restricts which sources are valid for externally linked
stylesheets, but all inline style is still allowed.  The current model
offers no real protection against style injected by an attacker.  If
anything, it provides a way for sites to prevent outbound requests
(CSRF) via injected <link rel="stylesheet"> tags.  But if this is the
only protection we are providing, we could easily have stylesheets be
restricted to the allow list.

I think we face a decision:
A) we continue to allow inline styles and make external stylesheet loads
be subject to the allow policy, or
B) we disallow inline style and create an opt-in mechanism similar to
the inline-script option [2]

IOW, we need to decide if webpage defacement via injected style is in
the threat model for CSP and, if so, then we need to do B.

Thoughts?

-Brandon

[1] https://wiki.mozilla.org/Security/CSP/Spec#style-src
[2] https://wiki.mozilla.org/Security/CSP/Spec#options


Re: Comments on the Content Security Policy specification

2009-08-10 Thread Brandon Sterne
On 8/10/09 10:27 AM, TO wrote:
 I'd like to ask again to
 see some real-world policy examples.  I suggested CNN last time, but
 if something like Twitter would be an easier place to start, maybe we
 could see that one?  Or see the example for mozilla.org, maybe?  Or
 even just some toy problems to start, working up to real-world stuff
 later.

Working examples will be forthcoming as soon as we have Firefox builds
available which contain CSP.  Absent the working builds, do you think
it's valuable for people to compare page source for an existing popular
site and a CSP-converted version?

 I'm asking for a reason: I think the process of trying to determine
 good policy for some real sites will give a lot of insight into where
 CSP may be too complex, or equally, where it's unable to be
 sufficiently precise.  And it provides a bit of a usability test:
 remember that initially, many people wanting to use CSP will be
 applying it to existing sites as opposed to designing sites such that
 they work well with CSP.
 
 People will want examples eventually as part of the documentation for
 CSP because, as has been pointed out, they're more likely to cut and
 paste from these examples than to generate policy from scratch.  So
 let's see what sort of examples people will be cutting and pasting
 from!
 
  Terri
 
 PS - Full Disclosure: I'm one of the authors of a much simpler system
 with similar goals, called SOMA: http://www.ccsl.carleton.ca/software/soma/
 so obviously I'm a big believer in simpler policies.  We presented
 SOMA last year at ACM CCS, so I promise this isn't just another system
 from some random internet denizen -- This is peer-reviewed work from
 professional security researchers.

I read through your ACM CCS slides and the project whitepaper and SOMA
doesn't appear to address the XSS vector of inline scripts in any way.
Have I overlooked some major aspect of SOMA, or does the model only
provide controls for remotely-included content?

-Brandon


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Brandon Sterne
On 7/16/09 8:17 PM, Ian Hickson wrote:
 On Thu, 16 Jul 2009, Daniel Veditz wrote:
 Ian Hickson wrote:
 * The more complicated something is, the more mistakes people will 
 make.
 We encourage people to use the simplest policy possible. The additional 
 options are there for the edge cases.
 
 It doesn't matter what we encourage. Most authors are going to be using 
 this through copy-and-paste from tutorials that were written by people who 
 made up anything they didn't work out from trial and error themselves.

Dan's point is absolutely true.  The majority of sites will be able to
benefit from simple, minimal policies.  If a site hosts all its own
content then a policy of X-Content-Security-Policy: allow self will
suffice and will provide all the XSS protection out of the box.  I tend
to think this will be the common example that gets cut-and-pasted the
majority of the time.  Only more sophisticated sites will need to delve
into the other features of CSP.

Content Security Policy has admittedly grown more complex since its
earliest design, but only out of necessity.  As we talked through the
model we realized that a certain amount of complexity is in fact
necessary to support various use cases which might not be common on the
Web, but need to be supported.

 I believe that if one were to take a typical Web developer, show him 
 this:

X-Content-Security-Policy: allow self; img-src *;
   object-src media1.com media2.com;
   script-src trustedscripts.example.com

 ...and ask him "does this enable or disable data: URLs in embed?" or 
 "would an onclick='' handler work with this policy?" or "are framesets 
 enabled or disabled by this set of directives?", the odds of them 
 getting the answers right are about 50:50.
 Sure, if you confuse them first by asking about disabling. 
 _Everything_ is disabled; the default policy is "allow none". If you ask 
 "What does this policy enable?" the answers are easier.
 
 I was trying to make the questions neutral (enable or disable). The 
 authors, though, aren't going to actually ask these questions explicitly, 
 they'll just subconsciously form decisions about what the answers are 
 without really knowing that's what they're doing.

I don't think it makes sense for sites to work backwards from a complex
policy example as the best way to understand CSP.  I imagine sites
starting with the simplest policy, e.g. allow self, and then
progressively adding policy as required to let the site function
properly.  This will result in more-or-less minimal policies being
developed, which is obviously best from a security perspective.

 data URLs? nope, not mentioned
 inline handlers? nope, not mentioned
 
 How is an author supposed to know that anything not mentioned won't work?
 
 And is that really true?
 
X-Content-Security-Policy: allow *; img-src self;
 
 Are cross-origin scripts enabled? They're not mentioned, so the answer 
 must be no, right?
 
 This isn't intended to be a gotcha question. My point is just that CSP 
 is too complicated, too powerful, to be understood by many authors on the 
 Web, and that because this is a security technology, this will directly 
 lead to security bugs on sites (and worse, on sites that think they are 
 safe because they are using a Security Policy).

I don't think your example is proof at all that CSP is too complex.  If
I were writing that policy, my spidey senses would start tingling as
soon as I wrote "allow *".  I would expect everything to be in-bounds at
that point.  This is a whitelist mechanism after all.

X-Content-Security-Policy: allow https://self:443
 Using self for anything other than a keyword is a botch and I will 
 continue to argue against it. If you mean myhost at some other scheme 
 then it's not too much to ask you to spell it out. I kind of liked 
 Gerv's suggestion to syntactically distinguish keywords from host names, 
 too.
 
 The examples I gave in the previous e-mail were all directly from the 
 spec itself.

I also agree that this example is awkward.  In fact, the scheme and port
are inherited from the protected document if they are not specified in
the policy, so this policy would only make sense if it were a non-https
page which wanted to load all its resources over https.

I don't feel strongly about keeping that feature.  Perhaps, as Dan says,
we should allow 'self' to be used only on its own, not in conjunction
with a scheme or port.

 ...I don't think a random Web developer would be able to correctly 
 guess whether or not inline scripts on the page would work, or whether 
 Google Analytics would be disabled or not.
 Are inline scripts mentioned in that policy? Is Google Analytics? No, so 
 they are disabled.
 
 _I_ know the answer. I read the spec. My point is that it isn't intuitive 
 and that authors _will_ guess wrong.

Sorry, but I think this is also weak evidence for too much complexity.
This is a whitelist technology so if a source isn't whitelisted, it
won't be 

Re: Content Security Policy - final call for comments

2009-06-30 Thread Brandon Sterne
(copying the dev-security newsgroup)

Hi Ignaz,

Thanks for the feedback.  The spoofed security indicators from an
injected CSP meta tag is a fair point and one I haven't thought of
previously.  I'm not sure if browsers will implement such visual
indicators for CSP because it may confuse users.  This is still a valid
point, though, and we've struggled with the idea of meta tag policy
from the beginning.  The idea is to enable sites which can't set headers
to use CSP, but the reward might not be worth the risk.  In fact, Sid,
one of the engineers implementing CSP has proposed removing this from
the design:
http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html

If there are no major objections to doing so, it looks like you'll get
your way :-)

Cheers,
Brandon


ignazb wrote:
 Hello,
 
 I just read some of the documentation about CSP and I must say it
 looks promising. However, I think there are some flaws in the spec.
 -) I think it is a bad idea to allow the use of a meta tag for CSP
 policy-declaration. If, for example, you decided to show a symbol in
 the browser that indicates that the site is CSP secured, it would not
 be possible to tell whether the CSP policy comes from the server via a
 HTTP header or from an attacker who just injected it (unless, of
 course, you display where the CSP policy came from). So if a user
 visits a site and sees it is CSP secured (although an attacker
 inserted the tag allowing the execution of scripts from his site) she
 could decide to turn on JavaScript although the site is inherently
 unsafe.
 -) There should probably also be a way to restrict the contents of
 meta tags in a website. If, for example, an attacker inserts a meta
 for a HTTP redirect, he could redirect users to his own website, even
 with CSP enabled.
 
 -- Ignaz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Content Security Policy discussion (link)

2009-06-29 Thread Brandon Sterne
Gervase Markham wrote:
 On 26/06/09 22:42, Bil Corry wrote:
 http://www.webappsec.org/lists/websecurity/archive/2009-06/msg00086.html
 
 The linked blogpost suggests using the page itself as an E4X document to
 bypass the restrictions. Dead clever :-) Should we say that CSP also
 requires the external JS files to be served with the right Content Type?
 (application/javascript)? That would reduce the possibility of the
 attacker using random content they've managed to create on the remote
 server as a script file.
 
 Gerv

That is clever.  Yes, I think you're right that we should enforce a
valid MIME type for the external script files.  We probably also want to
whitelist application/json for sites utilizing JSON feeds.
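
A minimal sketch of the kind of Content-Type check being proposed here
(hypothetical helper function, not code from the spec or from Firefox):

```python
# External script responses must carry a whitelisted MIME type before
# they are executed; application/json is allowed for sites using JSON
# feeds, per the discussion above.
ALLOWED_SCRIPT_TYPES = {"application/javascript", "application/json"}

def may_execute_external_script(content_type: str) -> bool:
    # Ignore parameters such as "; charset=utf-8" when comparing.
    mime = content_type.split(";", 1)[0].strip().lower()
    return mime in ALLOWED_SCRIPT_TYPES

print(may_execute_external_script("application/javascript; charset=utf-8"))  # True
print(may_execute_external_script("text/html"))  # False: not a script type
```

A response served as text/html would thus be refused even if an
attacker managed to reference it from a script tag.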

-Brandon


Re: XSRF via CSP policy-uri

2009-06-24 Thread Brandon Sterne
Serge van den Boom wrote:
 Hi,
 
 If I'm not mistaken, there is a hypothetical situation where CSP can be
 used to the benefit of an attacker. Consider the scenario where:
 * the website contains a stored header injection vulnerability,
 * the website contains a XSRF vulnerability, and
 * the web client supports CSP.

So the premise is that the site already has a CSRF vuln and a header
injection vuln, and Content Security Policy provides a new way for an
attacker to forge a request from the victim to the target site.

 To exploit a XSRF vulnerability, an attacker needs some way to direct
 the web client to the vulnerable URL. This usually requires a social
 engineering attack or a XSS vulnerability. A (stored) header injection
 vulnerability is generally not enough.
 
 However, by injecting an X-Content-Security-Policy header with the
 policy-uri set to the vulnerable URL, the web client can be tricked into
 visiting the vulnerable URL.

How did the attacker get the victim to visit the URL with the header
injection vuln in the first place?  If the attacker could get this far,
they could skip the CSP step altogether and have the victim go straight
to the CSRF URL.

Given the numerous ways to initiate a GET to a particular URL, I don't
believe CSP adds any significant new attack surface with the policy-uri
directive.  The attack scenario above also requires massive existing
vulnerabilities in the victim site, which Serge points out up front.

The report-uri, however, does add a small twist.  The report sent by the
browser to the report-uri is a POST.  I suppose this is a new way for an
attacker to direct a POST at a CSRF vuln.  However, the attacker will
have no control over the POST body, only the URL.  We can look into
removing cookies and auth headers from the report request (not the
report body) to address this risk if it seems valuable.
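
For reference, hypothetical policies using the two directives discussed
above (draft syntax; the exact forms were still in flux):

```
X-Content-Security-Policy: policy-uri http://example.com/csp-policy
X-Content-Security-Policy: allow self; report-uri http://example.com/csp-report
```

The browser fetches the policy-uri document with a GET, while violation
reports go to the report-uri as a POST whose body the attacker cannot
control.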

Regards,
Brandon



Re: Content Security Policy - final call for comments

2009-04-15 Thread Brandon Sterne
On 4/15/09 1:32 AM, Gervase Markham wrote:
 Why does the CSP technology get to advertise and version itself in this
 way when no other technology the browser supports does? If we allow CSP
 to send version information in every HTTP request, what other
 technologies are going to want it? I support video. I support
 HTML5. Etc. I think the slippery slope argument has validity here.

The support of video or HTML5 by a client does not have the same
security implications as the support of CSP.  If a client does not
support video and a site serves it to them, there is no risk to the
client, which can passively ignore the video content.  If a client
does not support CSP and a site serves them untrusted content, there is
a higher XSS risk to that client than to one which does support CSP.

 Why not start versioning when we reach version 2 (i.e. there are two
 versions to distinguish), if that ever happens?

Another benefit of the version string that we've discussed is the
ability for a client to signal that CSP is disabled presently (by
removing the string).  In those cases, a site may want to restrict which
content is served to that client.

-Brandon


Re: Content Security Policy - final call for comments

2009-04-10 Thread Brandon Sterne
On 4/10/09 7:06 AM, Gervase Markham wrote:
 If sites are relying on CSP for XSS protection, then perhaps they would
 want to serve only trusted content to non-CSP users.
 
 If you have a mechanism for making content trusted, why not use it all
 the time? You don't turn off your HTML sanitizer for CSP-supporting
 browsers.

I think the point is that sites won't have 100% confidence in their HTML
sanitizer.  The HTML scrubber might have bugs, which CSP provides
mitigation for.  This raises the confidence level to a point where sites
can be comfortable serving user-generated content, etc. because they
know there are policies limiting what that content can do.

 In reality, as CSP becomes more mature and well-understood, sites will
 rely on it for XSS mitigation.  It's inevitable that if we put a
 reliable product out there sites will rely upon it.
 
 But by design, it can't be entirely reliable, because it can't read the
 developer's mind. Or have you got the ESP module working properly now? :-)

Not reliable in the sense that we guarantee there will never be XSS on
your site.  A site can still write code with vulnerabilities even under
CSP.  By reliable, I meant that the behavior will be consistent and
that patterns of effective use for XSS mitigation will develop.

 We're somewhat averse to
 adding a request header that would only carry the version info, so
 that's why we're looking for an existing request header that can carry
 this info.
 
 I really don't think UA is the right choice. Microsoft are bloating UAs
 with .NET versions, and that's making people unhappy.

I'm not 100% thrilled with the idea either, mostly because parsing the
U-A string could be challenging for some sites.  But it does seem to be
the least bad idea I've heard.  We can certainly minimize U-A bloat by
making our subproduct something like CSP/1.  I'm certainly open to
other suggestions, though.
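
Under that proposal, a CSP-enabled client's requests might carry
something like the following (hypothetical; the exact token was
undecided):

```
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:1.9.2) Gecko/20090402 Firefox/3.6 CSP/1
```

A site could then check for the CSP/1 product token before deciding
whether to serve user-generated content to that client.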

-Brandon


Re: Content Security Policy - final call for comments

2009-04-08 Thread Brandon Sterne
On 4/8/09 12:07 PM, Gervase Markham wrote:
 On 07/04/09 18:02, Brandon Sterne wrote:
 1. Bugs may be present in the CSP design which require future
 compatibility breakage.  These obviously cannot be foreseen and, though
 we desire it, we can't guarantee forward compatibility.
 
 There are two sorts of possible breakage - syntax and functional. I
 can't see us needing to throw away the syntax and, if we did, we'd just
 define a new header. So no issues there. And functional breakage comes
 into your second category anyway.

Defining a new header seems like a non-starter to me.  We are going to
be hard-pressed to get one new header standardized, so throwing one away
seems very wasteful.

 3. We arguably want to have a pref for users to turn off CSP (for
 testing or otherwise).  It would be useful to have the version number
 available as a means to communicate to the site that, even though the
 client supports CSP by default, CSP has been disabled on this client.
 
 Why is that useful information?

If sites are relying on CSP for XSS protection, then perhaps they would
want to serve only trusted content to non-CSP users.

 I'm actually against making it easy for servers to detect if CSP is
 supported, because if we make it particularly easy, content authors will
 start relying on it as their only defence rather than using it as a
 backup. We don't need to check for XSS holes, we use CSP. That would
 be bad. Of course, we can't stop them putting together fragile
 User-Agent lists, but sites which do that are broken anyway, as the web
 design community has been saying for years.

In reality, as CSP becomes more mature and well-understood, sites will
rely on it for XSS mitigation.  It's inevitable that if we put a
reliable product out there sites will rely upon it.  CSP won't cause
input sanitization, etc. to be removed from Security Best Practices, but
it will be a standard part of the browser security model, I imagine.

 I looked at each of the HTTP Header Field Definitions and my preference
 for communicating the CSP version is to add a product token [1] to the
 User-Agent [2] string.  This would add only a few bytes to the U-A and
 it saves us the trouble of having to go through IETF processes of
 creating a new request header.
 
 I'd much rather have a \d+; at the start of the header. Missing
 implies version 1.

But our header is only sent as a response header, so would not be useful
for sending version info with client requests.  We're somewhat averse to
adding a request header that would only carry the version info, so
that's why we're looking for an existing request header that can carry
this info.

-Brandon



Re: Content Security Policy - final call for comments

2009-04-07 Thread Brandon Sterne
On 4/7/09 4:25 AM, Gervase Markham wrote:
 What's the story on inline style and style=? At the moment the
 definition of style-src says they are subject to it, but there's no
 valid value for in this document, and in the script case, all inline
 script is disabled.

As you mentioned, the style-src section indicates ...as well as inline
style elements and style attributes of HTML elements.  We are
basically treating CSS in the same manner as JavaScript.

 Have we decided that there's a risk with all inline CSS style, or can we
 define and enforce a large safe subset of the language? Making people
 move their JS to external files is one thing, making them move all the
 style as well is yet another.

Since style is a vector for JavaScript, via XBL, it needs to be subject
to the same restrictions.

-Brandon


Re: Content Security Policy - final call for comments

2009-04-07 Thread Brandon Sterne
On 4/6/09 11:36 PM, Daniel Veditz wrote:
 allow is not mandatory, but if missing it's assumed to be allow
 none. If you explicitly specify the whitelisted hosts for each type of
 load you might not need or want a global fallback which could only be
 used to sneak through types you hadn't thought about. Future browser
 features, for instance.

Not according to our proposed spec:
https://wiki.mozilla.org/Security/CSP/Spec#Directives
http://people.mozilla.org/~bsterne/content-security-policy/details.html#allow

See comments from me and Sid from yesterday explaining why allow is
required.

I somewhat agree with the spirit of Dan's comment.  If allow is not
specified, then the _effect_ is to allow none, because the policy is
invalid and CSP will fail closed.  However, strictly speaking, we don't
assume allow none if it isn't specified.  We treat it as an invalid
policy, log an error, and don't load any of the content types.

By falling back to allow none when an invalid policy is sent, websites
will know right away that their pages are broken, because no content
other than textual elements will load.  This is a more secure option
than failing open and having websites potentially believe their users
are protected.
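
The fail-closed rule described above can be sketched as follows
(hypothetical helper, not implementation code from Firefox; the parsing
is deliberately simplified):

```python
def effective_policy(header_value: str) -> dict:
    """Parse a draft-CSP policy string, failing closed on invalid input."""
    directives = {}
    for clause in header_value.split(";"):
        clause = clause.strip()
        if not clause:
            continue
        name, _, sources = clause.partition(" ")
        directives[name] = sources.split()
    if "allow" not in directives:
        # Invalid policy: "allow" is required.  A real implementation
        # would log an error; the effect is "allow none", so nothing
        # but textual content loads.
        return {"allow": ["none"]}
    return directives

print(effective_policy("img-src *.example.com"))  # {'allow': ['none']}
print(effective_policy("allow self; img-src *"))
```

A missing allow directive thus breaks the page loudly rather than
silently leaving content unrestricted.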

-Brandon


Re: Content Security Policy - final call for comments

2009-04-07 Thread Brandon Sterne
On 4/7/09 4:07 AM, Gervase Markham wrote:
 I much prefer forwardly-compatible designs to version numbers. I think
 the current design is forwardly-compatible, as long as we maintain a
 well-signposted public page listing which category all sorts of request
 fall into, and add new request types well before they get implemented by
 anyone.
 
 For example, if a 3dvideo tag, for which you needed red-blue glasses,
 made it into a draft HTML5 spec, we would decide and say loudly that
 this was included in media-src well before anyone actually implemented
 it.
 
 Can you suggest a scenario in which version numbers would help?

I think the case for including a version number goes something like this
(and strong advocates, please chime in if I miss something):

1. Bugs may be present in the CSP design which require future
compatibility breakage.  These obviously cannot be foreseen and, though
we desire it, we can't guarantee forward compatibility.

2. New types of content (per your example) or new web APIs may be added
in the future which don't shoehorn nicely into one of our current policy
buckets.  If we have to add another policy directive in the future, it
will violate the policy syntax of older versions, which will cause them
to fail closed (according to the current design).

3. We arguably want to have a pref for users to turn off CSP (for
testing or otherwise).  It would be useful to have the version number
available as a means to communicate to the site that, even though the
client supports CSP by default, CSP has been disabled on this client.

I looked at each of the HTTP Header Field Definitions and my preference
for communicating the CSP version is to add a product token [1] to the
User-Agent [2] string.  This would add only a few bytes to the U-A and
it saves us the trouble of having to go through IETF processes of
creating a new request header.

Thoughts?

-Brandon

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.8
[2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.43




Re: Content Security Policy - final call for comments

2009-04-07 Thread Brandon Sterne
On 4/7/09 9:08 AM, Brandon Sterne wrote:
 Have we decided that there's a risk with all inline CSS style, or can we
 define and enforce a large safe subset of the language? Making people
 move their JS to external files is one thing, making them move all the
 style as well is yet another.
 
 Since style is a vector for JavaScript, via XBL, it needs to be subject
 to the same restrictions.

Actually, my reasoning is wrong here.

Style is no longer a vector for script under CSP because we added the
restriction that XBL bindings must come from chrome: or resource: URIs
for precisely this reason.

The other reason to make inline CSS subject to the style-src directive
(which I didn't state before because it didn't seem as strong a point)
is increased consistency in the model.  It seems inconsistent to offer
controls on where style can come from if the restriction can be bypassed
by injecting CSS directly into the document.  Granted, injected CSS
poses a much, much lower risk than injected script, but there is still
the issue of page defacement, etc.

I don't think the no-inline-style requirement is too punitive, though,
as sites can still use normal CSS selectors and apply their styles from
external, white-listed stylesheets.
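
For example, instead of an inline style attribute, a page under CSP
would use a class plus an external stylesheet from a whitelisted
style-src host (illustrative markup; hostnames are hypothetical):

```
<!-- Before (blocked under CSP): -->
<p style="color: red">Error</p>

<!-- After: same effect via an external, whitelisted stylesheet -->
<link rel="stylesheet" href="http://static.example.com/site.css">
<p class="error">Error</p>
```

with a rule such as .error { color: red; } defined in site.css.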

Sorry for the confusion.

-Brandon


Re: Content Security Policy - final call for comments

2009-04-06 Thread Brandon Sterne
Hi, Gerv.  Thanks a lot for your comments.  I'll address the comments
that weren't already covered by Johnathan or Sid, both of whom I agree
with.

On Apr 6, 3:56 am, Gervase Markham g...@mozilla.org wrote:
 Are we expecting to see some or all of this in Firefox 3.5, or Firefox-next?

Firefox-next.

 - but a declared (unexpanded) policy always has the allow directive.
 I think you need to make it more clear that allow is mandatory. But
 what was the logic behind making it so? Why not assume allow *, which
 is what browsers do in the absence of CSP anyway?

Sid did address this one, but I want to be clear in the rationale.
Once we see the Content Security Policy header (or meta tag), we want
to force sites to be explicit about what they are allowing.  Yes,
allow * is the default browser behavior without CSP presently, but
we want to avoid cases where sites assume the default behavior of CSP
is more restrictive than it actually is.  I could envision, for
example, a site presuming that allow none or allow self was the
default, and that additional policy could be specified from there.  If
a site really wants to allow *, then we want them to explicitly
state that.

 And the other document
 http://people.mozilla.org/~bsterne/content-security-policy/details.html:

 - policy-uri documents must be served with the MIME type
 text/content-security-policy to be valid This probably needs an x-
 until we've registered it, which we should do before deployment. It's
 not a complex process, I hear.

That sounds fair.  I'll update the document with that change.

 - Hostname, including an optional leading wildcard, e.g. *.mozilla.org
 Does that include foo.bar.baz.mozilla.org? If so, we should say so
 explicitly (in both docs).

That's true too.  I'll make the language more clear.

Cheers,
Brandon


Content Security Policy - final call for comments

2009-04-02 Thread Brandon Sterne
Hello all,

We have been working hard lately to finish documenting the Content
Security Policy proposal, which we plan to start implementing very
soon.  For those of you who have followed the progression of CSP, you
have seen the model grow quite a bit in complexity.  As one thinks
through the CSP model, it becomes clear that a certain amount of
complexity is in fact necessary for the model to be useful.  I have
done my best to describe the model and provide justification for the
various restrictions here:
http://people.mozilla.org/~bsterne/content-security-policy/details.html

We now have a specification document to work from (thanks, Sid!) and
it and other supporting docs can be found on the Mozilla Wiki:
https://wiki.mozilla.org/Security/CSP/Spec

If you have feedback that you would like to share regarding Content
Security Policy, please do so ASAP as the window for making changes to
the model will soon be closing.

Cheers,
Brandon