[putting on my IMHO hat]
The threat-model centric approach won't work in the long run, for a few
reasons:
a) Threat models change over time. Nobody was even talking about
clickjacking until a few years ago, so had we designed this previously,
we would have put frame controls under some other category (maybe
content loading).
b) APIs change over time, so they can drift between threat
groupings. Frames are now a potential XSS concern due to postMessage,
so now we're looking at framing being potentially controlled by
several different "modules": maybe clickjacking, CSRF, content
importing, and XSS.
CSS is content importing... oh, but IE allows CSS "expressions", so it's
an XSS vector too.
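To make that concrete: in older IE (through version 7), a stylesheet
rule roughly like the following executes script, so a "mere" CSS file is
effectively a script vector. Syntax here is an illustrative sketch, not
an exact recipe:

    div { width: expression(alert(document.cookie)); }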
Threats just don't map to APIs in a remotely neat one-to-many
relationship, and threat models are also too volatile. Which isn't to
say that we haven't been closely considering specific threat models
when designing CSP, because we have been. But we intentionally did
not build the actual directive names around the threats du jour.
If we are concerned about communication and documentation, then we
should focus on that problem. Designing a security model primarily
around its digestive properties is the wrong goal, IMHO. We have lots
of time to figure out the best ways of communicating, but we'll be
stuck with whatever implementation we come up with initially for a
long time. That's the nature of policy-centric mechanisms: the
mechanism should be a generalized method for enforcing policies, and
then the effort goes into developing sets of policies that achieve
particular goals and, with sufficient iteration and testing, go on to
become common deployment patterns. Trying to bake those in from the
get-go seems to assume perfect knowledge of current and future threats.
So a good exercise might be, for a given threat (CSRF, clickjacking,
whatever), to document the specific threat model, then see whether it
could be addressed via the current proposal
(https://wiki.mozilla.org/Security/CSP/Spec). Maybe that would help us
develop the documentation and specifications necessary to better
explain the value of the model.
Of course, this is all just MHO. :)
Lucas.
P.S. Regarding making XSS a mandatory versus an opt-out module, those
might be related things. I think the question is really: when a given
user agent or website states that it has deployed CSP, does that
actually mean anything in a concrete sense?
On Oct 22, 2009, at 2:35 PM, Adam Barth wrote:
See inline.
On Thu, Oct 22, 2009 at 2:22 PM, Brandon Sterne
<bste...@mozilla.com> wrote:
> I'd like to take a quick step back before we proceed further with the
> modularization discussion. I think it is fine to split CSP into
> modules, but with the following caveats:
>
> 1. Splitting the modules based upon different threat models doesn't
> seem to be the right approach. There are many areas where the threats
> we want to mitigate overlap in terms of browser functionality. A
> better approach, IMHO, is to create the modules based upon browser
> capabilities. With those capability building blocks, sites can then
> construct policy sets to address any given threat model (including
> ones we haven't thought of yet).
It's unclear to me which organization is better. I'd be in favor of
picking one and giving it a try.
> 2. The original goal of CSP was to mitigate XSS attacks.
I agree that XSS mitigation is the most compelling use case for CSP.
> The scope of the proposal has grown substantially, which is fine, but
> I'm not at all comfortable with a product that does not require the
> XSS protections as the fundamental core of the model. I think if we
> go with the module approach, the XSS protection needs to be required,
> and any additional modules can be optionally implemented.
I'm not sure it matters that much whether we label the XSS mitigations
"recommended" or "required." I suspect every browser vendor that
implements CSP will implement them. If you'd prefer to label them
required, I'm fine with that.
> I propose that the default behavior for CSP (no optional modules
> implemented) is to block all inline scripts (opt-in still possible)
> and to use a whitelist for all sources of external script files.
This is a separable issue. I'm not sure whether it's better to opt-in
or opt-out of this behavior. Opting-in makes policy combination
easier to think about (the tokens just accumulate).
I'd prefer if sites had to opt-in to the block-eval behaviors because
I suspect complying with those directives will require substantial
changes to sites.
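For concreteness, under Brandon's proposed default a minimal policy
might look something like this (the directive syntax loosely follows
the current draft; the exact header name, hostnames, and keywords here
are illustrative, not final):

    X-Content-Security-Policy: script-src self https://cdn.example.com

i.e., inline script and eval-like APIs are blocked by default, and
external scripts may only load from the whitelisted sources.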
> The script-src directive under the current model serves this function
> perfectly and doesn't need to be modified. (We can discuss how plugin
> content and CSS, which can be vectors for script, should be governed
> by this core XSS module.)
That depends on whether we decide opt-in or opt-out is better for
controlling inline script and eval-like APIs.
> As a straw man, the optional modules could be:
>
> * content loading (e.g. img-src, media-src, etc.)
> * framing (e.g. frame-src, frame-ancestors)
> * form action restriction
> * reporting (e.g. report-uri)
> * others?
I'd put frame-src in with content loading, but otherwise this seems
fine.
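To sketch what that split might look like in practice (again, purely
illustrative syntax and hostnames), a site adopting the core XSS module
plus the content-loading and reporting modules, but neither framing nor
form-action restriction, might send:

    X-Content-Security-Policy: script-src self;
        img-src self static.example.com; media-src self;
        report-uri /csp-violation-report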
> I'm definitely not opposed to splitting apart the spec into modules,
> especially if it helps other browser implementers move forward with
> CSP. I REALLY think, though, that the XSS protections need to be part
> of the base module.
I don't think it matters that much whether the XSS mitigations are
part of the base module or whether they're in a separate
required/recommended module. I think the main issue here is making
the spec easy to read.
Adam
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security