Re: B2G Threats/Controls

2012-03-06 Thread Adam Barth
I won't be able to make the call, but I've left one comment inline:

On Tue, Mar 6, 2012 at 10:15 PM, ptheriault ptheria...@mozilla.com wrote:
 Chris,

 Below is a summary of threats and controls for further discussion. 
 Disclaimer: this is my understanding from various conversations, wiki pages, 
 bugs and IRC chats, so it's rough and probably varies from what's implemented (or 
 what the final goals are), but it's a starting point. Ultimately the aim is a 
 considered security model with security controls commensurate to the threats 
 posed, but first I want to open a discussion regarding what threats 
 have been considered so far in the project, and the current thoughts around 
 what controls are in place or planned. I know that a lot of the things below 
 have already been considered in the project prior to me coming on board so 
 the main aim for me is to run through the state of things, and start 
 capturing the B2G security model.

 Firstly I wanted to document some assumptions:

 - All B2G applications will be Web Apps - there is no such thing as a native 
 application. Some web apps may be treated differently, though, based on 
 security characteristics (e.g. shipped with the phone, loaded from a 
 trusted location, local access only, served over SSL only etc)
 - B2G will ship with a number of included Web Apps (Gaia) to handle all the 
 tasks of running the device. These could potentially be replaced by the user 
 with alternatives
 - Web APIs will provide Web Apps with the access they need to perform their 
 desired role. E.g. an app acting as a Dialer will need access to the Web 
 Telephony API in order to make telephone calls
 - Access to sensitive APIs will be controlled by permissions

 Threats Summary
 ===============
 In order to discuss the security measures built into B2G, we need to discuss 
 the threats posed. The following is a high-level list of threats for further 
 discussion:

 - Platform Vulnerabilities
        - Exploit gives control of content process
        - Attack against other processes (media server, rild etc)?
 - Malicious Web App
        - User is tricked into installing a malicious application
        - Legitimate app is compromised at the network layer
        - Compromised web app server for Web Apps hosted remotely
 - Vulnerable Web App
        - Web application security threats (XSS, SQLi, etc)

^^^ One way to address this threat is to require that B2G apps have a
Content-Security-Policy that meets some minimum bar.  Chrome has
started doing this with its extensions and packaged apps (see
http://blog.chromium.org/2012/02/more-secure-extensions-by-default.html).
 You might want to do something similar.
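For illustration, a minimum-bar check along these lines could be sketched as below. The directive names are real CSP, but the specific bar (no inline script, no eval, no wildcard script sources) is a hypothetical choice, not Chrome's exact policy:

```python
# Sketch: reject app CSPs that allow inline script, eval, or wildcard
# script sources.  The bar chosen here is illustrative only.

def parse_csp(header: str) -> dict[str, list[str]]:
    """Parse 'directive src1 src2; directive ...' into a dict."""
    policy = {}
    for directive in header.split(";"):
        parts = directive.split()
        if parts:
            policy[parts[0].lower()] = [s.lower() for s in parts[1:]]
    return policy

def meets_minimum_bar(header: str) -> bool:
    policy = parse_csp(header)
    # script-src falls back to default-src, as in CSP itself.
    script = policy.get("script-src", policy.get("default-src"))
    if script is None:
        return False  # no script restriction at all
    banned = {"'unsafe-inline'", "'unsafe-eval'", "*"}
    return not banned & set(script)

assert meets_minimum_bar("default-src 'self'; script-src 'self'")
assert not meets_minimum_bar("script-src 'self' 'unsafe-inline'")
```

Apps whose manifest CSP fails such a check would simply be rejected at install time.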

Adam


        - Framework weaknesses (same-origin bypass, privilege escalation, 
 abuse communication between apps?)
 - Lost device
        - User data compromised

 Controls Summary
 ================
 In response to the above threats, what controls do we want to consider? The 
 following controls have come up in various discussions. There are probably 
 other controls to be discussed/raised/captured as well.

 - Low-privileged processes and process sandboxing
        - Web Apps will run in a low-privileged process? How is this implemented?
        - Segregation between less trusted and more trusted Web Apps
        - What about other processes (media server, rild, etc.)?
 - Hardening of the underlying OS
        - I've not seen any mention of this, but it has come up in security 
 discussions.
 - Segregation between Web Apps
        - Separate by domain, other restrictions?
 - Restriction for Apps trusted with sensitive permissions
        - local-only sandbox, network-only sandbox etc
        - restrict cross-domain communication etc
        - load only over SSL
        - load only from trusted developer
        - load once, then cache. Drop permissions if app changes (or prevent 
 from changing without update workflow)
 - Restricting permissions and protection against privilege escalation
 - Security restrictions for special Web App cases
        - browser app (iframe browser,  other cases?)
        - permissions manager
        - others?
 - Data Protection
        - Encryption
        - Login

 Discussion is at 5pm (PST) tomorrow (the 7th), using the same phone conference 
 as the b2g meeting.

 Cheers,

 Paul
 ___
 dev-security mailing list
 dev-security@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-security


Re: [b2g] Permissions model thoughts

2012-03-05 Thread Adam Barth
On Mon, Mar 5, 2012 at 12:45 PM, Jim Straus jstr...@mozilla.com wrote:
 Hello -
  I definitely don't like the Android model.  We'll have to figure out
 exactly how to communicate permissions requests to users.  On the other
 hand, an appropriately vetted and signed app could be given permissions
 implicitly in a permissions manifest, so the user doesn't need to deal with
 it.  Also, some kind of heuristics may make it possible for the permissions
 manager to deal with things internally, again not bothering the user.  These
 are areas that need thought and experimentation.

There's been a bunch of research on the Android permission model in
academia, including a bunch of suggestions for how to do better.  If
you'd like, I'd be happy to connect you with the folks who've studied
this topic (off-list).

  We will definitely need some sort of identification for web apps and sites.
  Origin was the first thought, but if you have further suggestions, please
 post.  Maybe for network loaded apps/sites, permissions will need to be
 re-gotten from the user each time they are loaded (helps with, but doesn't
 completely cure hacked sites).  Maybe locally cached apps could keep their
 permissions until they are re-installed.

One thing that has worked well for packaged apps in Chrome is to use a
public key in the URL to identify local content.  For example:

b2g-or-whatever://ankgjoopnopeoeljehjkighfcfefalcg/path/inside/package.html

where ankgjoopnopeoeljehjkighfcfefalcg is a public key and
/path/inside/package.html is a path inside a zip archive self-signed
with ankgjoopnopeoeljehjkighfcfefalcg.

This model is decentralized and provides a solid, secure foundation.
It also plays well with the usual same-origin model for web security.
I'm happy to answer any questions you have about Chrome's experience
with this approach.
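A sketch of how such an identifier might be derived follows. This mirrors the spirit of Chrome's scheme (hash the public key and encode the result in the letters a-p so it forms a valid hostname), but the details here are illustrative rather than Chrome's exact algorithm:

```python
# Sketch of deriving a stable app identifier from a public key.
# The encoding choice (SHA-256, first 128 bits, letters a-p) is
# illustrative of the idea, not an authoritative specification.
import hashlib

def app_id(public_key_der: bytes) -> str:
    digest = hashlib.sha256(public_key_der).hexdigest()[:32]
    # Map hex digits 0-f onto letters a-p so the id is hostname-safe.
    return "".join(chr(ord("a") + int(c, 16)) for c in digest)

ident = app_id(b"example public key bytes")
print(f"b2g-or-whatever://{ident}/path/inside/package.html")
assert len(ident) == 32 and ident.isalpha()
```

Because the identifier is derived from the key, anyone holding the private key can publish updates, and no central registry is needed, which is what makes the model decentralized.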

Adam


 On 3/5/12 1:25 PM, Lucas Adamski wrote:

 I like this proposal at a high level, it provides for a lot of
 flexibility.  What I like about a permission model that can prompt at
 runtime is that it makes some permissions optional.  On Android, many free
 apps require geolocation purely for advertising targeting, requiring the
 user to trade their privacy for functionality.  Those same apps, on iPhone,
 run just fine if the user denies them geolocation privileges.  These
 decisions could be remembered for some finite amount of time (30 days), or
 indefinitely if you provide the user with the ability to manage them
 directly.

 The downside of prompting at runtime is that it does need to map to
 permissions an average user could hope to understand (hello, user research?).
 Asking for location, contacts, etc. seems like a reasonable question to ask
 the user.  Asking the user to change the network proxy settings or fiddle
 with your email settings, maybe less so.  The latter might be better bundled
 into a general "scary system access" category.

 The origin problem is still a tricky one.  Browsers overall still rely on
 same origin as the only meaningful security boundary (including work on
 iframe sandbox, CSP, etc.). I'm still skeptical though that alone is
 sufficient to authenticate apps that would have system access.  A web
 server is generally a much easier thing to compromise than a code signature.

 Adding dev-security for more brains.
   Lucas.



 On Mar 3, 2012, at 2:44 PM, Jim Straus wrote:

 Redirecting to the public list and to the webapps team, with some
 additions...

 Hello all -
  I've been thinking about what the permissions model of b2g might look
 like, so I'm putting down my ideas to solicit feedback and see if we can get
 this going.  A lot of the components and services we've been building are
 all waiting on a permissions design.

 Permissions in b2g are needed for access to services and data that the
 user might not want to allow an app to access.  Examples include the user's
 current location (geolocation), contacts, making phone calls, maybe even
 cellular data connections (if the user wants to control their costs), large
 storage areas, etc.  We'll have to decide the entire list as we go along.

 I envision two user interfaces being needed: the dialog/bar/whatever that asks
 the user for permission, and a permissions manager app where the user can
 revoke permissions, a priori grant a permission to all apps for a component
 (like always allow cellular data connections if the user has unlimited
 data), and to see what permissions exist and what apps have been granted
 permission. The permission request user interface would allow for the user
 to grant/deny permission once or permanently.  If permission is not granted
 permanently, the next time the service/component is invoked permission would
 be asked for again.  I would also like to allow at least the permissions
 manager to be replaced with a non-built-in app, if we feel we can
 maintain security.
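The grant-once versus grant-permanently behavior Jim describes could be modeled roughly as follows (class and method names are hypothetical, purely for illustration):

```python
# Rough model of the described permission store: a permanent grant
# persists; a one-time grant is consumed on use and re-prompts next time.
class PermissionManager:
    def __init__(self):
        self._permanent = {}   # (app, perm) -> True
        self._once = set()     # (app, perm) granted for a single use

    def grant(self, app, perm, permanent=False):
        if permanent:
            self._permanent[(app, perm)] = True
        else:
            self._once.add((app, perm))

    def revoke(self, app, perm):
        self._permanent.pop((app, perm), None)
        self._once.discard((app, perm))

    def check(self, app, perm) -> bool:
        """True if allowed; one-time grants are consumed here."""
        if self._permanent.get((app, perm)):
            return True
        if (app, perm) in self._once:
            self._once.discard((app, perm))
            return True
        return False  # caller should prompt the user again

pm = PermissionManager()
pm.grant("dialer", "telephony", permanent=True)
pm.grant("maps", "geolocation")  # one-time grant
assert pm.check("dialer", "telephony") and pm.check("dialer", "telephony")
assert pm.check("maps", "geolocation") and not pm.check("maps", "geolocation")
```

The permissions manager app would then be a UI over `revoke` and the underlying store.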

 The generally discussed model is that permissions are requested as a
 feature is used, on a feature by feature basis (unlike 

Re: measuring use of deprecated web features

2012-02-15 Thread Adam Barth
DOMMutationEvents :)

Adam


On Tue, Feb 14, 2012 at 2:34 PM, Jesse Ruderman jruder...@gmail.com wrote:
 What rarely-used web features are hurting security? What might we
 remove if we had data on prevalence?

 https://etherpad.mozilla.org/MeasuringBadThings


Re: CSP and object URLs

2011-07-26 Thread Adam Barth
On Tue, Jul 26, 2011 at 5:19 PM, Daniel Veditz dved...@mozilla.com wrote:
 On 7/22/11 7:18 PM, Eli Grey wrote:
 CSP needs a way to support object URLs, of which the scheme is
 implementation specific (e.g. moz-filedata:{GUID} in Firefox,
 blob:{origin}{GUID} in WebKit). How might this be accomplished?

 This is a better conversation for public-web-secur...@w3.org where
 we're working on standardizing CSP -- added with a CC though this
 conversation is likely to fork.

 Off the top of my head I think we should treat those as coming from
 'self' since the data is ultimately available to the page and under
 its control.

 If that doesn't work another option is to treat them similarly to
 data: urls: block them unless explicitly allowed and let them be
 whitelisted by scheme alone.

Please feel encouraged to test the behavior in WebKit, but I believe
we treat them as 'self' because they're treated as same-origin
everywhere else (e.g., also for XMLHttpRequest).
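In pseudocode terms, that matching behavior looks something like the sketch below (function names are hypothetical; real CSP source matching is considerably more involved):

```python
# Sketch: source matching that counts object URLs (blob:/moz-filedata:)
# as 'self', since their data is already under the page's control.
def matches_self(url: str, page_origin: str) -> bool:
    scheme = url.split(":", 1)[0].lower()
    if scheme in ("blob", "moz-filedata"):
        return True  # object URLs count as same-origin content
    return url.startswith(page_origin)

assert matches_self("blob:https://a.example/123-guid", "https://a.example")
assert not matches_self("https://b.example/x.js", "https://a.example")
```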

Adam


Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev [and WebSockets in FF6]

2011-06-08 Thread Adam Barth
On Wed, Jun 8, 2011 at 8:40 AM, Christopher Blizzard
blizz...@mozilla.com wrote:
 On 6/7/2011 5:52 PM, Adam Barth wrote:
 On Tue, Jun 7, 2011 at 5:43 PM, Brian Smithbsm...@mozilla.com  wrote:
 Adam Barth wrote:
 On 5/31/2011 8:24 AM, Brian Smith wrote:

 We have also discussed blocking https+ws:// content completely in
 our
 WebSockets implementation, so that all WebSockets on a HTTPS page
 must be
 wss://. That way, we could avoid making mixed content problems any
 worse.

 Do you have a bug on file for that yet?

 If you'd be willing to file a bug at bugs.webkit.org too (and CC me),
 I can help make sure WebKit and Firefox end up with the same behavior
 here.

 Bugzilla Bug 662692
 Chromium Issue 85271
 WebKit Issue 62253

 I wasn't sure which email address to use to CC you to the Chromium and
 WebKit bugs.

 Do we have consensus that this is something we want, both internally and
 externally?

It sounds like a good idea to me, but I'll need to talk with the folks
who work on WebSockets directly to make sure they're on board.

Adam


Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev [and WebSockets in FF6]

2011-06-07 Thread Adam Barth
On Tue, Jun 7, 2011 at 5:43 PM, Brian Smith bsm...@mozilla.com wrote:
 Adam Barth wrote:
  On 5/31/2011 8:24 AM, Brian Smith wrote:
 
  We have also discussed blocking https+ws:// content completely in
  our
  WebSockets implementation, so that all WebSockets on a HTTPS page
  must be
  wss://. That way, we could avoid making mixed content problems any
  worse.
 
  Do you have a bug on file for that yet?

 If you'd be willing to file a bug at bugs.webkit.org too (and CC me),
 I can help make sure WebKit and Firefox end up with the same behavior
 here.

 Bugzilla Bug 662692
 Chromium Issue 85271
 WebKit Issue 62253

 I wasn't sure which email address to use to CC you to the Chromium and WebKit 
 bugs.

Thanks!

Adam


Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev [and WebSockets in FF6]

2011-05-31 Thread Adam Barth
On Tue, May 31, 2011 at 10:25 AM, Christopher Blizzard
blizz...@mozilla.com wrote:
 On 5/31/2011 8:24 AM, Brian Smith wrote:

 We have also discussed blocking https+ws:// content completely in our
 WebSockets implementation, so that all WebSockets on a HTTPS page must be
 wss://. That way, we could avoid making mixed content problems any worse.

 Do you have a bug on file for that yet?

If you'd be willing to file a bug at bugs.webkit.org too (and CC me),
I can help make sure WebKit and Firefox end up with the same behavior
here.

Thanks,
Adam


Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev

2011-05-18 Thread Adam Barth
[-dev-tech-crypto]

On Wed, May 18, 2011 at 6:17 AM, Jean-Marc Desperrier jmd...@gmail.com wrote:
 Brian Smith wrote:
 See https://twitter.com/#!/scarybeasts/status/69138114794360832:
 Chrome 13 dev channel now blocks certain types of mixed content by
 default (script, CSS, plug-ins). Let me know of any significant
 breakages.

 See

 https://ie.microsoft.com/testdrive/browser/mixedcontent/assets/woodgrove.htm
  IE9: http://tinypic.com/view.php?pic=11qlnhys=7
 Chrome: http://tinypic.com/view.php?pic=oa4v3ns=7

 IE9 blocks all mixed content by default, and allows the user to
 reload the page with the mixed content by pushing a button on its
 doorhanger (at the bottom of the window in IE).

 Notice that Chrome shows the scary crossed-out HTTPS in the address
 bar.

We tried aggressively blocking active mixed content by default in the
Chrome Dev channel, but too much broke.  We're going to unblock it
again and try to find some middle road.

Here's the bug tracking this issue:
http://code.google.com/p/chromium/issues/detail?id=81637
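The blocking being discussed can be sketched roughly as follows (the resource-type names are illustrative, not Chrome's internal classification):

```python
# Sketch: classify a subresource load as active mixed content (script,
# CSS, plug-ins) on an HTTPS page -- the cases Chrome 13 dev blocked.
ACTIVE_TYPES = {"script", "stylesheet", "plugin"}

def is_blocked_mixed_content(page_scheme: str,
                             resource_scheme: str,
                             resource_type: str) -> bool:
    return (page_scheme == "https"
            and resource_scheme == "http"
            and resource_type in ACTIVE_TYPES)

assert is_blocked_mixed_content("https", "http", "script")
assert not is_blocked_mixed_content("https", "http", "image")  # passive: warn only
```

Passive content such as images is merely flagged in the UI rather than blocked.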

Adam


Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev

2011-05-18 Thread Adam Barth
On Wed, May 18, 2011 at 12:04 PM, Eddy Nigg eddy_n...@startcom.org wrote:
 On 05/18/2011 09:45 PM, From Adam Barth:
 We tried aggressively blocking active mixed content by default in the
 Chrome Dev channel, but too much broke.  We're going to unblock it
 again and try to find some middle road.

 That's a shame and very regrettable. Together with IE9 you could have made a
 difference in order to pull over other browser vendors to do the same, which
 in turn would have put the pressure elsewhere (those that provide stuff to
 embed with their sites).

Indeed, which is why we experimented with a hard block.  Our plan is
to move in smaller steps, hopefully in coordination with other browser
vendors.

 IMO, mixed content breaks the security concept entirely.

Not entirely, but often.

Adam


Re: NSS/PSM improvements - short term action plan

2011-04-09 Thread Adam Barth
On Fri, Apr 8, 2011 at 4:02 PM, Jean-Marc Desperrier jmd...@free.fr wrote:
 On 09/04/2011 00:52, Adam Barth wrote:

 - CA locking functionality in HSTS or via CAA

  There's significant interest in this feature from chrome-security
 as well.

 What about EV locking ?

 How does a site change CAs after it has started enabling CA locking?
 Would you allow locking to multiple CAs, so that a site could add the new CA
 while still using the old cert, and then hope for the best
 after making the switch?

All good questions.  We're still in the experimental phase, so we
haven't worked out all the details yet.

Rather than CA pinning specifically, we've been experimenting with
certificate pinning, with the approach that you can pin any
certificate in the chain.  For example, you can pin your leaf
certificate, or you can pin your CA's certificate.  The only requirement
is that future certificate chains MUST include that certificate.  That
effectively gives you EV pinning, CA pinning, and leaf-certificate
pinning in one mechanism.
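In rough terms, the check described here is the sketch below. The fingerprint choice (SHA-256 of the DER encoding) and the function names are illustrative assumptions:

```python
# Sketch: a pin is satisfied if ANY certificate in the presented chain
# matches a pinned fingerprint -- leaf, intermediate, or root alike.
import hashlib

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def pin_satisfied(chain: list[bytes], pins: set[str]) -> bool:
    return any(fingerprint(cert) in pins for cert in chain)

leaf, intermediate, root = b"leaf", b"intermediate", b"root"
pins = {fingerprint(intermediate)}  # site pinned its CA's certificate
assert pin_satisfied([leaf, intermediate, root], pins)
assert not pin_satisfied([b"other-leaf", b"other-ca"], pins)
```

Pinning the leaf, an intermediate, or the root then all fall out of the same test.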

In addition to thinking about orderly transitions to new certificates
(as you mention), there's also the case of disorderly transitions.
For example, what happens if the site's private key gets compromised
and it wishes to move to a new certificate earlier than planned?

Adam


Re: NSS/PSM improvements - short term action plan

2011-04-09 Thread Adam Barth
On Sat, Apr 9, 2011 at 10:44 AM, Eddy Nigg eddy_n...@startcom.org wrote:
 On 04/09/2011 01:52 AM, From Adam Barth:
  There's significant interest in this feature from chrome-security
 as well.

 There is, however, very limited benefit: it would only prevent a particular
 type of failure, if at all. The enforcement would have to be baked
 into the client software, and adherence by CAs would have to be required by
 policy. I don't see that happening at the moment, especially because the
 benefit is fairly small for the hassle.

I'm reasonably sure that it will happen in some form or another given
the amount of interest on the part of browser implementors and web
site operators.

There are no dependencies on the CAs, as far as I understand.  Can you
explain what you think the CAs will need to adhere to?

Adam


Re: NSS/PSM improvements - short term action plan

2011-04-08 Thread Adam Barth
On Fri, Apr 8, 2011 at 3:49 PM, Sid Stamm s...@mozilla.com wrote:
 After the few meetings and a couple of hours of discussion in the last
 two days, we've made a short list of desired upgrades for NSS/PSM for
 the near term.  This message should hopefully serve as a summary of the
 technical bits that -- based on the discussions -- seemed most urgent.

 Here they are, prioritized into three buckets:
 - A (things we want soonest)
 - B (things we want fairly soon)
 - C (things we want, but after A and B are done)

 Bucket A:
 - Move to libpkix for all cert validation (bug 479393)
 - Complete active distrust in NSS (bug 470994)
 - Implement callbacks to augment validation checking (bug 644640)
 - Implement subscription-based blocklisting of certs via update ping
 (remove need to ship patch)

 Bucket B:
 - Implement OCSP Stapling (bug 360420)
 - Implement date-based revocation (distrust certs after specific date)
 - CA locking functionality in HSTS or via CAA

 There's significant interest in this feature from chrome-security
as well.  We have a prototype implementation of the backend in Chrome
that you can drive through some UI, but we don't have any syntax for
turning it on from the network yet.  Let me know if you'd like to
discuss further.

Adam


 Bucket C:
 - Disable cert overrides for *very old* expired certs (might not be in
 any CRLs anymore)

 Cheers,
 Sid


Re: CSP - Cookie leakage via report-uri

2010-06-08 Thread Adam Barth
Another option is to store the report-uri information in the
well-known metadata store.  Of course, that assumes that the attacker
doesn't control the well-known metadata store...

Adam


On Tue, Jun 8, 2010 at 3:10 PM, Brandon Sterne bste...@mozilla.com wrote:
 Hello all,

 I want to bring up an issue that was raised regarding the proposed
 report-uri feature of Content Security Policy.

 If you assume the following two flaws are present on a legacy server:
 1. Attacker controls the value of the CSP header
 2. A request-processing script on the server which doesn't validate
  POST requests it receives but simply places the POST data in a
  location accessible to the attacker

 Then CSP introduces a new attack surface that can be used to steal
 cookies or other authentication headers.  #2 above seems rather
 contrived at first blush, but think of a Pastebin-type application that
 blindly processes POSTs into publicly available content.  (Pastebin
 itself is not vulnerable to this attack, since it validates the format
 of the POSTs).

 (Note that #1 doesn't require arbitrary HTTP response header injection
 or HTTP response splitting.  The attacker must control only the value of
 the policy header.)

 One way we can address this is to suppress the value of any auth headers
 that were present in the violation-generating request from the report
 POST body.  This of course reduces the utility of the reports for server
 debugging, but does provide a guarantee that Cookie and related
 information won't ever be leaked to attackers through the reports.

 Does this sound like the right approach?

 Cheers,
 Brandon
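A minimal sketch of the suppression Brandon proposes (the header list and report shape here are illustrative, not the actual CSP report format):

```python
# Sketch: strip authentication-bearing headers from a CSP violation
# report before it is POSTed to the report-uri.
SENSITIVE = {"cookie", "authorization", "proxy-authorization"}

def sanitize_report(report: dict) -> dict:
    headers = report.get("request-headers", {})
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in SENSITIVE}
    return {**report, "request-headers": cleaned}

report = {"blocked-uri": "http://evil.example/x.js",
          "request-headers": {"Cookie": "SID=secret",
                              "Host": "victim.example"}}
clean = sanitize_report(report)
assert "Cookie" not in clean["request-headers"]
assert clean["request-headers"]["Host"] == "victim.example"
```

Debugging-relevant headers survive; only credential-bearing ones are dropped.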



Re: Paper: Weaning the Web off of Session Cookies

2010-01-31 Thread Adam Barth
On Sun, Jan 31, 2010 at 4:50 PM, Chris Hills c...@chaz6.com wrote:
 On 31/01/2010 18:12, Timothy D. Morgan wrote:
 That's handy, but doesn't that mean the website you're accessing will
 still use cookies once you're authenticated?

 Yes it does :/ But I think it's easier to get sites to implement OpenID
 than it is to support HTTP Auth with certificates. Do you think it is
 possible to use OpenID without cookies?

I suspect it's difficult to use OpenID without cookies in today's
browsers.  The challenge is you need some way to bind the session to
the user's browser.  It might be interesting to think about ways that
browsers could make OpenID (or an OpenID-like federated identity
system) more awesome.

Tim, I need to read your paper in more detail, but could you summarize
what problem you're trying to solve by avoiding cookies?

Adam


Re: Safety of extensions (DefCon presentation)

2009-11-29 Thread Adam Barth
On Sat, Nov 28, 2009 at 11:51 PM, Kálmán „KAMI” Szalai
kami...@gmail.com wrote:
 Adam Barth írta:
 It's important to separate two concerns:

 1) Malicious extensions
 2) Honest extensions that have vulnerabilities (benign-but-buggy)

 I agree that the malicious extension problem is somewhat intractable
 because of the above concerns.  However, that news article is
 complaining about vulnerabilities in honest extensions.

 And how can we avoid the malicious extensions problem?

I'm not sure.  That's a hard problem, but we can still make progress
on the benign-but-buggy case.

Adam


Re: Safety of extensions (DefCon presentation)

2009-11-29 Thread Adam Barth
On Sun, Nov 29, 2009 at 10:19 AM, chris hofmann chofm...@meer.net wrote:
 There is some early thinking about the Jetpack security model at

 https://wiki.mozilla.org/Labs/Jetpack/JEP/29#Jetpack_Security_Model

This looks like a great start.  A few things you might consider:

1) If the Jetpack is not signed with a CA-verified certificate, you
might consider having the developer self-sign the Jetpack.  That could
help with the issues raised in
https://wiki.mozilla.org/Labs/Jetpack/JEP/29#Code_Updating for
unsigned Jetpacks.  (You can check that the update is self-signed with
the same key.)

2) The document doesn't explain how Jetpacks interact with web content
(e.g., web pages).  Many of the vulnerabilities in existing extensions
are due to unsafe interaction with malicious web pages.  You might
consider if there is a way to structure the API for interacting with
web pages to make that easier to do securely.  For example, you could
isolate the content-touching parts of the Jetpack behind a JSON
message passing interface, like you've done with third-party
libraries.
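A rough sketch of the isolation pattern described in point 2: serializing across the boundary so only data, never live object references, crosses it (all names here are hypothetical):

```python
# Sketch: content-touching code behind a JSON message-passing boundary,
# so only serializable data crosses between page and Jetpack.
import json

def send_to_content(handler, message: dict) -> dict:
    # Round-trip through JSON both ways; anything non-serializable
    # raises here instead of leaking a live object reference.
    reply = handler(json.loads(json.dumps(message)))
    return json.loads(json.dumps(reply))

def content_side(msg):
    # Stand-in for code running against the web page's DOM.
    if msg.get("op") == "get-title":
        return {"title": "Example Page"}
    return {}

assert send_to_content(content_side, {"op": "get-title"}) == \
    {"title": "Example Page"}
```

Because DOM nodes and functions are not JSON-serializable, a malicious page cannot hand the Jetpack a booby-trapped object through this channel.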

3) One of the things we found in our study (which Adrienne has made
public at http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-139.pdf)
is that many extensions use nsIFile to store extension-local
persistent state.  You might consider providing an alternative API
(e.g., something like localStorage) that lets Jetpacks store
persistent state without the authority to read and write arbitrary
files.

Adam


Re: Safety of extensions (DefCon presentation)

2009-11-29 Thread Adam Barth
On Sun, Nov 29, 2009 at 6:34 PM, Devdatta dev.akh...@gmail.com wrote:
 3) One of the things we found in our study (which Adrienne has made
 public at 
 http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-139.pdf)
 is that many extensions use nsIFile to store extension-local
 persistent state.  You might consider providing an alternative API
 (e.g., something like localStorage) that lets Jetpacks store
 persistent state without the authority to read and write arbitrary
 files.

 The study says that extensions use nsIFile to 'store information that
 is too complex for the preferences system'. What is 'too complex' ? If
 a key value store (viz. the preferences system) isn't good enough,
 would localStorage work ? Do you have a list of extensions for whom
 localStorage would be good enough (but can't work with just the
 preferences system) ?

That's a good question for Adrienne, but my understanding is the prefs
system can only store simple types like integers and booleans.  Now
that JSON.stringify and JSON.parse are implemented in Firefox,
localStorage can store many interesting objects.  I think the HTML5
spec also has support for storing things like images in localStorage,
but I'm not sure that's implemented in Firefox yet.

It's possible that we could design a persistent storage API that's
better optimized for extensions, but I think it makes sense to re-use
interfaces from the web platform whenever possible.  For example, if jQuery
adds an abstraction for localStorage, Jetpacks can use that for free.
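A sketch of the kind of API being described: a localStorage-style store whose values round-trip through JSON, so structured objects survive without giving extensions arbitrary file access. This is an illustration, not the actual Jetpack API:

```python
# Sketch of a localStorage-style store for extensions: structured values
# survive because they are serialized with JSON on the way in and out.
import json

class ExtensionStorage:
    def __init__(self):
        self._backing = {}  # string -> string, like localStorage

    def set_item(self, key: str, value) -> None:
        self._backing[key] = json.dumps(value)

    def get_item(self, key: str, default=None):
        raw = self._backing.get(key)
        return default if raw is None else json.loads(raw)

store = ExtensionStorage()
store.set_item("prefs", {"theme": "dark", "tabs": [1, 2, 3]})
assert store.get_item("prefs")["tabs"] == [1, 2, 3]
```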

Adam


Re: Safety of extensions (DefCon presentation)

2009-11-26 Thread Adam Barth
Jetpack is an opportunity to rethink the extension security model.
Ideally, an extension platform would make it easier for developers to
write secure extensions.  I'm happy to discuss ideas with folks
off-list.

Adam


On Thu, Nov 26, 2009 at 10:01 AM, Ian G i...@iang.org wrote:
 On 26/11/2009 15:35, Gervase Markham wrote:

 On 25/11/09 18:47, Kálmán „KAMI” Szalai wrote:

 Today, one of the leading IT portals published an article about Firefox with
 this title: "Firefox is not safe because of its extensions."

 That's like saying Windows is not safe because of applications.


 Or, SSL has been breached because of phishing ;)

 It is true that to the technical mind that can unravel these things, these
 are different things, but to the general public these can often become the
 same one thing.  So when they blame the big brand, they might be wrong or
 inaccurate or just plain confused.  Or they might have been deceived, and
 now the deception is coming back to bite.

 But the problem still exists.  At a minimum, those protecting the big brand
 will need to think about how to distance their brand from the various
 not-so-clear things or utter slanders thrown at them.

 And those who are concerned about security will know what happens next:
  because each side now has a convenient excuse to blame someone else for the
 problem, nothing will be done, and slowly the brand will acquire a
 well-deserved reputation for being insecure.  Seen it all before...



 In thinking about extensions, one would think that providing a portal for
 friendly extensions and dealing with only signed or otherwise checked
 sources would be sufficient.  Is there a sense that these techniques aren't
 working?

 Or is the problem out in the wild wild west where users are just downloading
 any old shlock?


 Installing an extension is like installing an application on your
 machine - it's just as trusted as any other application.


 Right.  Having said that, how does one give the users the tools to figure
 that out?  Or is it the users' responsibility to figure it out by
 themselves?

 To some extent this is the same dilemma the banks find themselves in. They
 were forced to use the platform, against good advice, and now find the
 platform is biting them.  What to do?  They can't go back.  And there is no
 easy forward.



 iang


Strawman CSP counter proposal

2009-10-28 Thread Adam Barth
Instead of arguing abstractly about design, I've written up a
(mostly!) complete spec for an alternative CSP design:

https://wiki.mozilla.org/Security/CSP/Strawman

I've purposely gone overboard on the directives, but most of these
directives are based on real feature requests I've received from web
developers.  I don't actually think we should do all of them in the
first iteration.  I just wanted to give you a flavor of the kinds of
things you could do with this sort of mechanism.

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Mon, Oct 26, 2009 at 6:11 PM, Daniel Veditz dved...@mozilla.com wrote:
 They have already opted in by adding the CSP header. Once they've
 opted-in to our web-as-we-wish-it-were they have to opt-out of the
 restrictions that are too onerous for their site.

I understand the seductive power of secure-by-default here.  It's
important to understand what we're giving up in terms of complexity
and extensibility.

 We feel
 extraordinarily strongly that sites should have to explicitly say they
 want to run inline-script, like signing a waiver that you're going
 against medical advice. The only thing that is likely to deter us is
 releasing a test implementation and then crashing and burning while
 trying to implement a reasonable test site like AMO or MDC or the
 experiences of other web developers doing the same.

This statement basically forecloses further discussion because it does
not advance a technical argument that I can respond to.  In this
forum, you are the king and I am but a guest.

My technical argument is as follows.  I think that CSP would be better
off with a policy language where each directive was purely subtractive
because that design would have a number of simplifying effects:

1) Forward and backward compatibility.  As long as sites did not use
the features blocked by their CSP directives, their sites would
function correctly in partial / future implementations of CSP.

2) Modularity.  We would be free to group the directives into whatever
modules we liked because there would be no technical interdependence.

3) Trivial Combination.  Instead of the current elaborate algorithm
for combining policies, we could simply concatenate the directives.
An attacker who could inject a Content-Security-Policy header could
then only further reduce his/her privileges.

4) Syntactic Simplicity.  Instead of two combination operators (";" for
union and "," for intersection), we could simply use "," and match
standard HTTP header syntax.
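Points 3 and 4 can be sketched as a toy model (assumption: the directive names and the parsing here are illustrative only, not a real CSP implementation). Because every directive is purely subtractive, combining policies is plain concatenation, i.e. the union of their restriction sets:

```javascript
// Toy model of purely subtractive policy combination.  Each directive
// only removes privileges, so concatenating two header values behaves
// as a set union of restrictions.
function parsePolicy(header) {
  return new Set(header.split(",").map(d => d.trim()).filter(Boolean));
}

function combine(...headers) {
  // Concatenation == union: an injected header can only add
  // restrictions, never lift the site's own.
  return new Set(headers.flatMap(h => [...parsePolicy(h)]));
}

const site = "block-xss, block-eval";
const injected = "cookieless-images"; // attacker-injected header value
console.log([...combine(site, injected)].sort());
```

Under this model an attacker who can inject a second Content-Security-Policy header gains nothing: the combined policy is at least as restrictive as the site's own.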

Balancing against these pros, the con seems to be that we hope the
additive, opt-out syntax will prod web developers into realizing that
adding "script-src inline" to the tutorial code they copy-and-paste is
more dangerous than removing "block-xss".

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Adam Barth
On Tue, Oct 27, 2009 at 12:39 PM, Daniel Veditz dved...@mozilla.com wrote:
 I don't think we're having a technical argument, and we're not getting
 the feedback we need to break the impasse in this limited forum.

I agree that we're not making progress in this discussion.

At a high level, the approach of letting sites restrict the
privileges of their own content is a rich space for security
mechanisms.  My opinion is that the current CSP design is overly
complex for the use cases it supports and insufficiently flexible as a
platform for addressing future use cases.  If I find the time, I'll
send along a full design that tries to improve these aspects along the
lines I've suggested in the foregoing discussion.

Adam


Opt-in versus opt-out (was Re: CSRF Module)

2009-10-27 Thread Adam Barth
On Tue, Oct 27, 2009 at 3:54 PM, Brandon Sterne bste...@mozilla.com wrote:
 I couldn't find a comment that summarizes the model you are proposing so
 I'll try to recreate your position from memory of our last phone
 conversation.

I'll try to find the time to write a complete specification.

 I believe you advocate a model where a site specifies the directives it
 knows/cares about, and everything else is allowed.

The design I suggest is simpler than this.  The site just lists which
restrictions it would like applied to its content.  Each directive is
purely subtractive: adding more directives only further restricts what
the site can do.

 This model would make
 the default allow directive unnecessary.  The main idea is to allow sites
 to restrict the things they know about and not have to worry about
 inadvertently blocking things they don't consider a risk.

That's correct.  I also think it makes sense to package the
restrictions into meaningful directives that address specific
threats.

 My main objection to this approach is that it turns the whitelist approach
 we started with into a hybrid whitelist/blacklist.

The design is a pure blacklist.  Just like turning off unused
operating system services, content restrictions should let web
developers turn off features they aren't using.

 The proposal doesn't
 support the simple use case of a site saying:
 I only want the following things (e.g. script and images from myself).
  Disallow everything else.

The problem is that "everything else" is ill-defined.  Should we turn
off canvas?  That's a thing that's not a script or an image from
myself.  CSP, as currently designed, has a hard-coded universe of
things it cares about, which limits its use as a platform for
addressing future use cases.  It is a poor protocol that doesn't plan
for future extensibility.

 Under your proposal, this site needs to explicitly opt-out of every
 directive, including any new directives that get added in the future.

Not really.  When we invent new directives, sites can opt in to them
by adding them to their policy.  Just like you can opt in to new HTML5
features by adding new HTML tags to your document.

 We're
 essentially forcing sites to maintain an exhaustive blacklist for all time
 in order to avoid us (browsers) accidentally blocking things in the future
 that the site forgot to whitelist.

Web developers are free to ignore CSP directives that mitigate threats
they don't care about.  There is no need for web developers to
maintain an exhaustive list of anything.

 Under your proposed model, a site will continue to function correctly only
 in the sense that nothing will be blocked in newer implementations of CSP
 that wouldn't also have been blocked in a legacy implementation.

That's correct.  The semantics of a given CSP policy does not change
as new directives are invented and added to the language, just as the
semantics of an old HTML document doesn't change just because we
invented the canvas tag.

 From my
 perspective, the blocking occurs when something unexpected by the site was
 included in the page.  In our model, the newer implementation, while
 potentially creating an inconsistency with the older version, has also
 potentially blocked an attack.

You're extremely focused on loading resources and missing the bigger picture.

 Are you suggesting that a blocked resource is more likely to have come from
 a web developer who forgot to update the CSP when s/he added new content
 than it is to have been injected by an attacker?

I'm not suggesting this at all.  Nothing in my argument has to do with
probabilities.

 This seems like a
 dangerous assumption.  All we are getting, in this case, is better
 consistency in behavior from CSP implementation-to-implementation, but not
 better security.

Consistency between implementations is essential.  Mitigating important
threats is also essential.  Neither is more important than the other.

 2) Modularity.  We would be free to group the directives into whatever
 modules we liked because there would be no technical interdependence.

 I actually don't see how opt-in vs. opt-out has any bearing at all on module
 interdependence.  Maybe you can provide an example?

Sure.  Suppose I want to implement enough of CSP to let web developers
protect themselves from Type-I and Type-II XSS (e.g., because I view
that as the lion's share of the benefit).  How can I do that in the
current CSP design without affecting the targets of XMLHttpRequest?
Surely you agree that restricting the targets of XMLHttpRequests has
little (if anything) to do with mitigating Type-I or Type-II XSS, yet
these parts of CSP are so interdependent that I'm forced to implement
all of them or none of them.

 Let's also not forget that CSP modularity really only helps browser vendors.

Complexity hurts everyone.  The current monolithic CSP design is
overly complex for the security it provides.  There are much simpler
designs that provide the same security benefits.


CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 8:58 AM, Mike Ter Louw mter...@uic.edu wrote:
 I've added a CSRF straw-man:

 https://wiki.mozilla.org/Security/CSP/CSRFModule

 This page borrows liberally from XSSModule.  Comments are welcome!

Two comments:

1) The attacker goal is very syntactic.  It would be better to explain
what the attacker is trying to achieve instead of how we imagine the
attack taking place.

2) It seems like an attacker can easily circumvent this module by
submitting a form to attacker.com and then generating the forged
request (which will be sent with cookies because attacker.com doesn't
enable the anti-csrf directive).

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 9:52 AM, Mike Ter Louw mter...@uic.edu wrote:
 I agree.  It seems anti-csrf (as currently defined) would be most beneficial
 for defending against CSRF attacks that don't require any user action beyond
 simply viewing the page (e.g., <img src=attack>).

Maybe we should focus the module on this threat more specifically.  My
understanding is that this is a big source of pain for folks who
operate forums, especially for user-supplied images that point back to
the forum itself.  What if the directive was something like
cookieless-images and affected all images, regardless of where they
were loaded from?
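A sketch of what such a directive might mean at the request level (assumption: "cookieless-images" is only a name floated in this thread; the request model and field names here are hypothetical):

```javascript
// Hypothetical model of the "cookieless-images" idea: when the
// directive is active, image subresource requests omit credentials
// (cookies), regardless of the image's origin or any redirects.
function buildImageRequest(url, directives) {
  const cookieless = directives.includes("cookieless-images");
  return { url, credentials: cookieless ? "omit" : "include" };
}

// A user-supplied forum "avatar" pointing back at the forum itself can
// no longer ride the viewer's session cookie:
const req = buildImageRequest("https://forum.example/admin/delete?id=1",
                              ["cookieless-images"]);
console.log(req.credentials); // "omit": the session cookie is not attached
```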

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 10:15 AM, Mike Ter Louw mter...@uic.edu wrote:
 I think this is a good start, and should be an option for sites that don't
 want CSP to provide any other CSRF restrictions.  I've added an additional
 directive to the wiki, but it needs further definition.

I think it might be better to focus this module on the forum poster
threat model.  Instead of assuming the attacker can inject arbitrary
content, we should limit the attacker to injecting content that is
allowed by popular forum sites (e.g., bbcode).  At a first guess, I
would limit the attacker to text, hyperlinks, and images.  (And maybe
bold / italics, if that matters.)

On Thu, Oct 22, 2009 at 10:16 AM, Devdatta dev.akh...@gmail.com wrote:
 I don't understand. In each of the cases above, the attacker site will
 not enable the directives and img requests or form requests from his
 page will cause a CSRF to occur.

We might decide to concern ourselves only with zero click attacks.
Meaning that once the user has clicked on the attacker's content, all
bets are off.  If we imagine a 1% click-through rate, then we've
mitigated 99% of the problem.

On Thu, Oct 22, 2009 at 10:19 AM, Devdatta dev.akh...@gmail.com wrote:
 requiring it to implement this policy regardless of the running script
 context would require the UA to maintain a cache of policies for each
 site the user has visited. This is against the requirements of the
 base module. And I for one am against any such type of caching
 requirement in the UA.

I agree that directives should affect only the current page.

On Thu, Oct 22, 2009 at 10:31 AM, Mike Ter Louw mter...@uic.edu wrote:
 For image CSRF, some protection would be required against redirection.
 Either redirection must be disallowed, or anti-csrf needs to be enforced
 for all redirections until the resource is located.  But I'm not sure if
 the latter is going to work if CSP policies are not composeable, and any
 of the redirections or the image itself defines a CSP policy.

I agree that cookieless-images should affect all redirects involved in
loading the image.

 Form requests to attacker.com would presumably be blocked, as
 attacker.com isn't in |self| nor the whitelist.  So the attacker won't
 be able to direct the user to a page without anti-csrf protection using
 forms.  But again this requires some enforcement of the whitelist during
 any redirects.

I think we should assume that the attacker cannot inject form elements
because this is uncommon in forum web sites.

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 12:36 PM, Mike Ter Louw mter...@uic.edu wrote:
 In this case, this boils down to: should CSP directives be threat-centric or
 content-type-centric?  Alternatively, this may be an example of CSP being
 too granular.

I suspect we'll need to experiment with different approaches before we
have a good idea how to answer this question.  My intuition tells me
that we'd be better off with a threat-centric design, but it's hard to
know ahead of time.

On Thu, Oct 22, 2009 at 12:53 PM, Mike Ter Louw mter...@uic.edu wrote:
 Is it acceptable (not too strict) to block all form submission to non-self
 and non-whitelisted action URIs when the anti-csrf directive is given?  If
 so, then the above usability issue may be moot: we can have anti-csrf imply
 an as-yet-undefined directive that blocks form submission.

Instead of bundling everything together into anti-csrf, we might be
better off with a directive to control where you can submit forms,
e.g., form-action, but we seem to be getting far afield of the
problem you're trying to solve.
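The floated "form-action" idea can be sketched as a simple origin check (assumption: the directive was only being proposed in this thread; the function and list names are illustrative):

```javascript
// Hypothetical "form-action" check: a form may only submit to the
// document's own origin or to origins on an explicit allow-list.
function formSubmissionAllowed(targetOrigin, selfOrigin, allowList) {
  return targetOrigin === selfOrigin || allowList.includes(targetOrigin);
}

console.log(formSubmissionAllowed("https://attacker.example",
                                  "https://forum.example", [])); // false
console.log(formSubmissionAllowed("https://forum.example",
                                  "https://forum.example", [])); // true
```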

At a high level, I'm glad that you took the time to add your ideas to
the wiki, and I hope that other folks will do the same.  My personal
opinion is that the current design has room for improvement,
particularly around clarifying precisely what problem the module is
trying to solve, but my opinion is just one among many.  I'd like to
encourage more people to contribute their ideas in the form of
experimental modules, and hopefully the best ideas will rise to the
top.

Adam


Required CSP modules (was Re: CSRF Module)

2009-10-22 Thread Adam Barth
See inline.

On Thu, Oct 22, 2009 at 2:22 PM, Brandon Sterne bste...@mozilla.com wrote:
 I'd like to take a quick step back before we proceed further with the
 modularization discussion.  I think it is fine to split CSP into modules,
 but with the following caveats:

 1. Splitting the modules based upon different threat models doesn't seem to
 be the right approach.  There are many areas where the threats we want to
 mitigate overlap in terms of browser functionality.  A better approach,
 IMHO, is to create the modules based upon browser capabilities.  With those
 capability building blocks, sites can then construct policy sets to address
 any given threat model (including ones we haven't thought of yet).

It's unclear to me which organization is better.  I'd be in favor of
picking one and giving it a try.

 2. The original goal of CSP was to mitigate XSS attacks.

I agree that XSS mitigation is the most compelling use case for CSP.

 The scope of the
 proposal has grown substantially, which is fine, but I'm not at all
 comfortable with a product that does not require the XSS protections as the
 fundamental core of the model.  I think if we go with the module approach,
 the XSS protection needs to be required, and any additional modules can be
 optionally implemented.

I'm not sure it matters that much whether we label the XSS mitigations
recommended or required.  I suspect every browser vendor that
implements CSP will implement them.  If you'd prefer to label them
required, I'm fine with that.

  I propose that the default behavior for CSP (no
 optional modules implemented) is to block all inline scripts (opt-in still
 possible) and to use a white list for all sources of external script files.

This is a separable issue.  I'm not sure whether it's better to opt-in
or opt-out of this behavior.  Opting-in makes policy combination
easier to think about (the tokens just accumulate).

I'd prefer if sites had to opt-in to the block-eval behaviors because
I suspect complying with those directives will require substantial
changes to sites.

 The script-src directive under the current model serves this function
 perfectly and doesn't need to be modified.  (We can discuss how plugin
 content and CSS, which can be vectors for script, should be governed by this
 core XSS module.)

That depends on whether we decide opt-in or opt-out is better for
controlling inline script and eval-like APIs.

 As a straw man, the optional modules could be:
  * content loading (e.g. img-src, media-src, etc.)
  * framing (e.g. frame-src, frame-ancestors)
  * form action restriction
  * reporting (e.g. report-uri)
  * others?

I'd put frame-src in with content loading, but otherwise this seems fine.

 I'm definitely not opposed to splitting apart the spec into modules,
 especially if it helps other browser implementers move forward with CSP.  I
 REALLY think, though, that the XSS protections need to be part of the base
 module.

I don't think it matters that much whether the XSS mitigations are
part of the base module or whether they're in a separate
required/recommended module.  I think the main issue here is making
the spec easy to read.

Adam


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Adam Barth
On Thu, Oct 22, 2009 at 5:22 PM, Brandon Sterne bste...@mozilla.com wrote:
 Take XSS and history stealing for example.  Assume these are separate
 modules and each is responsible for mitigating its respective threat.
 Presumably the safe history module will prevent a site from being able
 to do getComputedStyle (or equivalent) on a link from a different
 origin.  But an attacker could still steal history from any site that he
 can inject script into by document.writing the list of URLs into the
 page, testing if they are visited, and sending the results back to the
 attacker's site.  Granted, this is a contrived example and the attacker
 could probably do worse than history stealing if we're allowing that he
 can inject arbitrary script.  But the point is that the threat of
 history stealing is not fully mitigated by changes to CSS for
 cross-origin links.  A complete mitigation of the threat requires both
 altering the behavior of getComputedStyle as well as disabling
 non-trusted scripts in the document.

I don't think this argument makes sense.  When people complain about
history stealing, e.g. on
https://bugzilla.mozilla.org/show_bug.cgi?id=14, they're not
worried about the case when their site has XSS.  They're worried about
a much weaker attacker who simply operates a web site.

 Why, though, would we ever want to
 change from an opt-in to an opt-out model?

I don't think we'll want to change in the future.  We should pick the
better design now and stick with it (whichever design we decide is
better).

 I think it's better to have sites be explicit with their policies, as it
 forces them to understand the implications of each part of the policy.
 If we provide pre-canned policies, sites may wind up with incorrect
 assumptions about what is being restricted.

I agree, but if you think sites should be explicit, doesn't that mean
they should explicitly opt-in to changing the normal (i.e., non-CSP)
behavior?

 The situation I
 want to avoid is having browsers advertise (partial) CSP support and
 have websites incorrectly assume that they are getting XSS protection
 from those browsers.

I don't understand.  There is no advertisement mechanism in CSP.  Do
you mean in the press?

What's actually going to happen is that thought leaders will write
blog posts with sample code and non-experts will copy/paste it into
their web sites.  Experts (e.g., PayPal) will read the spec and test
various implementations.

As for the press, I doubt anything we write in the spec will have much
impact on how the press spins the story.  Personally, I don't care
about what the press says.  We should design the best mechanism on a
technical level.

 Also, it seems unlikely to me that successful
 mitigations can be put in place for the other threats if XSS is still
 possible  (I can provide examples if people are interested, but I have
 to run to catch a train, unfortunately).

It seems very reasonable to mitigate history stealing and ClickJacking
without using CSP to mitigate XSS.  As a web developer, I can't do
anything about history stealing myself.  I need help from the browser.
 On the other hand, I can do something about XSS myself.

 If we can agree that XSS is
 the main threat that we want to address with CSP, then I think we can
 also agree to make it a required module.

I think we're all agreed on this point.  Our current disagreements appear to be:

1) Whether frame-src should be in the resources module or in the same
module as frame-ancestors.
2) Whether sites should have to opt-in or opt-out to disabling inline
script and/or eval-like APIs.

I have a few more minor points, but we can get to those after we
settle the above two.

I think the way forward is for me (or someone else if they're
interested) to write up our current thinking on the wiki.

Adam


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 12:47 PM, Mike Ter Louw mter...@uic.edu wrote:
 The threat model of HistoryModule, as currently defined, seems to be
 precisely the threat model that would be addressed by a similar module
 implementing a per-origin cache partitioning scheme to defeat history timing
 attacks.

Good point.  I've added cache timing as an open issue at the bottom of
the HistoryModule wiki page.

 If these are to be kept as separate modules, then perhaps the threat model
 should be more tightly scoped, and directive names should be specific to the
 features they enable?

It's somewhat unclear when to break things into separate modules, but
having one module per threat seems to make sense.  The visited link
issue and the cache timing issue seem related enough (i.e., both about
history stealing) to be in the same module.

Adam


Re: Straw-man XSSModule for CSP

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 1:26 PM, Mike Ter Louw mter...@uic.edu wrote:
 I'm not sure if hacking at the straw man should occur on the list or on the
 wiki.  Please let me know if it should go to the wiki.

I'd be inclined to discuss feedback on the mailing list where others
can see and comment most easily.

 Threat Model:

 We further assume the web developer wishes to prevent the attacker from
 achieving any of the following goals:

  * The attacker must not learn the contents of the target web site's
 cookies.

 A broader definition than cookie stealing that also covers integrity issues
 like defacement could be:

  * The attacker's sequence of injected bytes are interpreted as one or more
 script instructions and executed with the privileges of the (CSP-protected)
 document.

I tried to tighten down the attacker's goals to keep a narrow focus
for the module.  Running script with the page's privileges seems more
like a means to an end rather than a goal unto itself.  Although you
could argue that stealing the cookie is also just a means to a
different end.  Either is probably fine, but I'm inclined to leave it
as is for now.

 If the purpose of the threat model is to scope out the protections afforded
 by the module, then the following may be more appropriate:

  * The attacker's sequence of injected bytes are interpreted as an inline
 script (i.e., script element without |src| attribute, script element
 attribute, javascript: URI, dynamic CSS, etc.)

  * The attacker's sequence of injected bytes are interpreted as a reference
 to external script, where the external script is located at a different
 origin to the document protected by CSP

  * The attacker's sequence of injected bytes are compiled as a result of
 executing an allowed script (e.g., via eval(), setTimeout(), setInterval(),
 or Function constructor)

These are too syntactic for an attacker goal.  They pre-suppose a
particular solution.

 block-xss directive:

 The effects of this directive are given in a default-allow style, which
 could lead to gaps in protection.  (Some possible gaps are commented on in
 the Open Issues section.)  Could the effects of block-xss be specified as
 exceptions to a default-deny policy?

This is a good point.  I wrote this as a series of MUST NOT
requirements to make it easy to implement and test.  We should do a
better job of explaining the why behind the requirements.  If we've
missed anything, we should add more requirements to make sure each
implementation behaves correctly.  Maybe we should add a catch-all
MUST NOT requirement that covers anything we've forgotten?

 Open Issues section:

 IE's CSS behaviors and expressions could fit in the same category as XBL
 bindings, as they are non-standard features that can be used as XSS vectors

I've added this to the list of open issues.  The catch-all MUST NOT
might be sufficient to get these.  We can of course mention them in a
non-normative note to remind implementors.

Thanks!
Adam


Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
In the modular approach, this is not true.  You simply send this header:

X-Content-Security-Policy: safe-history

The requirements to remove inline script, eval, etc aren't present
because you haven't opted into the XSSModule.  You can, of course,
combine them using this sort of policy:

X-Content-Security-Policy: safe-history, block-xss

but you certainly don't have to.

Adam


On Tue, Oct 20, 2009 at 1:59 PM, Devdatta dev.akh...@gmail.com wrote:
 The history enumeration threat is a simple threat with a simple
 solution. Opting into Safe History protection shouldn't require me to
 do all the work of opting into CSP. In addition, I don't see any
 infrastructure that is needed by this feature that is in common with
 CSP.

 Let's say I am a website administrator, and I am concerned about this
 particular threat . Opting into CSP involves a lot of work -
 understanding the spec, noting down all the domains that interact
 everywhere on my site, converting inline scripts, evals and
 javascript: URLs to corrected code, etc.  My fear is that this will
 make admins write policies that are too lenient (say with allow-eval)
 , just to get the safe history feature.

 Cheers
 Devdatta

 2009/10/20 Adam Barth abarth-mozi...@adambarth.com:
 On Tue, Oct 20, 2009 at 12:50 PM, Devdatta dev.akh...@gmail.com wrote:
 Regarding history enumeration -- I don't see why it should be part
 of CSP. A separate header - X-Safe-History can be used.

 I think one of the goals of CSP is to avoid having one-off HTTP
 headers for each threat we'd like to mitigate.  Combining different
 directives into a single policy mechanism has advantages:

 1) It's easier for web site operators to manage one policy.
 2) The directives can share common infrastructure, like the reporting
 facilities.

 Adam




Re: Comments on the Content Security Policy specification

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 1:42 PM, Collin Jackson
mozi...@collinjackson.com wrote:
 I think we're completely in agreement, except that I don't think
 making CSP modular is particularly hard. In fact, I think it makes the
 proposal much more approachable because vendors can implement just
 BaseModule (the CSP header syntax) and other modules they like such as
 XSSModule without feeling like they have to implement the ones they
 think aren't interesting. And they can experiment with their own
 modules without feeling like they're breaking the spec.

I've factored the BaseModule out of the XSSModule, so it's clear that
you could implement the HistoryModule without the XSSModule.  I'd be
happy to take a crack at breaking up the main CSP spec into modules on
the wiki if you'd like to see what that would look like.  I don't
think it would be that hard.

Adam


Versioning vs. Modularity (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Adam Barth
On Tue, Oct 20, 2009 at 3:21 PM, Lucas Adamski lu...@mozilla.com wrote:
 I've been a firm believer that CSP will evolve over time, but that's an
 argument for versioning, not modularity.  We are as likely to have to
 modify existing behaviors as to introduce whole new sets.  It's also not a
 reason to split the existing functionality into modules.

I'm not sure versioning is the best approach for web technologies.
For example, versioning has been explicitly rejected for HTML,
ECMAScript, and cookies.  In fact, I can't really think of a
successful web technology that uses versioning instead of
extensibility.  Maybe SSL/TLS?  Even there, the modern approach is to
advance the protocol with extensions (e.g., SNI).

Adam


ClickJackingModule (was Re: Comments on the Content Security Policy specification)

2009-10-20 Thread Adam Barth
Thanks Devdatta.  One of the nice things about separating the
clickjacking concerns from the XSS concerns is that developers can
deploy a policy like

X-Content-Security-Policy: frame-ancestors self

without having to make sure that all the setTimeout calls in their web
app use function objects instead of strings.
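The setTimeout distinction can be illustrated with a simplified guard (assumption: a toy model of the block-eval idea from the straw-man XSSModule, not actual browser behavior):

```javascript
// Simplified model of "block-eval": the string forms of eval-like APIs
// are compiled like eval() and would be refused, while function objects
// pass through untouched.
function guardedSetTimeout(handler, delay) {
  if (typeof handler === "string") {
    throw new Error("block-eval: string compilation is disabled");
  }
  return setTimeout(handler, delay);
}

guardedSetTimeout(() => console.log("tick"), 0); // function object: allowed

try {
  guardedSetTimeout("tick()", 0); // string: would be compiled, so blocked
} catch (e) {
  console.log(e.message); // block-eval: string compilation is disabled
}
```

A site shipping only "frame-ancestors self" never hits this check, which is why the clickjacking and XSS concerns can be deployed independently.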

Adam


On Tue, Oct 20, 2009 at 6:05 PM, Devdatta dev.akh...@gmail.com wrote:
 On a related note, just to have one more example (and for my learning)
 , I went ahead and wrote a draft for ClickJackingModule.
 https://wiki.mozilla.org/Security/CSP/ClickJackingModule

 In general I like how short and simple each individual module is.

 Cheers
 Devdatta


Re: Comments on the Content Security Policy specification

2009-10-19 Thread Adam Barth
On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
john...@mozilla.com wrote:
 Not as limited as you might like. Remember that even apparently
 non-dangerous constructs (e.g. background-image, the :visited pseudo class)
 can give people power to do surprising things (e.g. internal network ping
 sweeping, user history enumeration respectively).

I'm not arguing for or against providing the ability to
block-inline-css, but keep in mind that an attacker can do all those
things as soon as you visit attacker.com.

There are many ways for the attacker to convince the user to visit
attacker.com.  In the past, I've found it helpful to simply assume the
user is always visiting attacker.com in some background tab.  After
all, Firefox is supposed to let you view untrusted web sites securely.

Adam


Straw-man XSSModule for CSP

2009-10-17 Thread Adam Barth
Hi dev-security,

On Friday, I spoke with Sid, Brandon, and dveditz about dividing the
Content Security Policy specification into modules targeted at
specific threats.  This approach has two main benefits:

1) Different browser vendors can implement CSP incrementally by
deploying the most important modules first.
2) A modular approach gives us more flexibility if we wish to target
other threats in the future.

I've taken the liberty of sketching out a straw-man XSSModule for CSP
on the Mozilla wiki:

https://wiki.mozilla.org/Security/CSP/XSSModule

The XSSModule defines a forwards-compatible syntax for the
X-Content-Security-Policy header and defines three directives useful
for mitigating XSS vulnerabilities: (1) block-xss, (2) block-eval, and
(3) script-src.  In the common case, a web site can mitigate XSS
vulnerabilities by including the following content security policy:

X-Content-Security-Policy: block-xss

The block-eval directive lets the site further mitigate DOM-based XSS
attacks by blocking eval-like constructs.  The script-src directive
provides for finer-grained control over loading external scripts in
case the web site loads scripts from other origins.

In principle, we could factor a BaseModule out of the XSSModule that
defines the general syntax of the header and the semantics of
origin-lists, but I've created a single document for clarity.  I've
omitted a number of features from the main CSP specification (and
changed some details to improve extensibility), but we can add those
features in separate modules.  For example, we could define a
ReportingModule that contains the reporting machinery.

I welcome your feedback,
Adam


Re: fyi: Strict Transport Security (STS) specification

2009-10-10 Thread Adam Barth
On Sat, Oct 10, 2009 at 1:19 PM, Florian Weimer f...@deneb.enyo.de wrote:
 Does this address the lack of enforcement of the EV certificate
 security level (i.e. it is usually sufficient to get any
 browser-recognized certificate if I want to attack an EV site,
 *without* disabling the EV UI)?

Strict-Transport-Security does not address that threat model.  Mozilla
has proposed an extension to STS, called lockCA, that does address
that threat model.

Adam