Re: Content Security Policy feedback
Sorry I haven't been more vocal on this thread lately. I think it's important that we keep our momentum moving forward here if we hope to get something meaningful implemented any time soon. I am getting the sense that we aren't in agreement on one or two of the fundamental goals of this project, and I think it potentially jeopardizes overall progress if we are working with different base assumptions. My near-term goal is to start driving toward a stable design (if not specification) for CSP. The design is certainly still open for comments and feedback, but those discussions will be easier to resolve after we've settled the issue of project goals. More below...

On Dec 23 2008, 7:34 am, Gervase Markham g...@mozilla.org wrote:
> I am not arguing we should make CSP work a random 50% of the time. I am
> arguing that CSP is not a security model, it's a "phew, I would have just
> got stuffed, but it saved me this time" model. Security models are things
> you rely on. CSP is a second line of defence for when your security model
> fails, and it doesn't promise to save your ass every time.

I think that CSP should be considered part of the browser security model. Mike and others have made the excellent point that there are significant costs to bear for a website that wants to start using this model: policy development as well as migrating inline scripts to external script files. Websites will not be willing to pay this cost if user agents are not strongly committed to enforcing the policies. We won't be able to make security guarantees like "XSS will never happen on your site," but we can provide smaller guarantees like "inline script will not execute in this page if the CSP header is sent."
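To make that "smaller guarantee" concrete, here is a sketch of the enforcement decision a user agent would make. It uses the directive syntax CSP eventually standardized, which postdates this thread, and the function name is illustrative; real CSP's fallback from a missing script-src to default-src is deliberately simplified away.

```python
# Hypothetical sketch of the inline-script guarantee discussed above:
# given a policy string, decide whether inline script may execute.
# Directive syntax follows the later-standardized CSP, not the 2008 draft.

def inline_script_allowed(policy: str) -> bool:
    """Return True if the policy's script-src permits inline script."""
    for directive in policy.split(";"):
        parts = directive.split()
        if parts and parts[0] == "script-src":
            return "'unsafe-inline'" in parts[1:]
    # No script-src directive: simplified to "not restricted" here.
    # (Real CSP would fall back to the default-src directive.)
    return True

# A page served with this policy gets the guarantee: inline script will
# not execute, regardless of any markup injected into the page.
policy = "default-src 'self'; script-src 'self' https://cdn.example.com"
print(inline_script_allowed(policy))  # False
```

The point of the example is that the check is purely mechanical on the user agent's side, which is why the guarantee only holds if user agents commit to enforcing it unconditionally.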
I have previously agreed with Gerv's belt-and-(suspenders|braces) logic with regard to CSP, as it had twofold appeal to me: 1) it is consistent with the defense-in-depth approach found elsewhere in computer security, and 2) it provided an escape hatch from design flaws, implementation bugs, or other deficiencies later discovered in the model. It appears now, though, that this issue is impeding us a bit, and I am going to weigh in on the side of stronger commitment to policy enforcement. Perhaps a stronger design is produced as the result of a firm commitment to CSP as a part of the browser security model (or perhaps it is required by such a commitment).

_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
Re: Site Security Policy
On Jul 12, 10:35 am, Evert | Rooftop [EMAIL PROTECTED] wrote:
> Sorry if this was already brought up in this thread (or if it's a closed
> subject), but using headers vs. a policy file is a bad idea, for the
> following reasons:
> * Allows caching
> * Allows usage of the policy on a site where there's no scripting
>   available (static content servers?)
> * Allows a policy to be enforced on a domain level, instead of for every
>   HTML page
> * Removes the HEAD-before-POST requirement
> The last one is an important one for a different reason as well. PHP, as
> an example, will execute scripts the same way regardless of whether it's
> HEAD, POST or GET, so this could produce unwanted results on existing
> sites, not to mention a bandwidth and time overhead.

Hi Evert,

I appreciate your comments, and I am working hard on a set of changes to the proposal based on a lot of feedback I've received both on the newsgroup and in private communications. These changes, I think, encompass the comments you made. First, we do plan to support both HTTP headers and files for policy transmission. That request has come from a number of people, so it seems wise to give the people what they want :-) I am working hard on modifying the proposal document to include these changes, which are fairly broad. I will spare most of the details now and will post to the newsgroup when those changes have been published.

With regard to your last comment (re: PHP treating all requests equally), I don't think that's quite accurate. Applications written using the $_REQUEST super-global will suffer from that, but using $_REQUEST alone is not a best practice, and most web applications can reasonably be expected to differentiate POST, GET, and HEAD. However, this point may be moot, as we are starting to consider other options for CSRF protection rather than the pre-flight requests originally proposed. It may be the case that adding such policy requests for all cross-site POSTs would have too high an impact on bandwidth, round trips, etc.
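The $_REQUEST point can be illustrated with a sketch: an application that dispatches on the request method keeps the safe methods (GET, HEAD) free of side effects, so a pre-flight HEAD, as in the original proposal, cannot trigger a state change. This is a hypothetical handler written in Python rather than PHP, and all names are illustrative.

```python
# Sketch of why merging GET and POST parameters (as $_REQUEST does) is
# risky: the application can no longer tell a state-changing POST from an
# idle GET or HEAD probe. Dispatching on the method keeps safe methods
# side-effect free. Hypothetical handler; not from any real framework.

def handle(method: str, params: dict) -> str:
    if method in ("GET", "HEAD"):
        # Safe methods: never perform state changes, so a pre-flight
        # HEAD request has no side effects on the application.
        return "read-only view"
    if method == "POST":
        # Only an explicit POST may modify state.
        return f"updated record {params.get('id')}"
    return "method not allowed"

print(handle("HEAD", {"id": "7"}))   # read-only view
print(handle("POST", {"id": "7"}))   # updated record 7
```

An application built this way differentiates POST, GET, and HEAD as described above; one built on a merged parameter bag cannot.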
Jackson, Barth and Mitchell have written a paper on CSRF protection that utilizes a new HTTP header, Origin:
http://crypto.stanford.edu/websec/csrf/

An Origin header has also been proposed in the W3C's Access-Control spec. I would be happy to hear feedback on utilizing this model instead of the browser-based ingress/egress filtering model which was originally proposed. In my opinion, it has several benefits, most notably: 1) ease of implementation for user agents, and 2) it adds no additional round trips and minimal additional bandwidth. It will also be consistent with the Access-Control spec.

Thoughts?
-Brandon
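A rough sketch of the Origin-based model: the server checks the Origin header of state-changing requests against an allow-list, with no extra round trip. How a request with no Origin header should be treated is a policy question; the permissive choice below is only for illustration, and all names here are hypothetical.

```python
# Hedged sketch of server-side CSRF filtering using the Origin header.
# The server rejects cross-site state-changing requests whose Origin is
# not on an allow-list; safe methods pass through untouched. Names and
# the missing-header policy are illustrative assumptions.

TRUSTED_ORIGINS = {"https://example.com"}  # hypothetical site origin

def csrf_check(method: str, headers: dict) -> bool:
    """Return True if the request should be allowed to proceed."""
    if method in ("GET", "HEAD"):
        return True  # safe methods are not filtered
    origin = headers.get("Origin")
    if origin is None:
        # No Origin header (e.g. a legacy client). Shown permissive
        # here for illustration; a deployment must pick a policy.
        return True
    return origin in TRUSTED_ORIGINS

print(csrf_check("POST", {"Origin": "https://evil.example"}))  # False
print(csrf_check("POST", {"Origin": "https://example.com"}))  # True
```

Note how the check costs one dictionary lookup per request, which is the "no additional round trips, minimal additional bandwidth" benefit mentioned above.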
Re: Site Security Policy
On Jul 10, 8:47 am, [EMAIL PROTECTED] wrote:
> The problem is that although solutions exist to both of these problems,
> developers have not properly implemented the solution. With your approach
> of SSP and safe requests, you are again relying on the developer to use
> the solution correctly, and put all modifications behind a POST request.
> You have not removed the reliance on the developer to code correctly,
> just simply shifted it to a different 'thing' that they have to do.

Perhaps I am misunderstanding this point. Are you suggesting that an ideal model wouldn't require that web developers do anything differently than they currently do? Site Security Policy is intended to be a belt-and-suspenders tool to protect sites and users, but we are still advocating that developers keep their web applications free of vulnerabilities.

> From previous conversation with Terri, asking for an example of where a
> GET request can be used to effect change on a web site is asking for us
> to find a security vulnerability in a website, since this is basically
> the definition of a XSRF. Large sites are going to be well protected
> against this type of thing, but I'm sure if you look at any of the recent
> CERT vulnerabilities regarding XSRF or XSS you'll notice that they are
> all exploitable through GET requests.

The restriction of CSRF protection to POST was not because we think CSRF isn't common via GET; it is because there are too many ways that cross-site GETs are possible, and in current legitimate use, to make mitigating them worthwhile.
Site Security Policy
I've recently published a proposal for Site Security Policy, a framework for allowing sites to describe how content in their pages should behave (thanks, Gerv):
http://people.mozilla.com/~bsterne/site-security-policy

I'm creating a placeholder for any discussion that comes out of that publication. I hope to collect here people's ideas for proposed functionality, as well as other details which may be useful in creating a common specification.