Re: Banned on MDN for adding referer warnings, please help.

2018-10-08 Thread Bil Corry
On Mon, Oct 8, 2018 at 3:23 PM R0b0t1  wrote:

> If they did ban him with no warning for proposing a feature(?) then I
> think it is worth mentioning. There have been other very strange
> executive decisions that I think need to be discussed as well, mostly
> related to how ads are served in Firefox, but I don't want to bring
> them up right now.
>

If you read through the thread, it's clear Mozilla asked him multiple times
to stop adding red warning banners to MDN, then revoked his access when he
didn't comply.  Do you think there is something different that Mozilla
could have done to handle this situation better?


- Bil
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Banned on MDN for adding referer warnings, please help.

2018-10-08 Thread Bil Corry
Hi Mark,

Wow, there's a lot to unpack in this email thread.

I'm not clear on what you're asking for; I think you want community support
to add back your warning banners regarding referrer privacy issues.  If so,
please send a new email to the community asking for opinions about adding a
warning banner, and mention that Mozilla is against adding the banners, but
please do not quote this entire thread.  I have some thoughts about your
ask, but this thread is now unfortunately about the nature of your
disagreement with Mozilla instead of the substance.  A new thread will
allow you to seek community feedback about your specific ask.

Regarding Mozilla's behavior and your own: my interpretation, reading
through the thread, is that Mozilla heard your feedback, incorporated it
into MDN according to how they manage their content, and asked you not to
add the warning label.  It escalated, and they revoked your access to modify
content.


- Bil


On Mon, Oct 8, 2018 at 9:36 AM Mark Richards 
wrote:

> Hey
>
> I've had my MDN account banned for trying to add referer warnings onto 
> and  elements, or worse, banned for involving authorities who are
> investigating the mess of microtargeting. It appears MDN is refusing a
> warning on the grounds that it isn't a nice presentation, regardless of how
> irresponsible it is to not include it.
>
> I need help, and as the security devs, I hope that, given the extent to
> which Firefox has added config features and policies to try to reduce the
> referer mess, there are community members who understand how significant
> this is. Whatever Firefox tries, other browsers have a bigger market share.
> Documenting the referer risks in MDN does stand a chance of better
> educating developers so they start paying attention to their third
> parties, and for many it is imperative to do so given GDPR changes.
>
> A developer I know who recently finished a three-month intensive course on
> the web advised there was no coverage of referer, which matches my CS
> degree experience over a decade ago. This isn't a one-off to me; I have met
> many developers who don't understand the risk or even have misconceptions
> about how it works (like thinking it's not sent on https sites). However,
> this developer did say the course used MDN to teach about web features, and
> this matches my development experience: MDN is very much respected by the
> dev community. It may well be the case that MDN has a greater market share
> in the dev community as an educational resource than Firefox does with
> consumers as a browser.
>
> The Mozilla security blog has made multiple references to referers over the
> years, but most people are still having their browser history distributed
> piecemeal by it. Browsers still don't protect referers by default, and even
> if that changed tomorrow it might be 5-10 years before everyone upgrades
> their various devices.
>
> https://blog.mozilla.org/security/2015/01/21/meta-referrer/
>
> https://blog.mozilla.org/security/2018/10/02/supporting-referrer-policy-for-css-in-firefox-64/
>
> https://blog.mozilla.org/security/2018/01/31/preventing-data-leaks-by-stripping-path-information-in-http-referrers/
>
> With GDPR, the rules changed and are being copied in other jurisdictions.
> Businesses must have accountability and privacy by default, so referer is
> in conflict with local legislation not just because privacy or security
> breaches may have happened, but primarily because a business has to assess,
> document and decide on the risks of which systems get data about a user of
> their sites. Profiling of users, made possible by referers by default, was
> one of the motives for GDPR, so it is rightly part of regulators'
> investigations now, but I'm not sure the regulators realised that the
> technical feature at the centre of it all is the referer, and how broken
> the web is for privacy. Tracking pixels are an image, a cookie and
> referers... Cookies have long been part of data protection discussions and
> laws, yet while you can profile someone without a cookie (via IP address),
> you can't do it without the referer (unless you explicitly add the same
> functionality to the URL by code, at which point it is an explicit act and
> can be justified by the author). Many places shouldn't get a referer, like
> CDNs. Most CDNs need to know whom to charge (an API key?), not a full
> referer.
>
> China is very interesting; the headlines aren't necessarily fines for data
> protection violations but over 11,000 arrests. How many of those are web
> developers or directors of companies because of their website? How many
> will it be in the future as regulators realise it's not the ad companies
> that steal this data, but that it's given away by websites failing to
> protect users' referers?
>
> https://asia.nikkei.com/Business/Business-Trends/China-s-strict-new-cybersecurity-law-ensnares-Japanese-companies
>
> The UK has criminal prosecution options in its data protection laws, and I
> hope that those in the UK responsible for keeping tracking on the 

Re: App Tabs in Firefox 4

2010-06-27 Thread Bil Corry
Sid Stamm wrote on 6/25/2010 12:17 PM: 
 Once it is an app tab, any links directing the user off the site
 will open in a new standard tab, so that the user won't be switching 
 top-level document domains in the app tab.

A couple of years back, I had a similar idea I called "pinned tabs" [1], but 
the focus was exploring ways to passively log out the user.  With App Tabs, do 
the tabs ever get closed?  And if not, what effect will that have on sites that 
use a tickler to determine if the user is still on the site?  I'm wondering if 
for some sites, the user will never be logged out.


- Bil


[1] (read the last line) 
https://lists.owasp.org/pipermail/owasp-intrinsic-security/2008-November/72.html


CSP: MUST not or MUST NOT?

2010-06-12 Thread Bil Corry
Perhaps I've been hanging around the IETF lists too long, but shouldn't all of 
the "MUST not" instances in the CSP spec really be "MUST NOT"?

http://www.ietf.org/rfc/rfc2119.txt

- Bil



Re: Allow CSP on HTML meta tags

2010-02-28 Thread Bil Corry
Axel Dahmen wrote on 2/28/2010 5:28 AM: 
 I've read through the CSP specs
 (https://wiki.mozilla.org/Security/CSP/Spec#Source_Expression_List) and the
 Talk (https://wiki.mozilla.org/Talk:Security/CSP/Spec)...
 
 What I'm missing is a statement about allowing CSP directives in HTML
 meta
 tags.
 
 Use case:
 -
 My provider just provides the ability to upload HTML and related content,
 but they don't provide an option to manipulate the server's output to any
 degree. So configuring HTTP response headers is not possible here. However,
 I want to protect my web pages just like any other. So the only option I
 would have to get CSP applied would be through using HTML meta tags.

CSP used to support meta policies, but that support was removed.  You probably 
want to read through these:

http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/571f1495e6ccf822/cf15e2be59a72734?lnk=gstq=meta#cf15e2be59a72734

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/c0f1a44e4fb98859/31465e3d46ccf806?lnk=gstq=meta#31465e3d46ccf806

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/87ebe5cb9735d8ca/f9167000431aa6a4?lnk=gstq=meta#f9167000431aa6a4

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/571f1495e6ccf822/5f75c00c023696bd?lnk=gstq=meta#5f75c00c023696bd

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/87ebe5cb9735d8ca/87796e2d9caeb36f?lnk=gstq=meta#87796e2d9caeb36f

There's probably more:


http://groups.google.com/group/mozilla.dev.security/search?group=mozilla.dev.securityq=metaqt_g=Search+this+group


- Bil


Re: Firefox Add-ons

2010-02-08 Thread Bil Corry
I think such a document could go a long way to help people understand how 
Mozilla protects them, the limitations that are faced, and what happens when 
something goes wrong.  If they still feel like it isn't enough, then they can 
be prompted to suggest improvements to the process.

Speaking of improving the process, I agree with Daniel Veditz that the 
experimental add-ons should be made available on another site.  Even the term 
'experimental' gives the impression (to me anyway) that the add-on is 
potentially beta quality, not potential pwnage.  Maybe 'unverified add-on' 
would be more appropriate.


- Bil


Sid Stamm wrote on 2/8/2010 3:56 PM: 
 Hi Bil,
 
 I don't believe we have a document precisely along the lines of what you
 suggest (as far as I know) but we have these other documents that are
 sometimes helpful:
 
 https://developer.mozilla.org/en/Security_best_practices_in_extensions
 https://addons.mozilla.org/en-US/developers/docs/policies
 https://addons.mozilla.org/en-US/developers/docs/policies/reviews
 
 -Sid
 
 On 2/7/10 10:02 AM, Bil Corry wrote:
 Eddy Nigg wrote on 2/6/2010 7:04 AM: 
 Isn't it about time that extensions and applications get signed with
 verified code signing certificates? Adblock Plus is doing for a while
 now I think, perhaps other should too?

 Because this isn't really comforting:
 http://www.theregister.co.uk/2010/02/05/malicious_firefox_extensions/

 Not sure if it already exists, but it would be helpful if there was a 
 document that describes the security practices of AMO; something that 
 outlines the responsibilities of Mozilla, of the AMO developers, and the 
 users, along with outlining the risks involved and what happens when they're 
 realized (such as using the block mechanism).  That way, when news such as 
 the above is reported, this document can be referenced.

 Threats to address, that at least I'm aware of:

 (1) Malware in add-ons (see above article)

 (2) Trusted add-ons subverting each other

  
 http://hackademix.net/2009/05/04/dear-adblock-plus-and-noscript-users-dear-mozilla-community/
  
 (3) Untrusted add-ons doing bad stuff.

 (4) Fake add-ons posing as a trusted add-on:

  http://www.webappsec.org/lists/websecurity/archive/2010-01/msg00128.html

 (5) Trusted add-ons that pose a security risk:

  
 http://blog.mozilla.com/security/2009/10/16/net-framework-assistant-blocked-to-disarm-security-vulnerability/

 (6) Subverting the update mechanism (this is for FF, but might apply to 
 add-on updates too?):

  
 http://ha.ckers.org/blog/20100204/releasesmozillaorg-ssl-and-update-fail/

 (7) Subverting the blocklist mechanism (to disable, say, noscript):

  https://support.mozilla.com/en-US/kb/Add-ons+Blocklist


 I'm sure there are many many more.

 BTW, this presentation from OWASP DC names Eddy Nigg, Giorgio Maone, and 
 developers at Mozilla (among others) as "The 10 least-likely and most 
 dangerous people on the Internet":

  
 http://www.owasp.org/images/1/1f/The_10_least-likely_and_most_dangerous_people_on_the_Internet_-_Robert_Hansen.pdf


 - Bil
 



Re: logout rel extension

2009-11-26 Thread Bil Corry
Justin Dolske wrote on 11/24/2009 10:33 PM: 
 On 11/24/09 12:16 AM, Bil Corry wrote:
 We eventually came up with the idea of using a rel extension[2] to
 specify a logout feature[3]; the browser pings the server when all
 related windows/tabs are closed.
 
 I'm not sure if the "when all related windows/tabs are closed" part is
 interesting (eg, what to do when that happens because the browser
 crashed, or the browser doesn't support the rel extension?).

Yes, the fallback method would be a session expiration of some kind.


 OTOH, there has been some brainstorming around how to improve identity
 and logins in general. Form-based password management is basically a
 hack, so it would be nice to have a more formal syntax to tell the
 browser how to login and logout from the site. We can (in theory) mostly
 do this with HTTP authentication, but logins based on forms and cookies
 are far more common.

It may be that this problem is better solved by a group working on new UA 
authentication methods.


- Bil



logout rel extension

2009-11-24 Thread Bil Corry
Some time ago on the HTML5 list[1], I brought up the problem that there wasn't 
a straightforward way for a server to determine when the user had closed all 
windows/tabs.  We eventually came up with the idea of using a rel 
extension[2] to specify a logout feature[3]; the browser pings the server 
when all related windows/tabs are closed.

I am soliciting feedback on the idea: is this something that Mozilla would 
consider adding to Firefox?

Currently, the only way that I'm aware of to determine when a user has closed 
all related windows/tabs is by having the browser poll the server at a regular 
interval, and once the polling stops, the server knows the user is no longer 
actively using the site.

Thanks,

- Bil


[1] When closing the browser thread:

http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2008-December/thread.html#17764

http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-April/thread.html#19406

http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-June/thread.html#20150

[2] http://wiki.whatwg.org/wiki/RelExtensions
[3] http://wiki.whatwg.org/wiki/LogoutRelExtension



Re: Autoconfig ISP fetch security review

2009-11-05 Thread Bil Corry
Gervase Markham wrote on 11/5/2009 2:00 AM: 
 On 05/11/09 04:58, Bil Corry wrote:
 You may want to consider registering a /.well-known/ path for this,
 which it seems perfectly suited for:

  http://tools.ietf.org/html/draft-nottingham-site-meta
 
 That draft seems like a "let's make the best of it" way of dealing with
 an unfortunate inevitability :-|.

For anyone who has suggestions or recommendations to improve it, it's being 
discussed on IETF apps-discuss:

https://www.ietf.org/mailman/listinfo/apps-discuss


- Bil



Re: dns-prefetch

2009-07-24 Thread Bil Corry
Johnathan Nightingale wrote on 7/24/2009 9:26 AM: 
 On regular http connections, this kind of disclosure is obviously
 inevitable since the page contents themselves are visible to
 eavesdroppers, but when the connection is over https, there is a
 reasonable expectation of some privacy, so we try to preserve it as much
 as possible.

Great, thanks for the explanation.


- Bil



Re: dns-prefetch

2009-07-24 Thread Bil Corry
Jean-Marc Desperrier wrote on 7/24/2009 1:09 PM: 
 The most serious attack seems to me to be that the attacker can know
 *when* exactly you read any given mail.

I hadn't thought of that, but I do now see that as a reason to turn it off 
entirely for any messaging application.  You're right, it wouldn't be too hard 
to marry wildcard DNS with specially-crafted tracking links to know when the 
user has viewed the message (which is why many messaging applications disable 
remote image fetching by default).


- Bil



Re: Content Security Policy updates

2009-07-23 Thread Bil Corry
Daniel Veditz wrote on 7/23/2009 10:32 AM: 
 Sid has updated the Content Security Policy spec to address some of the
 issues discussed here. https://wiki.mozilla.org/Security/CSP/Spec

Under "Policy Refinements with a Multiply-Specified Header" there is a 
misspelling of the header name as "X-Content-SecurityPolicy" (missing hyphen).

And that section conflicts with what is said earlier in the document, 
specifically:

"When multiple instances of the X-Content-SecurityPolicy HTTP header are 
present in an HTTP response, the intersection of the policies is enforced"

vs.

"If multiple X-Content-Security-Policy headers are present in the HTTP 
response, then the first one encountered is used and the rest are discarded."

and

"Only the first X-Content-Security-Policy Response header received by the user 
agent will be considered; any additional X-Content-Security-Policy HTTP 
Response headers in the same response will be ignored."



- Bil



Re: Content Security Policy updates

2009-07-23 Thread Bil Corry
Sid Stamm wrote on 7/23/2009 11:41 AM: 
 On 7/23/09 9:36 AM, Bil Corry wrote:
 And that section conflicts with what is said earlier in the document, 
 specifically:
 When multiple instances of the X-Content-SecurityPolicy HTTP header are 
 present in an HTTP response, the intersection of the policies is enforced
 vs.
 If multiple X-Content-Security-Policy headers are present in the HTTP 
 response, then the first one encountered is used and the rest are discarded.
 and
 Only the first X-Content-Security-Policy Response header received by the 
 user agent will be considered; any additional X-Content-Security-Policy HTTP 
 Response headers in the same response will be ignored.
 Fixed.  Multiple header instances cause the policies to be intersected.
  This is more-or-less a replacement for meta tag support, which has been
 dropped.

There's still one sentence about it lingering under "Activation and 
Enforcement" that needs to be removed.

I think the section labeled "Policy Refinements with a Multiply-Specified 
Header" would be clearer if renamed to "Policy Intersection with Multiple 
Headers" or something similar.
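The intersection behavior being settled here ("multiple header instances cause the policies to be intersected") can be sketched in a few lines. The directive names follow CSP, but the merge logic is an illustrative simplification, not the spec's exact algorithm.

```python
def intersect_policies(a, b):
    """Sketch: a source is allowed only if every supplied policy allows it.
    Directives present in only one policy pass through unchanged."""
    merged = {}
    for directive in set(a) | set(b):
        if directive in a and directive in b:
            # Present in both: keep only the sources both policies allow.
            merged[directive] = sorted(set(a[directive]) & set(b[directive]))
        else:
            merged[directive] = sorted(a[directive] if directive in a else b[directive])
    return merged
```

So two headers can only ever tighten the policy, never loosen it, which is what makes intersection safer than "first header wins" against header injection.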


- Bil




dns-prefetch

2009-07-23 Thread Bil Corry
In [1], it's mentioned that:

"Furthermore, as a security measure, prefetching of embedded link hostnames is 
not done from documents loaded over https. If you want to allow it in that 
context too, just set the preference network.dns.disablePrefetchFromHTTPS to 
false."

Can someone explain the security concerns with DNS prefetching from a HTTPS 
site?


- Bil


[1] http://bitsup.blogspot.com/2008/11/dns-prefetching-for-firefox.html



Re: dns-prefetch

2009-07-23 Thread Bil Corry
Wan-Teh Chang wrote on 7/23/2009 9:29 PM: 
 On Thu, Jul 23, 2009 at 7:10 PM, Bil Corry b...@corry.biz wrote:
 Can someone explain the security concerns with DNS prefetching from a HTTPS 
 site?
 
 The concern is privacy.  Prefetching DNS for host names referenced
 in an HTTPS page leaks some info contained in that page.

Thanks for the response.  Who is the data being leaked to?  The DNS provider?  
The adversary sniffing packets off a public hotspot?

And what information is being leaked?  The hostname(s) that are referenced on 
the HTTPS page?

I'm just trying to understand the complete risk involved.


- Bil



Re: Comments on the Content Security Policy specification

2009-07-16 Thread Bil Corry
Ian Hickson wrote on 7/16/2009 5:51 AM: 
 I think that this complexity, combined with the tendency for authors to 
 rely on features they think are solving their problems, would actually 
 lead to authors writing policy files in what would externally appear to be 
 a random fashion, changing them until their sites worked, and would then 
 assume their site is safe. This would then likely make them _less_ 
 paranoid about XSS problems, which would further increase the possibility 
 of them being attacked, with a good chance of the policy not actually 
 being effective.

I think your point that CSP may be too complex and/or too much work for some 
developers is spot on.  Even getting developers to use something as simple as 
the Secure flag for cookies on HTTPS sites is still a challenge.  And if we 
can't get developers to use the Secure flag, the chances of getting sites 
configured with CSP are daunting at best.  More to my point, getting developers 
to use *any* security feature is daunting, so any solution to a security issue 
that doesn't involve protection by default is going to lack coverage, either 
due to lack of deployment or misconfigured deployment.  And since protection 
by default (in this case) would mean broken web sites, we're left with an 
opt-in model that achieves only partial coverage.
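The Secure flag mentioned above really is a one-line ask. A sketch of the header a login cookie on an HTTPS site should carry (the cookie name "session" is a hypothetical example):

```python
def session_set_cookie(value):
    # Sketch: attributes a session cookie on an HTTPS site should carry.
    # "session" is a hypothetical cookie name; Secure keeps the cookie off
    # plain-HTTP requests, HttpOnly keeps it away from page scripts.
    return f"Set-Cookie: session={value}; Secure; HttpOnly; Path=/"
```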

At first glance, it may seem like a waste of time to implement CSP if the best 
we can achieve is only partial coverage, but instead of looking at it from the 
number of sites covered, look at it from the number of users covered.  If a 
large site such as Twitter were to implement it, that's millions of users 
protected that otherwise wouldn't be.



 I think CSP should be more consistent about what happens with multiple 
 policies. Right now, two headers will mean the second is ignored, and two 
 metas will mean the second is ignored; but a header and a meta will 
 cause the intersection to be used. Similarly, a header with both a policy 
 and a URL will cause the most restrictive mode to be used (and both 
 policies to be ignored), but a misplaced meta will cause no CSP to be 
 applied.

I agree.  There's been some discussion about removing meta support entirely 
and/or allowing multiple headers with an intersection algorithm, so depending on 
how those ideas are adopted, it makes sense to ensure consistency across the 
spec.



 I don't think UAs should advertise support for this feature in their HTTP 
 requests. Doing this for each feature doesn't scale. Also, browsers are 
 notoriously bad at claiming support accurately; since bugs will be present 
 whatever happens, servers are likely to need to do regular browser 
 sniffing anyway, even if support _is_ advertised. On the long term, all 
 browsers would support this, and during the transition period, browser 
 sniffing would be fine. (If we do add the advertisement, we can never 
 remove it, even if all browsers support it -- just like we can't remove 
 the Mozilla/4.0 part of every browser's UA string now.)

This is under discussion too; if you have an interest, here's the most recent 
thread where it's being discussed:

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/571f1495e6ccf822#anchor_1880c3647a49d3e7



- Bil



Re: Content Security Policy - Relaxed Restrictions Mode(s)

2009-07-01 Thread Bil Corry
FunkyRes wrote on 7/1/2009 5:43 AM: 
 A library of function examples that do things cross platform in a
 fully CSP compliant way would be a godsend, and IMHO preferable to
 taking the easy way out and loosening up the enforcement.

I personally use jQuery to abstract the cross-platform issues:

http://jquery.com/


- Bil



Re: Content Security Policy - final call for comments

2009-06-30 Thread Bil Corry
One option is to meet in the middle: by default the meta tag is disabled, but 
the hosting provider can enable it via the X-Content-Security-Policy header; 
that way, those willing to accept the risk can still choose to use it.

Otherwise, +1 for removing meta tag support.


- Bil


Brandon Sterne wrote on 6/30/2009 10:50 AM: 
 (copying the dev-security newsgroup)
 
 Hi Ignaz,
 
 Thanks for the feedback.  The spoofed security indicators from an
 injected CSP meta tag is a fair point and one I haven't thought of
 previously.  I'm not sure if browsers will implement such visual
 indicators for CSP because it may confuse users.  This is still a valid
 point, though, and we've struggled with the idea of meta tag policy
 from the beginning.  The idea is to enable sites which can't set headers
 to use CSP, but the reward might not be worth the risk.  In fact, Sid,
 one of the engineers implementing CSP has proposed removing this from
 the design:
 http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html
 
 If there are no major objections to doing so, it looks like you'll get
 your way :-)
 
 Cheers,
 Brandon
 
 
 ignazb wrote:
 Hello,

 I just read some of the documentation about CSP and I must say it
 looks promising. However, I think there are some flaws in the spec.
 -) I think it is a bad idea to allow the use of a meta tag for CSP
 policy-declaration. If, for example, you decided to show a symbol in
 the browser that indicates that the site is CSP secured, it would not
 be possible to tell whether the CSP policy comes from the server via a
 HTTP header or from an attacker who just injected it (unless, of
 course, you display where the CSP policy came from). So if a user
 visits a site and sees it is CSP secured (although an attacker
 inserted the tag allowing the execution of scripts from his site) she
 could decide to turn on JavaScript although the site is inherently
 unsafe.
 -) There should probably also be a way to restrict the contents of
 meta tags in a website. If, for example, an attacker inserts a meta tag
 for an HTTP redirect, he could redirect users to his own website, even
 with CSP enabled.

 -- Ignaz
 




Re: Content Security Policy discussion (link)

2009-06-26 Thread Bil Corry
Sid Stamm wrote on 6/26/2009 11:44 AM: 
 Some discussion about CSP has recently popped up on the mozilla wiki:
 https://wiki.mozilla.org/Talk:Security/CSP/Spec
 
 I'm posting the link here in case anyone interested hasn't seen it yet.
  Comments are welcomed (both here and there).

It's been brought up this morning on the WASC Web Security list too:

http://www.webappsec.org/lists/websecurity/archive/2009-06/msg00086.html


- Bil



Re: XSRF via CSP policy-uri

2009-06-23 Thread Bil Corry
Serge van den Boom wrote on 6/23/2009 3:48 PM: 
 On 2009-06-23, Bil Corry b...@corry.biz wrote:
 Serge van den Boom wrote on 6/23/2009 8:13 AM: 
 However, by injecting an X-Content-Security-Policy header with the
 policy-uri set to the vulnerable URL, the web client can be tricked into
 visiting the vulnerable URL.
 It would only work for those pages where a X-Content-Security-Policy
 header has not already been set -- additional
 X-Content-Security-Policy headers are ignored.
 
 The injected header could be the first one though, with the genuine
 header being ignored.

True, but the attacker could simply split the header and issue a redirect to 
any page they desire and skip trying to exploit CSP entirely.


 But beyond that, the proposed Link header would provide the same
 attack surface, and can not be restricted to a known URI:
 
 I was not familiar with that proposal, but skimming through it, it
 appears that these links are not resolved automatically, making this
 header less interesting for attackers. The same goes for the standard
 Content-Location header.

Section 5 indicates it's semantically equivalent to the LINK element in 
HTML -- so presumably that means the browser will retrieve a stylesheet 
specified by the header before rendering the page.


- Bil




Re: Security Question: Tabs sharing session information, etc...

2009-05-17 Thread Bil Corry
Boris Zbarsky wrote on 5/16/2009 8:21 PM: 
Why haven't browsers (such as FireFox) isolated tabs/windows from
 each other such that I cannot simply replicate a logged-in user by
 simply pasting into another FF tab?
 
 For what it's worth, some sites do in fact prevent this (not sure which
 mechanism they use), and it's incredibly painful from a user perspective
 (opening links in new windows/tabs doesn't work properly, session
 history doesn't work properly, reloading doesn't work properly, etc, etc).

I've seen it done three ways, but none of them can prevent a user from 
right-clicking a link, then choosing "Open in a new tab" and having the 
selected page load.  But from there, one of the two tabs will stop working:


(1) A site enforces a same-origin policy by using the referrer -- copying/pasting 
the current URL into a new tab means Firefox doesn't send the Referer header 
and the request is rejected by the server.  Note that right-clicking a link, 
then choosing "Open in a new tab" does send the referrer, so when done that 
way, it wouldn't be rejected.


(2) A site records the current page being viewed server-side (associated with 
the user's session), then uses it to enforce a site-flow policy.  For example, 
user is browsing on Tab 1, and can browse to Page A or Page B.  The user opens 
a new tab to the same page in Tab 2 via copy/paste.  In Tab 1, the user browses 
to Page A -- the server remembers the user is now on Page A.  Then in Tab 2, 
the user tries to browse to Page B, but because the server knows they're on 
Page A, and there isn't any way to browse to Page B from Page A, it rejects the 
request.


(3) A site employs secret link/form tokens that change on every page request.  
Think anti-XSRF secret tokens, but for all links and forms on every page.  For 
example, user requests Page A and is returned three links, all with the same 
secret token.  User then opens a second tab to the same page via copy/paste, 
but because it is a new request, the server generates a new secret token, and 
all three links on the second tab use the new secret token.  Back on the first 
tab, browsing any of the links will cause the server to reject the request 
because those secret tokens were expired when the user essentially reloaded the 
page.
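Method (3) can be sketched as follows. The class and method names are hypothetical, and a real implementation would scope tokens per session and per link/form rather than keeping a single token as this simplification does.

```python
import secrets

class PageTokens:
    """Sketch of per-request link tokens (method 3 above): every page
    render issues a fresh secret token and invalidates the previous one."""

    def __init__(self):
        self.current = None

    def render_page(self):
        # A new render rotates the token, so links on any older render die.
        self.current = secrets.token_hex(16)
        return self.current

    def follow_link(self, token):
        # The server only honors links carrying the latest token.
        return token == self.current

tokens = PageTokens()
t1 = tokens.render_page()   # tab 1 loads the page
t2 = tokens.render_page()   # tab 2 loads the same URL, rotating the token
# Tab 2's links now work; tab 1's links are rejected, as described above.
```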



- Bil



Re: Content Security Policy - final call for comments

2009-04-08 Thread Bil Corry
Gervase Markham wrote on 4/8/2009 2:07 PM: 
 On 07/04/09 18:02, Brandon Sterne wrote:
 I'm actually against making it easy for servers to detect if CSP is
 supported, because if we make it particularly easy, content authors will
 start relying on it as their only defence rather than using it as a
 backup. We don't need to check for XSS holes, we use CSP. That would
 be bad. Of course, we can't stop them putting together fragile
 User-Agent lists, but sites which do that are broken anyway, as the web
 design community has been saying for years.

It seems unlikely that responsible web developers would rely entirely on CSP, 
especially initially, since not all UAs will support it.  And if the developer 
really does choose to rely entirely on CSP, there isn't much we can do -- any 
developer with two domains can easily test if the client supports CSP, request 
header or no header.

I think the stronger likelihood is that the developer won't use CSP at all -- 
their site will still work regardless.  Providing a CSP header that can be 
measured to show it's worth the effort to learn and implement will be a much 
stronger incentive.

In summary, given the number of XSS holes out there, if the developer chooses 
to rely entirely on CSP to protect them, that's far better than not using CSP 
at all.  The biggest threat to CSP is not over-reliance, but rather 
under-utilization.


- Bil



Re: Content Security Policy - final call for comments

2009-04-07 Thread Bil Corry
Gervase Markham wrote on 4/7/2009 6:07 AM: 
 On 07/04/09 07:36, Daniel Veditz wrote:
 Maybe this does point out the need for some kind of version number in
 the header, so future browsers can take appropriate action when
 encountering an old header. For example, assuming none for any newly
 added types.
 
 I much prefer forwardly-compatible designs to version numbers.

It has to work both ways; old CSP clients need to be able to parse new CSP 
rules that are unknown to them and new CSP clients need to be able to parse old 
CSP rules.  Where it will become a challenge is anytime something implicit has 
its meaning changed (e.g. the default is x in CSPv1 and y in CSPv2).


- Bil



Re: Content Security Policy - final call for comments

2009-04-07 Thread Bil Corry
Brandon Sterne wrote on 4/7/2009 12:02 PM: 
 I looked at each of the HTTP Header Field Definitions and my preference
 for communicating the CSP version is to add a product token [1] to the
 User-Agent [2] string.  This would add only a few bytes to the U-A and
 it saves us the trouble of having to go through IETF processes of
 creating a new request header.

I agree that creating a request header for just the CSP version is overkill.  
However, I am concerned that privacy add-ons, proxies, firewalls, etc may strip 
or replace the User-Agent string.

I propose a new request header is created, but instead of one that is specific 
to CSP, it is something more generic that can be used in the future by similar 
policy frameworks.

For example:

Accept-Header: X-Content-Security-Policy version=2 securityLevel=2; 
X-Application-Boundaries-Enforcer type=browser 


FWIW, X-Application-Boundaries-Enforcer refers to ABE: 
http://hackademix.net/2008/12/20/introducing-abe/

I originally came up with Accept-Header during a conversation about revising 
the Cookie specification; it would alert the server that the client understood 
version 3 of cookies:

Accept-Header: Set-Cookie version=3

So it does have a variety of uses that may make it worth the effort to register 
and define.
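
For illustration, a rough parser for the value syntax used in the examples 
above -- keeping in mind that Accept-Header is only a proposal here, not a 
registered header, and the exact grammar is assumed from those examples 
(semicolon-separated policy names, each followed by space-separated 
key=value parameters):

```python
# Sketch of parsing the proposed (non-standard) Accept-Header value.

def parse_accept_header(value: str) -> dict:
    """Return {policy_name: {param: value, ...}, ...}."""
    policies = {}
    for part in value.split(";"):
        tokens = part.split()
        if not tokens:
            continue
        name, params = tokens[0], tokens[1:]
        policies[name] = dict(p.split("=", 1) for p in params)
    return policies

hdr = ("X-Content-Security-Policy version=2 securityLevel=2; "
       "X-Application-Boundaries-Enforcer type=browser")
parsed = parse_accept_header(hdr)
# e.g. parsed["X-Content-Security-Policy"]["version"] == "2"
```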


- Bil




Re: Content Security Policy feedback

2009-01-12 Thread Bil Corry
Sid Stamm wrote on 1/12/2009 12:52 PM: 
 Or do we want phone-home features for CSP so the browser will
 automatically tell a site when its policy is violated?

It already has this feature, see #6:

http://people.mozilla.org/~bsterne/content-security-policy/details.html


- Bil



Re: Firefox extensions updated over plain HTTP (not HTTPS)

2009-01-04 Thread Bil Corry
Justin Dolske wrote on 1/4/2009 9:48 PM: 
 The update check, which happens over SSL, includes a hash in the reply.
 When the update is then downloaded (without SSL), the data is checked
 against the hash from the update check. If the data was tampered with,
 the hash won't match and the bad update won't be applied.

Which hash algorithm is used?


- Bil



Re: HTTPOnly cookies specification

2008-12-12 Thread Bil Corry
Gervase Markham wrote on 12/12/2008 11:23 AM: 
 Bil Corry wrote:
 There's a group of us working on creating a spec for HTTPOnly cookies. 
 
 This isn't being done by WHAT-WG, then?
 
 If you have an active interest in participating, our list is here:

  http://groups.google.com/group/ietf-httponly-wg
 
 Is this an official IETF group? It seems odd that its list is not on the
 IETF mailing list server.

We're not officially affiliated with any group, although the plan is to move it 
to IETF or work with Yngve to add it to his cookie draft.  If you're willing to 
get us added officially to a group, we'd love the help.


- Bil



Re: Content Security Policy feedback

2008-12-03 Thread Bil Corry
Gervase Markham wrote on 12/3/2008 4:56 PM: 
 bsterne wrote:
 I think what Lucas is saying is that servers won't send policy to
 clients who don't announce that they support CSP.
 
 To save 60 bytes in a header?

No, so that in the event CSPv2 is incompatible with CSPv1, it won't require two 
response headers to be sent to every client.  Instead, since the browser tells 
the server which version of CSP it's accepting, the server can send back the 
CSP header in the most recent format that both the client and server understand 
(e.g. server knows CSPv2, client knows CSPv3, server sends CSPv2 header).
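
The negotiation described above reduces to picking the newest version both 
sides support; as a trivial sketch:

```python
# Sketch of the version negotiation above: the server replies with the
# most recent CSP format that both client and server understand.

def negotiate_csp_version(client_version: int, server_version: int) -> int:
    return min(client_version, server_version)

# server knows CSPv2, client knows CSPv3 -> server sends a CSPv2 header
negotiate_csp_version(3, 2)   # -> 2
```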


- Bil



Re: Content Security Policy feedback

2008-12-01 Thread Bil Corry
On Nov 22, 2:03 pm, Lucas Adamski [EMAIL PROTECTED] wrote:
 Yes, my understanding is that Access Control is actually intended as a
 generic cross-site server policy mechanism, and XHR is just its first
 implementation.

Anne confirmed that it's not intended to be XHR-only, however it's not
intended for all types of requests either.  He specifically said it
would not work for iframe due to cross-site scripting issues.


- Bil


Re: Content Security Policy feedback

2008-11-21 Thread Bil Corry
On Nov 20, 4:37 pm, bsterne [EMAIL PROTECTED] wrote:
 On Nov 17, 2:19 pm, Bil Corry [EMAIL PROTECTED] wrote:

  (1) Something that appears to be missing from the spec is a way for
  the browser to advertise to the server that it will support Content
  Security Policy, possibly with the CSP version.  By having the browser
  send an additional header, it allows the server to make decisions
  about the browser, such as limiting access to certain resources,
  denying access, redirecting to an alternate site that tries to
  mitigate using other techniques, etc.  Without the browser advertising
  if it will follow the CSP directives, one would have to test for
  browser compliance, much like how tests are done now for cookie and
  JavaScript support (maybe that isn't a bad thing?).

 This isn't a bad idea, as I have seen this sort of compatibility
 level used successfully elsewhere.  If future changes are made to the
 model which would define restrictions for new types of content (e.g.
 video), or which would affect the default behaviors for how content
 is allowed to load, then it will be useful to servers to have their
 clients' CSP version information.  If we are going to add this to the
 model, then we should do so from the beginning to avoid the
 potentially messy browser compliance testing that would result after
 the first set of changes.

I personally see value there for the website, but if 99.9% of websites
will never do anything with the header, then it probably isn't
worthwhile (or it may take version 2 before the need is evident).  The
big challenge here is making sure the CSP announcement header can not
be spoofed via XHR, so to that end, I'd recommend prefixing the header
name with Sec- such as Sec-Content-Security-Policy -- the latest
draft of XHR2 specifies that any header beginning with Sec- is not
allowed to be overwritten with setRequestHeader():

http://www.w3.org/TR/XMLHttpRequest2/#setrequestheader

Of course, XHR2 would have to be implemented in the browsers first in
order to take advantage of the requirement.
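
Under the XHR2 rule linked above, the browser-side check for settable header 
names might be sketched roughly as follows (simplified; real implementations 
enforce a longer forbidden list, and "Proxy-" is the other forbidden prefix 
in that draft):

```python
# Simplified sketch of the XHR2 setRequestHeader() prefix restriction:
# script may not set headers whose names begin with "Sec-" or "Proxy-",
# which is what makes a Sec- prefixed CSP announcement header unspoofable
# from XHR.
FORBIDDEN_PREFIXES = ("sec-", "proxy-")

def set_request_header_allowed(name: str) -> bool:
    return not name.lower().startswith(FORBIDDEN_PREFIXES)

set_request_header_allowed("Sec-Content-Security-Policy")   # blocked
set_request_header_allowed("X-Requested-With")              # allowed
```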


  (2) Currently the spec allows/denies based on the host name, it might
  be worthwhile to allow limiting it to a specific path as well.  For
  example, say you use Google's custom search engine, one way to
  implement it is to use a script that sits on www.google.com
  (e.g. http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
  By having an allowed path, you could prevent loading other scripts
  from the www.google.com domain.

 I don't have a strong opinion on this one.  My initial reaction is
 that it adds complexity to the model, but perhaps complexity that's
 warranted if people feel it's a useful feature.  Do you have some
 specific use cases to share which would demonstrate the usefulness of
 your suggestion?

I don't have a specific use case, I'm thinking more of the edge cases
where content is allowed from a domain that allows a multitude of
third-party content.  Maybe this is something to explore for v2 if
warranted.


  (3) Currently the spec focuses on the host items -- has any thought
  be given to allowing CSP to extend to sites being referenced by host
  items?  That is, allowing a site to specify that it can't be embedded
  on another site via frame or object, etc?  I imagine it would be
  similar to the Access Control for XS-XHR[2].

 I would agree with Gerv, that this feels a bit out of scope for this
 particular proposal.

Then maybe something to consider down the road.  It would be useful to
prevent hot linking and clickjacking.

- Bil