dev-security list is coming to an end

2021-03-24 Thread Daniel Veditz
Mozilla's "mailman" servers, and the crazy mail/NNTP/WebForums mirroring,
are being decommissioned soon, and this list will no longer work. Most such
lists are being migrated to https://discourse.mozilla.org for web-based
forums, but since security discussions are already underway there, this
list is not being migrated as-is.

To continue Mozilla-related security discussions, please carry on at

"web" forums:
https://discourse.mozilla.org/c/security/
https://discourse.mozilla.org/tags/c/firefox-development/privacy-and-security

"chat" on Matrix
https://chat.mozilla.org/#/room/#security:mozilla.org
(there are also dedicated desktop and mobile Matrix clients that will work)

Thank you,
-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Support for EdDSA

2021-03-24 Thread Daniel Veditz
For NSS-specific development questions you should try the
dev-tech-crypto mailing list, or the Matrix chat at
https://chat.mozilla.org/#/room/#nss:mozilla.org

-Dan Veditz

On Thu, Mar 18, 2021 at 4:50 AM Rishabh Kumar via dev-security <
dev-security@lists.mozilla.org> wrote:

> Hello All,
>
> I am working on a GSoC proposal for Libreswan in which the authentication
> mechanism is extended to support EdDSA. Since Libreswan depends on NSS for
> its algorithm implementations, it is necessary that NSS support EdDSA.
>
> Does anybody have any idea about the plans for adding EdDSA to the NSS
> library? I would be happy to contribute if you can describe the
> requirements for implementing EdDSA.
>
> Regards,
>
>
> --
> Rishabh Kumar
> M.Tech(RA)
> Department of Computer Science and Technology
> Indian Institute of Technology, Hyderabad
> E-mail ID:- cs19mtech11...@iith.ac.in
>
> --
>
>
> Disclaimer:- This footer text is to convey that this email is sent by one
> of the users of IITH. So, do not mark it as SPAM.
> ___
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
>
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Fwd: Misissued certificates - pathLenConstraint with CA:FALSE

2017-08-10 Thread Daniel Veditz via dev-security-policy
Forwarding to the right (cert-related) group


 Forwarded Message 
Subject: Misissued certificates - pathLenConstraint with CA:FALSE
Date: Wed, 9 Aug 2017 19:25:31 -0400
From: Alex Gaynor 
To: helpd...@identrust.com, dev-secur...@lists.mozilla.org


Hi,

The following certificates appear to be misissued:

https://crt.sh/?id=77893170&opt=cablint
https://crt.sh/?id=77947625&opt=cablint
https://crt.sh/?id=78102129&opt=cablint
https://crt.sh/?id=92235995&opt=cablint
https://crt.sh/?id=92235998&opt=cablint

All of these certificates have a pathLenConstraint value with CA:FALSE,
which violates Section 4.2.1.9 of RFC 5280: "CAs MUST NOT include the
pathLenConstraint field unless the cA boolean is asserted and the key
usage extension asserts the keyCertSign bit."

Alex

-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: D1B3 ADC0 E023 8CA6
___
dev-security mailing list
dev-secur...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: StartEncrypt considered harmful today

2016-06-30 Thread Daniel Veditz
On 6/30/16 8:30 AM, Rob Stradling wrote:
> https://www.computest.nl/blog/startencrypt-considered-harmful-today/
> 
> Eddy, is this report correct?  Are you planning to post a public
> incident report?

Does StartCom honor CAA?

Does StartCom publish to CT logs?

How many mis-issued certs were obtained by the researchers? Has there
been an investigation to see if there were similarly mis-issued certs
prior to this report?

Have those certs been revoked?

-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Tightening up after the Lenovo and Comodo MITM certificates.

2015-02-24 Thread Daniel Veditz
On 2/23/15 3:55 PM, Richard Barnes wrote:
 If I understand correctly (dveditz CC'ed to correct me), the current add-on
 signing tool has a provision for signing add-ons that are not published
 through AMO.  They still need to be submitted to AMO to be scanned and
 signed, but they're not published.

Yes.

 I think the benefit here would be more transparency than quality.  If we
 only allowed changes to the root store by signed add-ons, then (1) Mozilla
 would at least have internal visibility into all the MitM roots being
 deployed, and (2) we could use the add-on blacklist facility to block
 things like Superfish once they were detected.  These both seem beneficial
 in terms of mitigating risk due to MitM.

I don't think we can restrict it to add-ons since external programs like
Superfish (and the Lenovo removal tool, for that matter) write directly
into the NSS profile database. It would be a bunch of work for precisely
zero win.

Could we make the real and only root accepted by Firefox be a Mozilla
root, which cross-signs all the built-in NSS roots as well as any
corporate roots submitted via this kind of program? I thought pkix gave
us those kinds of abilities.

Or we could reject any added root that wasn't logged in CT, and then put
a scanner on the logs looking for self-signed CA=true certs. Of course
that puts the logs in the crosshairs for spam and DoS attacks.

 However, it could be challenging to implement this control.  In addition to
 the in-browser UI for adding roots (which could easily be disabled), certs
 can also be added to the NSS databases directly, even while the browser
 isn't active.  To counter this risk, we would have to periodically snapshot
 the database and check that nothing else had changed it.

We have this problem with add-ons which can also be added directly while
Firefox isn't running. Where do you store the snapshot such that the
injector can't just tweak it while injecting?

 if we can reinforce the idea that addons are the way to install roots by
 simply turning off the UI that exists, that could be beneficial.

Would it? I haven't heard of any widespread problems with people fooling
others into installing a root via the UI. Meanwhile it would do zero to
stop the actual way unwanted roots get added.

 (Also, this is more of a Firefox discussion than a CA program discussion,
 so it might be more appropriate for dev.tech.crypto.)

It's not a technical crypto issue either; I suggest something more
general like dev-security or dev-firefox.

-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: New wiki page on certificate revocation plans

2014-08-07 Thread Daniel Veditz
On 8/4/2014 10:16 AM, Erwann Abalea wrote:
 I imagine you have access to more detailed information (OCSP URL,
 date/time, user location, ...), could some of it be open?

All of our telemetry data is open as far as I know. Because of privacy
concerns we only collect aggregate stats from users, nothing as specific
as URLs. Here's an example of the kind of data Patrick was using:

http://telemetry.mozilla.org/#filter=release%2F31%2FCERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME&aggregates=multiselect-all!Submissions!Mean!5th%20percentile!25th%20percentile!median!75th%20percentile!95th%20percentile&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph

(or http://mzl.la/1qXOu8u )

Feel free to play with the many options for release and type of data
gathered.

You can look at your own local data at about:telemetry (the
cert_validation items are in the Histograms section).

-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Convergence.

2014-04-15 Thread Daniel Veditz

On 4/15/2014 7:43 AM, nobody wrote:

I just wondered... what is the holdup regarding putting Convergence into
web browsers by default?


The main issue is who are the notaries? If they're simply reflecting 
back "Yup, I see this valid CA cert" then they aren't adding a whole lot 
of value for the amount of risk they introduce, and if they're making 
their own judgement about the validity of the certificates on some other 
ground they just become a type of Certificate Authority themselves. Who 
pays for that infrastructure, and what is their motive?


Firefox and Chrome are both working on implementing key pinning (and 
participating in the standardization process for it), which won't free 
us from the CA system but will at least ameliorate one of its worst 
aspects: that any two-bit CA anywhere in the world can issue a 
certificate for any site, anywhere.


The IETF is working on standardizing Certificate Transparency, Chrome 
is implementing it, and at least one CA is participating. This again 
doesn't free us from the CA system, but it does make the public 
certificates auditable so that mis-issuance could theoretically be detected.



Or I hack the router you
 use to access the internet... all of the notaries you try to talk to I
redirect to me. I say every site is
 valid regardless of whether it is or not. How is this more secure?


I haven't looked at the technical details of Convergence, but presumably 
it requires a secure connection to the notary or, better, that the notary 
responses are signed by the notary. If the communication with the notary 
is unreliable then it's no help at all.


The main practical problems with convergence are that it introduces a 
dependency on traffic to a 3rd party which hurts privacy, reliability, 
and performance. These are similar to the problems we have today with 
OCSP revocation checking.


-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Convergence.

2014-04-15 Thread Daniel Veditz

On 4/15/2014 6:16 PM, Man Ho (Certizen) wrote:


On 4/16/2014 12:08 AM, Daniel Veditz wrote:

The main practical problems with convergence are that it introduces a
dependency on traffic to a 3rd party which hurts privacy, reliability,
and performance.

The same problem applies to Certificate Transparency too, but not to
OCSP revocation checking.


OCSP as found in the wild is terrible on all three points, which is why 
Google Chrome is dropping support. It works most of the time, except 
when you have an attacker in position to perform a MITM attack, at which 
point the attacker can block the OCSP request (reliability). As you surf 
around the web you are telling the CAs (by pinging their OCSP responder) 
that you exist and are visiting a particular site (privacy). If browsers 
block page load waiting for the OCSP responder then you've just slowed 
down the loading of your site (performance), and if they don't then they 
allow you to connect to a bad site and then retroactively try to tell 
you it's bad (reliability again).


OCSP stapling resolves most of these issues but it's not broadly used.

Certificate Transparency essentially requires stapling (or an equivalent 
mechanism) so that there's no need to make an inline request to a log.

http://www.certificate-transparency.org/faq#TOC-What-is-an-SCT-

-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Scope of Dev-Security List

2013-10-08 Thread Daniel Veditz
On 10/5/2013 4:00 PM, farr...@furcadia.com wrote:
 Well, I was very disappointed not to find any discussion here of the
 issues and challenges that W3C's decision on DRM for HTML5 would
 bring for security.
 
 I sort of expected people to be all over that here, when I saw the
 group name.

The issues around it are not really security issues; it's a policy and
philosophical argument. There will no doubt be unintentional security
bugs in the external DRM plugins--as there are with just about every new
feature we do--but the concern with DRM is that, as the EFF (or someone
like that) puts it, it is "broken by design."

The policy issues go well beyond the limited audience of this security
list and are being discussed elsewhere. I'd imagine either the web
platform group or the governance group (or both).

 DRM in HTML5 - *if* the decision is made to back it - almost
 certainly means:
 - The end of verifiably-secure, open-source FF.
 - The end of FF as part of security projects like TOR, TAILS, etc.,
   since if FF uses closed binary blobs then it cannot be trusted.

Firefox will never require the use of closed binary blobs. Since the
code is licensed to be compatible with GPL it simply must have the
ability to function without such things. I expect that if we implement
this it would be functionally much like plugins are today -- optional,
but without it you lose access to content that requires it.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Restricting privileged internal pages from chrome or about URIs with Content Security Policy

2013-09-19 Thread Daniel Veditz

On 9/17/2013 9:38 AM, Frederik Braun wrote:

There were and probably will be XSS bugs in some of parts of our browser
part that is heavily using HTML and JavaScript.


There have been since the beginning of Firefox. Chrome XSS bugs are about 
the worst kind because attackers don't have to mess with shellcode and 
the exploits are always 100% reliable, unlike the typical memory 
corruption exploit.



The only question that remains is: how hard is it to apply a CSP to
non-HTTP documents and XUL documents (like about:newtab)?


At the moment, hard; trivial once we support the CSP 1.1 meta tag 
feature. Well, actually, adding the CSP policies isn't going to be the 
hard part; fixing up all the pages will take a lot of work.


It'd be safer to automatically impose a policy but that would break so 
many add-ons that it would take great political will to make that kind 
of change even if we let add-ons opt-out of the imposition.


-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Request for feedback on crypto privacy protections of geolocation data

2013-09-10 Thread Daniel Veditz
On 9/10/2013 3:46 AM, Gervase Markham wrote:
 On 10/09/13 00:25, R. Jason Cronk wrote:
 Does this give Mozilla the
 ability to historically track me if I move my device? 
 
 Yes; this is why publishing the full raw stumbled data sets is sadly
 going to be not possible.

Why would we have two locations for the same AP? In fact, given the
schema Chris outlined (1:1 mapping H(Mac+SSID) = location) I don't see
how we even could.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Request for feedback on crypto privacy protections of geolocation data

2013-09-10 Thread Daniel Veditz
On 9/10/2013 10:09 AM, Hanno Schlichting wrote:
 As of this moment, we filter out any AP that has been detected in two
 different places (where different means more than ~1km away from each
 other). This is very conservative approach and we'll relax that
 later.

What do you mean by "filtered out"? How are you tracking that it's now
been seen in multiple locations? Given the simple storage schema at the
top of the thread your choices seem limited to a) ignore the new
location info, or b) throw out the old location info. a) means no one
can ever move, and b) means the next time you see the new location that
becomes the location... over and over as it moves around.

That can't be right, so your database must be more complex. If you're
storing more than originally implied that may have some impact on a
security assessment.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Security concerns for a browser support API

2013-09-07 Thread Daniel Veditz
On 9/4/2013 11:05 AM, Monica Chew wrote:
 Looking through http://www.mozilla.org/en-US/firefox/releases/ I see
 several minor release fixes that involve flipping a pref (like
 18.0.1). How much easier would our lives be if we didn't have to
 chemspill for preference changes?

We don't have to chemspill to flip a pref; that's what the hotfix
add-on mechanism is for. 18.0.1 involved code-level changes, too:
https://hg.mozilla.org/releases/mozilla-release/pushloghtml?fromchange=8efe34fa2289&tochange=eecd28b7ba09

 Given that many users accept updates very slowly, if at all
 (http://en.wikipedia.org/wiki/Template:Firefox_usage_share) it makes
 sense to have a generic preference API and eliminate as many of these
 minor releases, some of which could be security related, as possible.
 Having a browser support API for preferences probably makes Firefox
 more secure, rather than less secure.

I can't think of a single in-the-wild attack on Firefox that could have
been fixed by flipping a pref, unless you count disabling JavaScript
entirely, which people wouldn't accept.

Few of the minor releases have had security fixes; most recently they have
been due to stability or incompatibility (web breakage) issues. Since
the first ESR release a third of minor updates have contained security
fixes, or 1 out of 7 since the last ESR:

* 10.0.1
* 10.0.2
  13.0.1
  14.0.1
  15.0.1
* 16.0.1
* 16.0.2
  17.0.1
  18.0.1
  18.0.2
  19.0.1
* 19.0.2  (Pwn2Own)
  20.0.1
  23.0.1

There's no API you could come up with that would have fixed those
(unless your API is an updater, which we have).

 To Kevin's point about hardcoding known bad add-ons, I don't think
 that works given the length of the release cycle (up to 4 months to
 get from Nightly to stable), and that would be much less responsive
 than the blocklist ping already offers.

I agree, even granting that a list of known bad addons is likely to be
considered safe enough to be uplifted to Beta right away.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: [webdev] Why not CORS:*?

2013-08-27 Thread Daniel Veditz
On 8/26/2013 6:27 PM, Matt Basta wrote:
 It should be noted that CORSing without discretion can render CSRF 
 protection completely useless.

"Without discretion" is echoing back whatever you find in the Origin:
header and adding Access-Control-Allow-Credentials: true indiscriminately.

Access-Control-Allow-Origin: * is a special case that cannot be combined
with Access-Control-Allow-Credentials--browsers will ignore the
allow-credentials even if your site adds it. Attack sites can still make
all the simple (GET, POST) requests that were possible before CORS was
invented, so if your site has a CSRF problem under those circumstances
then you have no CSRF protection at all. With a * response a foreign
site isn't allowed to read the responses or make non-simple requests
unless it has explicitly dropped credentials.
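
To sketch the difference (illustrative Python, not tied to any
particular web framework):

    def public_cors_headers():
        # The "*" special case: browsers refuse to combine it with
        # credentials, so authenticated responses stay unreadable
        # cross-origin even if allow-credentials is also sent.
        return [("Access-Control-Allow-Origin", "*")]

    def indiscriminate_cors_headers(request_origin: str):
        # The "without discretion" pattern: echo any Origin and allow
        # credentials, letting arbitrary sites read authenticated
        # responses -- this is what renders CSRF protection useless.
        return [
            ("Access-Control-Allow-Origin", request_origin),
            ("Access-Control-Allow-Credentials", "true"),
        ]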

 If your site adds the Access-Control-Allow-Credentials header, 
 malicious sites can detect whether a user is logged in,

Attackers can generally tell whether users are logged in with or without
CORS via timing attacks.

 For APIs, though, this generally isn't an issue (and who uses cookies
 with their API anyway?)

https://wiki.mozilla.org/Bugzilla:REST_API for one.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: [webdev] Why not CORS:*?

2013-08-26 Thread Daniel Veditz
Not sure why you've cc'd our vulnerability reporting address; did you
mean dev-security@lists.mozilla.org instead?

On 8/26/2013 2:10 PM, Peter Bengtsson wrote:
 So, when you know that your URL does not potentially trigger any
 sensitive changes without the user being explicitly aware of it, then * it.
 
 I like the simplicity of that.

CORS: * is always safe for a public site, or at least as safe as your
application is for users of pre-CORS browsers. (maybe not so great for
intranet sites.)




___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: [webdev] Why not CORS:*?

2013-08-26 Thread Daniel Veditz
On 8/26/2013 5:52 PM, Daniel Veditz wrote:
 CORS: * is always safe for a public site, or at least as safe as your
 application is for users of pre-CORS browsers. (maybe not so great for
 intranet sites.)

Meant to include a link to the authoritative blog on the subject:
http://annevankesteren.nl/2012/12/cors-101

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: War on Mixed Content - Why?

2013-08-15 Thread Daniel Veditz
On 8/15/2013 11:21 AM, ianG wrote:
 On 15/08/13 13:22 PM, Mikko Rantalainen wrote:
 Why not issue a new signature every day and be done with
 broken revocation lists?)
 
 You'll upset people if you start talking like that :)

Not really; it's a serious proposal for dealing with the revocation problem:

http://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-shortlived.html
http://www.ietf.org/mail-archive/web/pkix/current/msg30348.html

Rivest proposed short-lived certs as a way to get rid of CRLs back in
1998, but proposed that it was up to the acceptor of the cert to
decide how fresh was good enough, not the CA:
http://people.csail.mit.edu/rivest/pubs/Riv98b.prepub.pdf

It's even been discussed some at the CA/Browser Forum.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Mixed-content XHR Websockets

2013-07-26 Thread Daniel Veditz
On 7/23/2013 6:34 AM, Nicholas Wilson wrote:
 I think having uniformity here is clearly helpful. I do recognise that
 the WebSocket API spec requires mixed-content connections to be
 blocked, but there might still be room for discussion on the benefits
 of it, especially while you're adjusting the model for XHR at the same
 time.
 
 Without this change, we're going to have to release our product with
 long polling in Firefox just because we can't create a WebSocket!

Uniformity is indeed important. Are you implying that some other browser
is NOT blocking mixed-content WebSockets? Why is it only Firefox where
you have to do long polling?

If so we can take that information back to the standards body and
discuss changing the spec (which is probably where this conversation
should happen).

 The second request is a bigger discussion: I think we need a fuller,
 proper way to allow mixed-content XHR/WebSocket. Not all mixed-content
 requests, just some. I recognise the value that browsers provide to
 website developers flagging up when their site is misconfigured, and
 we want these warnings to be on by default.

I do want to recognize this as a separate point and don't want it to get
buried. Leaving WebSockets aside, is it appropriate to treat data
connections as "active" content rather than "passive" content? What do
IE and Chrome (which both block mixed active content) do with XHR?

 I tentatively suggest a new header: Access-Control-Security:
 externally-verifiable (or anything similar).

That's worth considering.

 On a related note, in #662692, Brian Smith said "IMO, the SSH example
 is better off using mozTCPSocket". I'm not sure about that; while it's
 clearly great for chrome-context code or browser extensions, I still
 don't see how Mozilla could open this up to the webpages more
 generally.

I don't either. AFAIK the mozTCPSocket API is going to be restricted to
chrome-privileged add-ons and Privileged Firefox OS apps. It's not a
solution for web pages, WebSockets was supposed to be that solution.

-Dan Veditz



___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Firefox behavior with CDPs and AIAs

2013-04-12 Thread Daniel Veditz

On 4/11/2013 5:12 PM, Camilo Viecco wrote:

It is possible (but not supported) to have FF download the CRLs specified
by the certificate.

There are (of course) many caveats:


Which is why we don't support it.


6. There will be a non-trivial performance hit (especially network-based)
   as some CRLs are 500k and these entries are not cached across sessions
   (no persistent cache). This might not be an issue if you have good
   network connections (no mobile).


Yes, the biggie: even though CRLs are valid for quite a while we don't 
cache them across restarts. Maybe not so bad if you never shut down your 
browser.



  bool pref: security.use_libpkix_verification: true  // enables alt verification lib


Yes, note that CRL download support is only available as part of the 
not-yet-supported libpkix verification path. It's quite a bit bigger 
than just CRL downloads: it uses a completely different library to 
verify certificates. Libpkix is not entirely untested: Firefox uses it 
for EV certs, and Chrome uses it for everything. But last time I looked 
into it (months ago) there were bugs that were deemed bad enough that we 
weren't ready to turn it on in Firefox.



  bool pref: security.fresh_revocation_info.require: true  // revocation info mandatory in libpkix only


How does this interact with the security.ocsp.require pref? Do they 
conflict? Play well together? Or simply unrelated, one applying to the 
old path and one to libpkix?


-Dan Veditz

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Firefox behavior with CDPs and AIAs

2013-04-11 Thread Daniel Veditz

On 4/11/2013 1:26 PM, Rick Andrews wrote:

Sid Stamm suggested dev.security...


-Original Message-
From: Ian Melven [mailto:imel...@mozilla.com]

you might also try asking this on mozilla.dev.tech.crypto :)


Sid was wrong :-) The guys who know the technical guts of our crypto 
implementation are over in m.d.tech.crypto.


AFAIK we do not download CRLs based on certs, but will update CRLs the 
user has manually specified. We've talked about improving CRL handling 
as part of a comprehensive reform of revocation checking but have yet to 
solve the performance and space requirements of CRLs.


We do support OCSP and are in the process of adding support for OCSP 
stapling to improve performance, security, and privacy. Lack of an OCSP 
response is not fatal, however, because in general OCSP has not been 
reliable enough for that. Cautious users can change the Mozilla 
pref security.OCSP.require to true if they wish the lack of an OCSP 
response to be fatal.


For anything more detailed (timelines, bug numbers) you'll need to go 
bug the .tech.crypto folks.


-Dan Veditz

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Content Type Dependencies

2013-04-09 Thread Daniel Veditz

On 4/5/2013 5:26 PM, jeremy.ral...@gmx.ch wrote:

a comment in nsIContentPolicy.idl says

/* When adding new content types, [...]

My first thought was that it should be enough to look for all files
that #include nsIContentPolicy.h and rebuild the specific subtrees,


Note that nsIContentPolicy is a public contract and that lots of add-ons 
use it -- you can never rebuild enough to know you haven't broken 
anything. If Gecko starts doing some new behavior and we add a type it's 
less likely to break these other uses, but if you create a new type to 
cover part of an existing category you're more likely to create problems.


Not that it can't be done--we do it every once in a while--but you will 
need to reach out and announce the impending changes more broadly than 
the typical isolated new feature.


-Dan Veditz

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: ssl error

2012-08-03 Thread Daniel Veditz
I don't get a cert error when following either of those links.
According to DNS the firebug blog is at the same address as the
website, but I get a completely different (and correct) cert when I
go to the blog.

Are you still seeing the problem? Both of the sites you list are in
our PHX data center and that's been having some problems lately.
Maybe it was transient? If you are still seeing the problem what IP
addresses do you get for the sites? I get 63.245.217.58 and
63.245.217.86

-Dan Veditz

On 8/1/12 5:22 PM, Gus Richter wrote:
 Firefox advises to "Get me out of here!" regarding many secure
 (https) Mozilla web pages (Blogs only?), further advising that it
 could mean that someone is trying to impersonate the site, that you
 shouldn't continue, and that an invalid security certificate is used.
 
 In this last case,
 https://blog.getfirebug.com/2012/07/13/firebug-1-10-0/  this
 *warning* is reported:
 
   The certificate is only valid for the following names:
 www.getfirebug.com , getfirebug.com
 
   (Error code: ssl_error_bad_cert_domain)
 
 You would think that someone would have noticed this before (in this
 and many other cases) and corrected the situation.
 
 https://blog.lizardwrangler.com/2012/07/27/importance-of-real-time/
 is another example.
 
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: IDN TLD whitelist, and .com

2012-07-05 Thread Daniel Veditz
On 7/5/12 1:37 AM, Gervase Markham wrote:
 Recently, it was decided that a whitelist was not scalable in the face
 of hundreds of new TLDs, and that we had to come up with a new approach.
 We did, based on some suggestions from the Unicode Consortium:
 
 https://wiki.mozilla.org/IDN_Display_Algorithm

Big thanks to you and Simon Montagu for driving this forward!

Given that the new criteria are not as strict as our old policy, why
would we want to preserve the old whitelist system in parallel? The
big flaw in the whitelist policy was that a registrar's policy
applied to the domain labels directly issued by that registrar but
not to any sub-domains created by the domain owner. Those
sub-domains could be as many levels deep and as spoofy as they'd
like. The new algorithm, in contrast, applies to each label
separately and will prevent spoofy sub-domains.

If the stated policies of the currently whitelisted TLDs fall within
the new algorithm let's just scrap the whitelist. Even if some small
percentage of edge-case domains end up being flipped to punycode the
code and policy simplification on our end will be worth it.

If there were any such edge-case domains would they be shown as IDN
in any of the other browsers (besides Opera who uses the same
whitelist mechanism)?

 Now, they have applied (for .com, .net and .name), and
 their current policies do meet the new criteria:
 https://bugzilla.mozilla.org/show_bug.cgi?id=770877

What's the time-frame on the new IDN algorithm? Sounds relatively
close so why not let them just start working when that lands instead
of whitelisting them?

 However, given that it was a .com domain which started all this fuss, I
 thought it was worth posting publicly in case anyone had any comments.

Have they revoked all the previously spoofing domains? Have they
audited all their existing domains to make sure there aren't
additional ones in there that violate their new rules? What is their
transition plan for the domains that do exist?

Their new rules going forward sound fine, it's any grand-fathered
mess I'm worried about. I'm especially worried if you proceed with
your currently stated plan of preserving the whitelist even after
the new algorithm lands.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: New MITM cert incident - Cyberoam

2012-07-04 Thread Daniel Veditz
On 7/4/12 10:34 AM, John Nagle wrote:
   A CA called Cyberoam appears to have issued a wildcard cert to
 enable MITM attacks for deep packet inspection [...]
 
   They're not a CA trusted by Mozilla, apparently.

They're not a CA. Businesses wishing to use the Cyberoam devices
need to install the Cyberoam self-issued CA-cert on each computer on
the network. Enterprises could either push the cert to everyone if
they have that kind of tool, or require that workers voluntarily
install it themselves (because otherwise you aren't able to reach
the internet).

If we implement cert pinning we'll either have to allow that kind of
business to disable it, or write off our users who work for
companies with that kind of control freakery. It's more common than
you'd think, some of our own Mozilla community members work for
companies with that kind of policy.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Implications of new TLDs

2012-06-25 Thread Daniel Veditz
On 6/22/12 2:39 AM, Gervase Markham wrote:
 I suspect few businesses use existing real TLDs as their
 external subdomains. However, I also suspect quite a few use
 future TLDs! [...] And I bet a load use .corp (6
 applicants) and .inc (11 applicants).

We've already seen confusion with some organizations using .int
(internal) and the gTLD .int (intergovernmental) leading to
inappropriate SSL certificates being issued.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


test dev-security mail

2012-05-16 Thread Daniel Veditz
Mailman was down yesterday and some of the lists didn't come back.
If you're seeing this mail then dev-security is working.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Fixing SSL quickly (not), or why certs need business identity data.

2012-04-04 Thread Daniel Veditz
On 3/30/12 6:47 AM, ianG wrote:
 I've been asking them for years to add the CA's name to the chrome. 
 But they still don't.  Thus totally mangling any concept of who is
 saying what to whom when why whether.

Most of the time I think that'd be a good idea, too. We sort of have
it, but not at a glance -- you have to hover over the identity
button or actually click it to see the CA.

If we did, though, which CA do we use? There's no space to show the
whole chain, so our choices are the immediate EE issuer (which could
be faked if some other CA were badly compromised) or the root, which
may not be all that useful. Of the two I'd pick the root as most
reliable.

It wouldn't be that meaningful to most users (what is it? am I
expected to remember what it says on each site? should I worry if it
changes? sometimes sites do change legitimately so how do I tell?)
which is why our UX designers have kept it out of sight.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Fixing SSL quickly (not), or why certs need business identity data.

2012-04-04 Thread Daniel Veditz
On 3/30/12 4:20 PM, Kevin Chadwick wrote:
 Perhaps even a Mozilla CA, a Google CA, and a Microsoft CA with an
 attacker having to compromise all three to be successful.

Kai proposed something somewhat along those lines, with the browser
vendors (possibly among others) being Vouching Authorities (VA).

   Mutually Endorsing CA Infrastructure
   https://kuix.de/mecai/

A better forum for CA alternative discussions would be the IETF's
therightkey mailing list. Mozilla can't create an alternative on
its own, it will take cooperation and coordination with lots of others.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Man-in-the-browser malware

2012-02-22 Thread Daniel Veditz
On 2/22/12 10:47 AM, Florian Weimer wrote:
 Another number of fascination: a good set of false Id costs o(1000).
 
 I find it hard to believe that false passports and driving licenses
 cost more than real ones.  Why should they?

There's probably a significant price difference between "good" ones
and "good enough to get an under-age drink" ones. The latter are
quite cheap.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Is Mozilla actively working to introduce CAless TLS public key checking?

2012-01-12 Thread Daniel Veditz
On 1/12/12 12:10 AM, Henri Sivonen wrote:
 Is Mozilla actively working on a TLS public key checking system that
 has real trust agility (not DNSsec!) and that doesn't require CAs to
 work (but that can work in parallel with the CA system)?

Not actively, no. It's too early to determine whether that approach is
fruitful enough to build into Firefox. We should, however, make some
changes so it's easier to develop experiments like Convergence.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: RES: Programmatically find the current Firefox profile folder

2011-09-09 Thread Daniel Veditz
On 9/6/11 8:46 AM, Walter do Valle wrote:
 I know that script. However, as I said in my original e-mail, it ran
 into some security issues. It's not possible to call this script without
 requesting some privileges, and that request generates an ugly dialog box
 asking the user for permission. I need something more transparent.
 I assume that if my Java applet is digitally signed and the user agrees to
 run it, it should be able to call any JavaScript without requesting
 additional permission. Is there any other way?

Isn't that effectively the same thing since both ask the user to
grant permission? Is there more difference than the aesthetics of
the permission dialog? Anyway, we've deprecated enablePrivilege and
will remove it in the future so ultimately that's not an option.

Finding the profile directory is a potentially dangerous thing. The
path itself might leak personal information, and not knowing the
profile directory is sometimes the only thing standing between an
attacker and your passwords/bookmarks/cookies, should
someone find a security bug that allows file stealing. Therefore
it's always going to involve some kind of checking with the user.

Java applets granted permissions can do a lot of awesome things on
their own, but if they call back into the page the scripts in that
page are still going to be limited by the permissions of the page.
You could try to find the user's profile yourself in Java by writing
platform-specific file code to emulate the Firefox code, but if the
user has multiple profiles or has picked a non-standard location
you're pretty stuck -- back to asking again.

Your best bet, honestly, is to ask users to install a restart-less
add-on (search for Jetpack or Add-on SDK at
developer.mozilla.org). Installing the add-on is about as much
bother for the user as the Java or JavaScript permission dialogs,
and users already understand the concept of installing and won't get
confused as they might with the unfamiliar permission dialogs. And
they know how to later revoke the permission (uninstall) as opposed
to a remembered choice on a JavaScript/Java prompt should they need
to do that.

Once you have an add-on you have all the privileges you need (so
your code has to be careful, of course!), including easy access to
the user's PKCS11 modules. This will be far easier than trying to
extract the certs from the database from external Java, and more
robust as the certificate store changes file name and/or format in
the future. It will even handle the case where the user's certificates
are on a smartcard, which your Java file-reading approach won't.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: OCSP Tracking

2011-09-08 Thread Daniel Veditz
On 9/6/11 11:07 AM, Devdatta Akhawe wrote:
 Sure. But I think users would be very surprised to find that every
 time they visit a SSL site, some server somewhere is noting down what
 site they visited, and when.

Yes, OCSP supposedly traded off a little privacy for immediacy over
CRLs. Except that many OCSP deployments in practice just respond
based on the data in the CRL.

The response is cached in memory so there's not currently
a Private Browsing mode concern (PB is not "anonymous browsing";
we're just trying not to leave local traces). The OCSP server won't
know about every load, but probably could track your first visit of
the day. In order to improve reliability and privacy, however, we'd
like to implement on-disk caching of OCSP responses which does mean
we'll have to be careful around private browsing mode.

We'd also like to implement OCSP stapling, where the website
you're visiting returns a recent signed OCSP response along with its
certificate. This improves privacy for users, reduces load on CA
infrastructure (the server checks OCSP once every few minutes rather
than every visitor checking), and improves the loading speed of the
website (browsers can just load the page, don't have to wait for a
separate OCSP response). The user's privacy depends on the web server
implementing this, but there are some benefits to the server so we
have hope.

Another thing we'd like to do is prefer fresh local CRL data over
OCSP rather than strictly doing OCSP checks. With a CRL a CA only
knows you're interested in some website they've issued a cert for,
not which one (well, a slimy CA could use a separate CRL per cert
but that would be noticed). The idea would be something like this
(sketched in code below):
 - if we already have the CRL and it's not stale, use it
 - if the CRL is missing or stale:
   + use OCSP
   + also start downloading the CRL (which can be large)
     for next time and for other sites using that CA
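
In code form it might look something like this (a sketch: CachedCRL,
ocsp_check, and fetch_crl_async are hypothetical stand-ins, not NSS
APIs):

    import time
    from dataclasses import dataclass

    @dataclass
    class CachedCRL:
        revoked_serials: set
        fetched_at: float
        max_age: float = 24 * 3600           # assumed freshness window

        def is_stale(self) -> bool:
            return time.time() - self.fetched_at > self.max_age

    def revocation_status(serial, issuer, crl_cache, ocsp_check,
                          fetch_crl_async):
        crl = crl_cache.get(issuer)
        if crl is not None and not crl.is_stale():
            # Fresh local CRL wins: no network request, and the CA
            # learns nothing about which site you're visiting.
            return "revoked" if serial in crl.revoked_serials else "good"
        status = ocsp_check(serial)          # immediate answer for this load
        fetch_crl_async(issuer, crl_cache)   # large download, for next time
        return status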

If we're checking in this manner and have stapling implemented we
probably have enough revocation reliability to be able to HARD FAIL
connections without an explicit revocation response. With that kind
of redundancy if we can't get a response you're probably under attack.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Hulu using ETags to circumvent privacy protection methods

2011-08-26 Thread Daniel Veditz
On 8/17/11 5:21 AM, Jean-Marc Desperrier wrote:
 Also, at the moment it seems that Firefox's private mode doesn't
 properly protect against this.

That's a problem. We've tried hard to prevent disk writes in Private
Browsing mode, but I think the cache got a pass because performance
in Private Browsing mode would be too painful without it. Even
without ETags there's some information that can be gleaned about
site visits using timing attacks.

We definitely don't want to blow away the non-private-mode cache
when we go into and out of private mode (switching would take too
long, and some users would notice the bad performance after
switching back), but we could potentially set up an extra
private-mode cache. That's pretty expensive in disk space but might be
worth it if a lot of people spend time in private mode. Another
alternative would be to use only the memory cache in private mode;
that's a bit quicker to blow away when we switch modes.

Neither suggestion helps the ETags issue for users who stay within
one mode or the other; for now I'm focused only on the narrow issue
of leakage between modes which is a problem even if we solve the
ETags issue.

 I wonder if the end result could be to disable ETags and replace
 Last-Modified with a neutered header, where :
 - the browser formats the strings
 - only recent requests are precise, older one have a much bigger range

If we want to pretend to be an HTTP/1.0 dumb client we could do that,
yeah. Slightly better would be to remain HTTP/1.1-compliant for
same-origin requests and only play such games with 3rd-party
requests. Unfortunately "3rd party" sometimes includes the 1st
party's CDN. On the assumption that CDN content is likely to be
strictly static assets rather than personalized content, dropping
3rd-party ETags might still work out.
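
A sketch of that neutering idea (illustrative Python, not actual
Firefox networking code):

    def sanitize_validators(response_headers: dict, third_party: bool) -> dict:
        # Keep HTTP/1.1 validators for same-origin loads; strip them
        # from 3rd-party responses so an ETag can't serve as a
        # per-user tracking ID.
        if not third_party:
            return response_headers
        return {name: value for name, value in response_headers.items()
                if name.lower() not in ("etag", "last-modified")}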

On the modified dates if you assume clocks are perfectly in sync
(ha!) in theory you could safely send back any time between the
Last-Modified date and when the browser requested the content. In
practice there's clock skew, and some caches might do something dumb
(but fast) like if (browser-date != my-stored-date) send-it.
Either issue leads to a lot of cache misses, bad performance, and on
mobile perhaps huge bills for exceeding data caps.

When the bad guys (or at least slimy ones) use the specs against you
any defense means breaking legitimate uses.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Help about signed javascripts

2011-08-26 Thread Daniel Veditz
The script principal comes from the page origin, so in order to
run privileged script the page itself needs to be in the signed
archive. What you're trying to do simply won't work. It's been this
way for a long time because the type of mixing you are trying to do
led to security holes.

Although code-signing is still in the product it's never been an
advertised Firefox feature (it was inherited from Netscape 6), and
support will be removed in a future version. No date set, maybe next
year.

What you should do instead is create a JetPack that will do
whatever privileged operation you're trying to do. If you're OK with
users having to say OK to a big ugly permission dialog then asking
them to install a restart-less add-on one time shouldn't be a problem.

The obvious temptation is to inject an object or method into the
page, but it would be more secure to listen for events instead. Most
important would be to limit the effects of your add-on only to your
own site(s) to prevent your users from being hacked when visiting
malicious sites.

-Dan Veditz

On 8/18/11 12:53 PM, Luis Fernando Mendoza Cáceres wrote:
  Hello, first, sorry for this email, but I need help. I'm an ASP
 developer and have 2 JavaScript files (utilities.js and getdata.js). I
 need to sign these files so my users don't need to go to about:config
 and set signed.applets.codebase_principal_support to true.
 
 I created a file secure-scripts.jar with signtool commands (-Z), and in my
 site (e.g. First_page.ASP) added the lines:
 
  <script SRC="jar:secure-scripts.jar!/js/utilities.js"></script>
  <script SRC="jar:secure-scripts.jar!//js/getdata.js"></script>
 
 The site is loading the scripts, but displays this error:
 
 Error: A script from "http://mysite" was denied UniversalXPConnect
 privileges
 
 But if in this .jar I create a test.html and in the browser URL write
 jar:http://mysite\secure-scripts.jar!//test.html , it works. But I can't
 do this, because the JavaScript files are loaded on every page of my
 site and I can't package the whole site. Also, if I add an ASP page to
 the .jar file, it doesn't work.
 
 I need this working very soon and can't fix the problem. I hope that you
 can help me, please.
 
 Sorry for this. I don't speak English well; my language is Spanish.
 
 
 Atte. Luis Fernando

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSP and object URLs

2011-07-26 Thread Daniel Veditz
On 7/22/11 7:18 PM, Eli Grey wrote:
 CSP needs a way to support object URLs, of which the scheme is
 implementation specific (e.g. moz-filedata:{GUID} in Firefox,
 blob:{origin}{GUID} in WebKit). How might this be accomplished?

This is a better conversation for public-web-secur...@w3.org where
we're working on standardizing CSP -- added with a CC though this
conversation is likely to fork.

Off the top of my head I think we should treat those as coming from
'self' since the data is ultimately available to the page and under
its control.

If that doesn't work another option is to treat them similarly to
data: urls: block them unless explicitly allowed and let them be
whitelisted by scheme alone.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Question regarding anti-phishing protection in Firefox 3/4

2011-03-30 Thread Daniel Veditz
On 3/24/11 10:04 AM, Florian Coulmier wrote:
 Hum, strange. It is flagged as phishing under both Firefox and Chrome
 for me. I have tried on several computers.
 Could it be a country-based policy?

I don't believe so. At least there's nothing in the request that
signifies the country and Google hasn't told us they use GeoIP for
the responses.

Are you sure it was flagged as phishing and not malware? We (and
Chrome) check against both lists. There are slight wording changes
in the block message to indicate which is which.

Our code looks for the tables goog-phish-shavar and
goog-malware-shavar. You mention googpub-phish-shavar in your
original message. According to this old chromium bug they are
different, but I don't know what the differences are. When we worked
with Google to use the service we just used the ones they told us to
use.

http://code.google.com/p/chromium/issues/detail?id=5597

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: embedding local video in a remote page

2011-03-30 Thread Daniel Veditz
On 3/22/11 7:47 AM, pike wrote:
 In an installation that uses Firefox, I need to embed video files that
 are located on the local hard disk into webpages that are served by a
 remote machine.

Browsers and plugins don't allow web content to reference local
files for security reasons (as you surmised).

What is your scenario? A user has a bunch of locally-saved video and
browses to your web site in order to view them? Seems unusual given
that all platforms ship with capable video viewers these days.

A couple of things about enablePrivilege:

1) combine them -- each call brings up a dialog. There's no reason
you can't request all the privs you need in one call with a
space-separated list.
2) UniversalXPConnect is overkill, a superset of the other privs
you're asking for. It's equivalent to installing software, and if
you're going there then just install software.
3) You should be able to get what you want with a single
UniversalFileRead permission and have the user remember it for
themselves. Don't mess with capability prefs!

All of the above is trumped by us removing support for
enablePrivilege entirely as an unnecessary, unsafe, vendor-specific
feature. Since the code you're writing is already Mozilla-specific
there's no reason not to go ahead and write an add-on and have users
install that. Use the add-on builder SDK and produce restartless
addons and there's really no difference between the user agreeing to
the install and agreeing to a scary enablePrivilege dialog. How
important is it that this happen on your site, as opposed to being a
feature for users? The add-on could detect it's your site and
replace some placeholder content with the video link for you.

https://developer.mozilla.org/en/building_an_extension
https://addons.mozilla.org/en-US/developers/tools/builder

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Security Problem?

2011-02-28 Thread Daniel Veditz
On 2/25/11 8:40 PM, Gus Richter wrote:
 The question I have is this:
 1. This is MY Bookmarks Toolbar and includes links of MY choosing.
 How is it possible that one of my links can be changed by some
 outside source?
 2. If it is as easy as it seems, then could a destructive change
 (read security problem) also be possible?

You don't say where your bookmark linked nor where you ended up, but
a web site you link to can always add a redirect on their end at any
time. The bookmark itself doesn't change, but where you end up does.
The icon shown is refreshed when the bookmark code thinks the page
loaded changed its icon.

-Dan Veditz

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Does the png module contains the fixes of vu643615 and vu576029

2010-11-22 Thread Daniel Veditz
On Nov 23, 2010, at 11:27 AM, Brian Lu wrote:
 The README file under modules/libimg/png says that its version
 is 1.2.35 (contained in Thunderbird 3.1.6), while the bug
 https://bugzilla.mozilla.org/show_bug.cgi?id=492200 says it has
 upgraded to 1.2.37.

The bug _summary_ says "upgrade libpng to 1.2.37" but you have to
read the bug to see what really happened. The trunk (Firefox 4
development version) did indeed get an upgraded libpng, but the
branch simply took the relevant security fix on top of the 1.2.35 it
already had.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSP policy questions

2010-10-07 Thread Daniel Veditz
On 10/7/10 12:45 PM, =JeffH wrote:
 Ok, suppose I have this origin I wish to protect,
 "https://www.example.com"; the origins "http://example.com" and
 "https://example.com" redirect to the former,

I'm not sure of the relevance of the re-directs here. You don't
include plain example.com in any of your policies so attempts to
load content from there will be blocked before it gets a chance to
redirect.

If you do load content from a host which redirects, each host in the
redirect chain must show up in the whitelist.

 X-Content-Security-Policy:       \
   allow 'self'                   \  -- returns HTML
         https://sub1.example.com \  -- JS, CSS
         https://sub2.example.com \  -- IMGs

Since your original host is HTTPS you don't need to specify it on
the above, although the redundancy doesn't hurt.

         foo.otherexample.com      \  -- IMGs
         bar.yetanotherexample.com ;  -- IMGs

These hosts inherit the https:// scheme. Not that I'm encouraging
mixed-mode, but if these are intended to be http: servers you need
to specify it. In practice the scheme issue is more likely to come
up in an http: page that includes https: elements--the https: would
have to be explicit in the policy. Also, https:'self' isn't valid.
If you're loading resources from yourself securely and insecurely
you'll have to spell out the host name at least once.

If you know for a fact that JS will _ONLY_ come from
sub1.example.com then I would strongly urge adding script-src
sub1.example.com; to the policy to protect against HTML injection
of a script tag pointing at one of the other hosts. You have
(presumably) no control over foo.otherexample.com so why risk your
site's security on their ability to secure their site?

In fact, I'd strongly recommend every policy explicitly identify
both script-src and object-src directives (and use 'none' if you
can). Beyond that you can decide if you want to whitelist by type or
be lazy (or positive spin: header-space conserving) and use a
catch-all allow directive.

In the policy above your site might only have HTML, JS, CSS and
IMGs, but by using the allow policy you're allowing injection of
embed, @font-face, video and frames from any of the
specified domains.
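
For example, something like this (illustrative only, in the same
draft header syntax quoted above) closes off those injection vectors:

  X-Content-Security-Policy: allow 'self';   \
      script-src sub1.example.com;           \
      object-src 'none';                     \
      img-src sub2.example.com foo.otherexample.com bar.yetanotherexample.com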

 1. should one consider setting CSP policies for the other
 origins one controls, e.g. sub1.example.com and/or sub2.example.com?

Currently we're only looking at CSP for document loads (including
frames). Content Security Policy is a broad term and we expect
there will be interest in adding other kinds of policy in the future.

 to prevent anything from working if one loads directly from one
 of those origins (just in case), then one could imagine having
 them emit an allow 'none' policy of their own.

That would work if they're loaded as documents. The in-line content
could still be framed unless you also add frame-ancestors 'none';
(or https://www.example.com or whatever is appropriate).

 2. the other origins not under example.com's admin control might
 for whatever reason emit their own CSP policies.

They might, but they won't do anything except when loaded as framed
documents. Apart from CSP those sites can also use X-Frame-Options:
DENY to prevent framing; frame-ancestors is just the same feature
with finer-grained control (we considered extending X-Frame-Options
instead but that breaks interoperability with other browsers). Apart
from framing, sites can inspect Referer: and possibly Origin: to
prevent resource use. If your page is including 3rd-party content
you should have some kind of an understanding with that content's host.

 3. how do any CSP policies emitted by these origins other than
 "https://www.example.com" interact with the latter's policy? The
 CSP draft spec presently explicitly says (emphasis added)...
 
 When multiple instances of the X-Content-Security-Policy HTTP
 header are present in /an HTTP response/...

This doesn't apply to the case you're asking about. The policies on
different resources aren't mixed, but a single resource might have
multiple X-Content-Security-Policy headers. In the bad case maybe one
or more of them is maliciously injected, so we want to tighten
policies rather than loosen. Better that header injection lead to denial
of service than that it let an attacker enable XSS on your site.

Multiple headers might arise legitimately if a web app specified a
restrictive policy for an individual page while the site's server
infrastructure imposed a looser blanket policy on everything loaded
from that domain.

I'm not sure how useful this will be in practice, but it was either
intersect the policies or assume multiple headers are an attack and
switch to "allow 'none';".
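
A hypothetical pair of headers (a loose blanket policy plus a tighter
per-page one) to illustrate the intersection:

  X-Content-Security-Policy: allow 'self' https://cdn.example.com
  X-Content-Security-Policy: allow 'self'; img-src 'self' https://cdn.example.com

A load must satisfy both headers, so the effective policy is 'self'
for everything except images, which may also come from the CDN.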

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Network Solutions serving malware from parked domains

2010-08-16 Thread Daniel Veditz
Follow-ups set to mozilla.dev.security

On 8/16/10 4:52 PM, Gen Kanai wrote:
> This is pretty involved so worth reading in detail.
>
> http://blog.armorize.com/2010/08/more-than-50-network-solutions.html
>
> Personally, I hate domain parking and I think ICANN should outlaw it,
> but that's a separate issue.
 

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSP - Cookie leakage via report-uri

2010-06-12 Thread Daniel Veditz
On 6/11/10 1:31 PM, Sid Stamm wrote:
> On 6/11/10 1:19 PM, Devdatta wrote:
>> Sid: do you have feedback from developers that this data is
>> absolutely necessary for debugging?
>
> Potentially the data stored in the cookies could control what resources
> are requested by a page, or an attack could otherwise depend on the
> contents of the cookies. The cookie data are useful for debugging the
> same way the URL's querystring is useful.

If the ReportURI is same-origin with the page then we will be
sending all the cookies with the report POST; if it's on the same
eTLD+1 domain we will at least be sending the same domain cookies,
which could well be good enough. If the site using CSP needs to
preserve cookies to figure things out, why not leave it up to them to
do so rather than risk misdirecting Auth data?

Sure, it's less convenient-- security always is.

> In absence of a good solution, it's probably a
> good idea to avoid sending auth/sensitive headers via the report URI.

I say let's whitelist the headers we do think are useful and drop
everything else. That way we won't get blindsided when HTML 5 or
whoever adds new headers that leak sensitive information. We save a
lot of useless bloat in the reports, too, which admins might thank
us for.

As a start, what if we only reported Host, Referer, and Origin?
And maybe the User-Agent in case various CSP client implementations
interpret things differently, although the User-Agent is available
from the report POST itself as well.
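
A trimmed-down report along those lines might look something like
this (a sketch only, not the schema as specified; the URIs are made
up):

  <csp-report>
    <request>GET /page.html HTTP/1.1</request>
    <request-headers>
      Host: www.example.com
      Referer: https://www.example.com/index.html
    </request-headers>
    <blocked-uri>http://evil.example.net/x.js</blocked-uri>
  </csp-report>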

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


test -- why so quiet

2010-05-16 Thread Daniel Veditz
This group, admittedly low traffic, seems suspiciously quiet. Are
posts working?
-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: test -- why so quiet

2010-05-16 Thread Daniel Veditz
On 5/16/10 2:21 PM, Michael Kohler wrote:
> On 05/16/2010 11:19 PM, Daniel Veditz wrote:
>> This group, admittedly low traffic, seems suspiciously quiet. Are
>> posts working?
>> -Dan
>
> Works fine.

Thanks. I hadn't seen anything since the HITB spam on April 22.

So as to make this thread not 100% on the noise side of the
signal-to-noise ratio I'd like to point out a Firefox development
that I don't believe I saw mentioned here:

Firefox trunk nightlies have a fix for the CSS history privacy
issue, which allows sites to silently data-mine visitors for every
site--or even specific page--they've gone to. The problem is
inherent to the CSS spec and affects all browsers (that support
CSS). We hope our solution proves viable, although a few sites do
use the feature legitimately and will therefore degrade.

http://blog.mozilla.com/security/2010/03/31/plugging-the-css-history-leak/
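
For background, the leak being plugged is roughly this (a minimal
sketch of the classic technique; the probe URL is made up):

  <style> a:visited { color: rgb(255, 0, 0); } </style>
  <a id="probe" href="https://some-bank.example/">x</a>
  <script>
    // If the computed color matches the :visited rule, the
    // URL is in the visitor's history.
    var color = getComputedStyle(
        document.getElementById("probe"), "").color;
    var visited = (color == "rgb(255, 0, 0)");
  </script>

Repeat with a long list of URLs and you have a history miner.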

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSP : What does allow * mean?

2010-03-18 Thread Daniel Veditz
On 3/13/10 6:13 AM, Nick Kralevich wrote:
> On Fri, Mar 12, 2010 at 5:24 PM, Brandon Sterne
> bste...@mozilla.com wrote:
>>> 2) How does one specify a wildcard for any protocol?
>>
>> I don't think we should allow that.  Do you have a reason to
>> believe we should?
>
> IMHO, any policy language needs to cover the entire range of
> policies, from completely *permissive* to completely
> *preventative*.

Content-Security-Policy is an attempt to develop a declarative
security stance. Blacklisting fails; you will never know all
potential sources of evil attacks. You hopefully _do_ know the
source of everything you intend to show up on your site (you might
not, given advertising networks, but then again ads are sometimes a
source of evil attacks). If you can declare those sources and the
client can block everything else then we think we've made your site
less susceptible to attack.

In that view wildcards are a botch from the beginning, but like
allowing inline-scripts (another self-defeating feature) we think
it's necessary as a transition device.

When it comes to protocols, though, we intend to be strict. There
should be no way ever to whitelist '*' protocols. Who knows what
someone will invent in the future and how abusable it might be? If
you use it you can list it, otherwise the web is http/https only
in our book.

> It seems like the only way to write a completely permissive
> policy is to explicitly list out all possible schemes, which is
> awkward (IMHO).

CSP isn't designed for a permissive policy. If you add a CSP header
you are explicitly opting into a non-permissive regime. Writing a
CSP policy is only awkward if you don't know what you're hosting on
your pages, and if you don't know you can't really secure it anyway
so why bother with CSP? The most awkward thing about CSP will be
eliminating inline-scripts, not writing the policy.
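
To make the inline-script point concrete, a sketch of the kind of
rewrite CSP demands (file names illustrative):

  <!-- before: inline, blocked under CSP -->
  <button onclick="save()">Save</button>

  <!-- after: external file from a whitelisted host -->
  <button id="save-button">Save</button>
  <script src="/js/save.js"></script>

where save.js wires up the handler itself:

  function save() { /* ... */ }
  document.getElementById("save-button")
          .addEventListener("click", save, false);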

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Allow CSP on HTML meta tags

2010-03-12 Thread Daniel Veditz
On 2/28/10 6:43 PM, Axel Dahmen wrote:
> Actually I still can't find a fair reason for omitting the option of
> allowing HTML meta tags to provide CSP directives.
>
> * By means of the intersection algorithm, a meta CSP directive can
> only tighten security but not loosen.
>
> * Disallowing meta tags would cause a significant number of private
> websites to not being able to use this security feature. Does someone
> really want to exclude all these users from the spec? Just because it
> would cause more effort implementing it? What's more important?

If we knew that there really were all these users clamoring to use
CSP it might be worth working through the complexities, but until we
get a working version out there we won't really know what works and
what doesn't in the real world. It is far, far easier to add meta
support later if we need it than to remove a feature if we decide
it's not working out.

Not too worried about injected <meta> tags, we just have to make
sure they can only restrict the page further (which we already have to
do to support multiple HTTP headers).

How do we handle a <meta> tag that comes after some content which a
policy should have regulated? If we decide to only honor <meta> tags
that come first, then injecting such a header can disable CSP. If
we enforce CSP from that point on, there's still page content that
avoided the policy. We could re-parse the entire page and enforce
things the second time around, but the injection may have been able
to do its damage already.
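
To make that concrete (hypothetical syntax, since meta support is
exactly what's being debated): an attacker who can inject markup at
the top of the page could plant

  <meta http-equiv="X-Content-Security-Policy" content="allow 'none'">

ahead of the site's own tag. Intersection keeps that from loosening
anything, but it can still break the page outright, and no policy can
retroactively cover markup that already parsed.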

This is not an academic question; I've seen a lot of pages with
malware content injected above the normal page content. Is best-effort
CSP enforcement good enough? Would we be fostering a false
sense of security by supporting meta?

Implementation effort isn't why we cut it. The policy is designed to
protect the integrity of the content and it's much easier to reason
about its security properties and effectiveness when it's delivered
external to that content.

If CSP turns out to be an effective and accepted solution (no inline
scripts is pretty radical) and there's a need for meta support we
can add that during the standardization process. At the moment it's
hard to imagine who would benefit from it, though. Yes, I know there
are a lot of people who can't change their headers, but do those
people run web applications that could suffer from XSS and other
attacks CSP addresses?

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: MFSA 2010-03 exploitable with disabled Javascript?

2010-02-23 Thread Daniel Veditz
On 2/21/10 10:45 PM, Manuel Reimer wrote:
> my distributor, so far, didn't publish an updated package, so I'll have
> to keep with an old Firefox for some days.
>
> For all of the current holes, disabling Javascript seems to be OK for
> the meantime, according to your advisories, so I did so.
>
> Does this also work for the following hole:
> http://www.mozilla.org/security/announce/2010/mfsa2010-03.html

Disabling JavaScript should protect you for the time being. It
doesn't actually prevent the underlying parser flaw but people would
have to invent new techniques for manipulating memory without scripts.

Plugins essentially count as scripts for this purpose though, so you
need to turn those off too. Or use FlashBlock and only play videos
you're confident are OK--let other people go first :-)

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Does it ever make sense that a web page can have chrome privs?

2010-02-23 Thread Daniel Veditz
On 2/23/10 6:15 PM, Boris Zbarsky wrote:
> On 2/23/10 8:14 PM, Natch wrote:
>> I was thinking (in bug 491243) that channels shouldn't inherit chrome
>> privileges ever unless they are data, javascript or chrome channels
>> (or that sort).
>
> That's already the case.

The documents can end up privileged if an author does the wrong thing:
https://bugzilla.mozilla.org/show_bug.cgi?id=476464

>> For example, it is possible for any web site to run in an elevated
>> context (and do practically anything to the user's computer) if you
>> type the following in the error console command-line:
>>
>> window.openDialog("http://www.google.com");
>
> This doesn't run google in an elevated context.

Hard to tell on Google, but easy to confirm the lack of privs with
something like
  openDialog("https://www.squarefree.com/shell/shell.html")
then try to look at Components.stack or something privileged.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Fix for the TLS renegotiation bug

2010-02-14 Thread Daniel Veditz
I'm surprised not to see it mentioned here yet, but Firefox
nightlies implement the new TLS spec to prevent the renegotiation
flaw. The fixes in NSS can also be used to build your own patched
version of mod_nss for Apache.

Huge thanks to Nelson Bolyard for implementing the spec in NSS and
Kai Engert for the client (Firefox) integration piece.

To solve the problem for real in the long run both servers and
clients need to be patched, and patched clients and servers must not
talk to unpatched servers and clients. In the short run that's
unrealistic so the Firefox settings are currently extremely
permissive, but paranoid users who only need to talk to a couple of
servers that they know are patched could make it strict if they like.

Test server at https://ssltls.de

Firefox nightlies have been patched since Feb 8 or 9
https://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/

Kai's write-up on the various client options
https://wiki.mozilla.org/Security:Renegotiation

Official RFC (released Friday)
http://tools.ietf.org/html/rfc5746

Currently the only change in Firefox behavior is that it will not
RE-negotiate with an unpatched server--but it will complete an
initial handshake so it's still vulnerable to the flaw. This will
break client-auth in most cases so there's a global pref that allows
unsafe renegotiation, and another pref so you could whitelist a
server or two you need to do client-auth with.

Firefox will also spit out messages to the error console for each
unpatched server it encounters. Another pref will show broken-ssl
indicators for such servers and yet another will refuse connections
to unpatched servers if you really want to get hardcore (and not use
SSL at all for a while).
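
For reference, the pref names on current trunk (per Kai's write-up
above; double-check there, as they may change before any release):

  security.ssl.require_safe_negotiation
      true = refuse connections to unpatched servers (hardcore)
  security.ssl.treat_unsafe_negotiation_as_broken
      true = show broken-ssl indicators for unpatched servers
  security.ssl.allow_unrestricted_renego_everywhere__temporarily_available_pref
      true = allow unsafe renegotiation everywhere (legacy client-auth)
  security.ssl.renego_unrestricted_hosts
      whitelist of hosts allowed to renegotiate unsafely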

These are _test_ builds and don't necessarily reflect how we'll ship
a future Firefox update. For updates on the stable branches we'll
probably have to allow unsafe renegotiation for a while; it's not a
good strategy to ship a security update and force people to choose
between security and connecting with their bank/gov't/work. Or we
might have to do some UI work so affected users can tweak this
without having to wade into about:config.

Although currently not an option, one approach might be to downgrade
EV sites to normal SSL if they aren't patched and at least put
pressure on the sites that think they have something to protect.
Later this year we can start showing broken SSL indicators for
unpatched servers, when at least some servers are patched.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Firefox Add-ons

2010-02-07 Thread Daniel Veditz
On 2/6/10 8:08 AM, David E. Ross wrote:
> Add-ons there go through some degree of review before being available to
> the public; before such reviews are concluded, add-ons require a user to
> logon to his or her own account and receive a warning that the review is
> still underway.

Unfortunately that's no longer true, the login requirement was
deemed too burdensome. Now the user just has to check a box "Let me
install this experimental addon." A mere speed bump to pwnage.

I, too, hated the login requirement -- not because it was too hard
but because it was too easy. We're dangling forbidden fruit in front
of unsuspecting people (this thing might fit your needs, but you
shouldn't install it). The unreviewed addons should go on a
completely separate site and not show up in AMO search results, just
as Firefox experimental nightly builds aren't available from the
product pages on mozilla.com.

The checkbox idea is even worse -- everything on the page exudes
"You're on the trusted Mozilla site, they wouldn't let anything bad
happen to you, would they?" An analogy I've used before: if you went
to your favorite bakery and they were offering "experimental
muffins" you might expect them to taste bad. You would not expect
them to be laced with heroin because the shop is giving shelf space
to anything dropped off at the back door by who knows who.
"Experimental" does not cover it.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Paper: Weaning the Web off of Session Cookies

2010-01-27 Thread Daniel Veditz
On 1/27/10 12:20 PM, Timothy D. Morgan wrote:
> Cool, there are some great UI ideas there.  I particularly like the
> examples that eliminate favicons. ;-)
>
> I would think that moving toward HTTP authentication schemes, such as
> digest, would make it much easier to automate a good identity manager.
> Would you agree?

We can't control what web sites do, but if we make the experience nicer
more sites may be encouraged to use things like HTTP Auth. Personally
I'd like to see client certs used for auth but we really have a lot of
work to do to make that a pleasant experience for anyone.

> Another thought I had on performing logouts, which is not presented in
> the paper, is that if the XMLHttpRequest W3C standard is finalized and
> fully adopted by browsers as is, then one might be able to use
> JavaScript to clear credentials

As someone who regularly disables JavaScript I'd hate to see client auth
require it.

>> You must be the Tim who started the "Past proposals for HTTP Auth
>> Logout" thread and if so you're already involved in the right place for
>> that.
>
> Heh, you did your homework.  Yes, I did start that thread.

No creepy stalking involved, honest :-) I remembered the topic came up
on the httpbis mailing list recently so I went to see if they had
reached any kind of consensus in the group.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: A new false issued certificate by Comdo?

2009-11-07 Thread Daniel Veditz
On 11/5/09 10:37 AM, Kyle Hamilton wrote:
> then why not create an internal build of Firefox, embed your own root
> into it, and issue certificates from that root to the boxes that need
> it?

You don't need a special build, of course. Anyone can easily add a new
root into modern desktop browsers. It may be less easy with other
internet devices (phones, for example).
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: A new false issued certificate by Comdo?

2009-11-07 Thread Daniel Veditz
On 11/5/09 5:16 AM, Paul van Brouwershaven wrote:
> What do you think of this certificate with the CN
> "owa.b3cables.co.uk\ ", again issued by Comodo.
>
> Serial: D2D0DAD5A1C3E785844AA3C72CA2B191
>
> Not in CRL number 2361 Last Update Nov  5 12:35:19 2009 GMT

CA's can prune expired certs from their CRLs to keep the size down. The
fact that it's not in a recent CRL doesn't tell whether it was or wasn't
in the past.

The extra space renders that domain unusable (in Firefox, at least) so
it's a bit of a self-limiting problem, but I wouldn't crucify a CA for a
trailing space issue. It's a bug they should fix, but let's stick to the
more substantive examples you raised.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-27 Thread Daniel Veditz
On 10/27/09 2:33 AM, Adam Barth wrote:
> I understand the seductive power of secure-by-default here.

If only she loved me back.

> This statement basically forecloses further discussion because it does
> not advance a technical argument that I can respond to.  In this
> forum, you are the king and I am but a guest.

I don't think we're having a technical argument, and we're not getting
the feedback we need to break the impasse in this limited forum. Either
syntax can be made to express the same set of current restrictions.
You're arguing for extensible syntax, and I'm arguing for what will best
encourage the most web authors to do the right thing.

An argument about whether your syntax is or is not more extensible can
at least be made on technical merits, but what I really want is feedback
from potential web app authors about which approach is more intuitive
and useful to them. Those folks aren't here, and I don't know how to
reach them.

At a technical level your approach appears to be a blacklist. If I'm
understanding you correctly, if there's an empty CSP header then there's
no restriction whatsoever on the page. In our version it'd be a
locked-down page with a default inability to load source from anywhere.
If the web author has left something out they will know because the page
will not work. I'd rather have that than a web author thinking they're
safe when CSP isn't actually turned on for their page.

The bottom line, though, is I'm in favor of anything that gets more web
sites and more browsers to support the concept.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-26 Thread Daniel Veditz
On 10/22/09 6:09 PM, Adam Barth wrote:
> I agree, but if you think sites should be explicit, doesn't that mean
> they should explicitly opt-in to changing the normal (i.e., non-CSP)
> behavior?

They have already opted in by adding the CSP header. Once they've
opted-in to our web-as-we-wish-it-were they have to opt-out of the
restrictions that are too onerous for their site.

> It seems very reasonable to mitigate history stealing and ClickJacking
> without using CSP to mitigate XSS.

It seems reasonable to mitigate both of those without using CSP at all.
History stealing is going to come from attacker.com where they aren't
going to add headers anyway. The proposed CSP frame-ancestors could just
as easily go into an extended X-Frame-Options (and be a better fit). And
it's really only a partial clickjacking defense anyway so maybe that
aspect should go into whatever defense feature prevents the rest of
clickjacking. NoScript's ClearClick seems to do a pretty good job
(after a rough start) and gets to the heart of the issue without
requiring site changes.
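
For comparison, the two headers side by side (frame-ancestors syntax
per the current draft):

  X-Frame-Options: DENY
  X-Content-Security-Policy: frame-ancestors 'none'

or, with the finer grain CSP allows:

  X-Content-Security-Policy: frame-ancestors https://www.example.com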

> I think we're all agreed on this point.  Our current disagreements
> appear to be:
>
> 1) Whether frame-src should be in the resources module or in the same
> module as frame-ancestor.
> 2) Whether sites should have to opt-in or opt-out to disabling inline
> script and/or eval-like APIs.

I don't think this is the right venue for deciding the latter, the
audience here just doesn't have enough of the right people. We feel
extraordinarily strongly that sites should have to explicitly say they
want to run inline-script, like signing a waiver that you're going
against medical advice. The only thing that is likely to deter us is
releasing a test implementation and then crashing and burning while
trying to implement a reasonable test site like AMO or MDC or the
experiences of other web developers doing the same.

I feel a lot less strongly about the default for the eval stuff, but
there's value in consistency of having site authors loosen restrictions
rather than have some tighten and some loosen.

-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CSRF Module (was Re: Comments on the Content Security Policy specification)

2009-10-22 Thread Daniel Veditz

On 10/22/09 10:31 AM, Mike Ter Louw wrote:

> Any ideas for how best to address the redirect problem?


In the existing parts of CSP the restrictions apply to redirects. That 
is, if you only allow images from foo.com and then try to load an image 
from a redirector on foo.com, it will fail if the redirection is to some 
other site. (This has turned out to be an annoying part of CSP to 
implement, as redirects happen deep in the network library, far from the 
places that have the context to enforce this rule.)


Likewise your anti-csrf rules should propagate through redirects for 
consistency.
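
Concretely (hypothetical URLs): with "img-src foo.com", a page that loads

  https://foo.com/redirect?to=https://evil.example.net/x.png

gets as far as foo.com, but the load is cancelled when the 302 points 
outside the whitelist. An anti-CSRF rule should apply the same test on 
every hop.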

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Required CSP modules (was Re: CSRF Module)

2009-10-22 Thread Daniel Veditz

On 10/22/09 3:58 PM, Lucas Adamski wrote:

> CSS is content importing.. oh but IE allows CSS expressions so it's an
> XSS vector too.


IE8 has killed expressions off, our CSP spec says -moz-binding has to 
come from chrome: or resource: (that is, be built in). 
https://wiki.mozilla.org/Security/CSP/Spec#XBL_bindings_must_come_from_chrome:_or_resource:_URIs


That's a pretty vendor-specific thing to put in CSP; I think we just 
want to kill or restrict -moz-binding in the product in general (as IE 
has done) and not worry about it in CSP. Either way, though, we can 
treat CSS purely as a data-loading concern. The implication of that is 
that we don't need to worry about inline-style, which has been raised 
recently.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: security.OCSP.require in Firefox

2009-10-13 Thread Daniel Veditz

On 10/13/09 9:23 AM, Johnathan Nightingale wrote:
> The temptation to attach UI to this problem sets off
> "blame the user" alarms for me - do we think that users will make better
> decisions with this information? Like I say, I don't think we're at
> WONTFIX on this question, but I don't think it's an easy problem to
> solve correctly, either.


For the user this is merely informational (and therefore fully open to 
the charge of clutter). The idea is to enlist site authors/owners in 
pressuring CA's to step up their support so the uglybar on their site 
goes away. As opposed to completely blocking the site at which point the 
pressure is on _us_ for the CA's failure.


-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: security.OCSP.require in Firefox

2009-10-13 Thread Daniel Veditz

On 10/13/09 10:12 AM, Eddy Nigg wrote:
> #B is important because we are already months after the alleged bug
> happened, plenty of time to get the act together. I think this warrants
> some actions; a review and renewed confirmation of compliance might be a
> good thing to do in this case.


These certs were revoked within days of the BlackHat talk. The leaked 
cert is an old cert, we are not talking about a CA clueless for the past 
ten weeks. IPSCA mailed us on Aug 3 that they had identified and revoked 
nine bogus certs and had stopped issuing any certs until they fixed 
their process to detect these attempts. From the domains involved we 
pretty much know who bought the certs, Moxie of course, and two other 
speakers we know about on the hacker-conference speaking circuit.


What we didn't know is that any of those three were irresponsibly 
handing out the private keys to the certs.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: security.OCSP.require in Firefox

2009-10-13 Thread Daniel Veditz

On 10/13/09 5:14 PM, Lucas Adamski wrote:
> For the small percentage of (power) users that can really understand the
> implications of a question like "the OCSP URL provided by this
> certificate does not appear to be valid at this moment, would you like
> to continue",


Just to be clear, at no point did I propose a modal click-through type 
UI. Just a visible UI notification like an info bar, door hanger 
notification, a lock with a question-mark over it... something visible 
but not interfering. I was envisioning an info bar because that leaves 
us space to put some explanatory text in it, but I didn't want to get 
specific.


Like the slashed-lock that indicates mixed mode (and is far too subtle 
for my tastes) I expect a lot of users wouldn't even notice, but hope 
that enough will to nudge things in the right direction.


In the longer run we can get some decent non-EV CA guidelines and then 
switch on the hard-fail. Given Johnath's lack of love for the suggestion 
the longer run might be the short path.


-Dan
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: security.OCSP.require in Firefox

2009-10-12 Thread Daniel Veditz

On 10/10/09 7:47 AM, Alexander Konovalenko wrote:
> Why is the security.OCSP.require option set to false by default?


Currently there is no requirement that CA's support OCSP for non-EV
certificates, so some CA's don't. It would be nice if they then didn't
put OCSP URLs into their certs, but some do anyway (aspirational OCSP?).
The end result was that in our testing too many sites were unreachable
with this setting set to true. However the site owners and our users
complained that since it worked in IE the blame must lie with Firefox.

It's getting much better since most of the largest CA's now offer EV
certs and have beefed up their infrastructure. We are working with the
CA/Browser forum to make OCSP support a requirement for non-EV certs.
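
(Users who want hard-fail today can already flip the pref themselves
in about:config:

  security.OCSP.require = true

with the caveat that some otherwise-fine sites will become unreachable,
per the above.)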


> Obtaining a rogue certificate for an existing website turns out to be
> surprisingly easy due to poor verification procedures of some CAs.


The surprise is that it is occasionally possible. I wouldn't 
characterize it as "easy" or it wouldn't be such a big deal each time 
someone finds a way to do it. If you know of a bad CA please let us know 
so we can investigate and remove them from the product if necessary.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Content Security Policy updates

2009-07-23 Thread Daniel Veditz
Sid has updated the Content Security Policy spec to address some of the
issues discussed here. https://wiki.mozilla.org/Security/CSP/Spec

You can see the issues we've been tracking and the resolutions at the
Talk page: https://wiki.mozilla.org/Talk:Security/CSP/Spec

There are still a few open issues.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Comments on the Content Security Policy specification

2009-07-17 Thread Daniel Veditz
Jean-Marc Desperrier wrote:
> In fact a solution could be that every time the browser rejects
> downloading a resource due to CSP rules, it spits out a warning on the
> javascript console together with the minimal CSP authorization that
> would be required to obtain that resource.
> This could help authors to write the right declarations without
> understanding much of CSP.

Announcing rejected resources is an important part of the plan. The spec
has a reportURI for just this reason, and the Mozilla implementation
will also echo errors to the Error Console.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: CRMF encoding issues with window.crypto.generatedCRMFRequest()

2009-07-17 Thread Daniel Veditz
Moving discussion to mozilla.dev.tech.crypto, but do go ahead and file
bugs. I doubt 3.5 behaves any differently than 3.0 (you did mean 3.0.10,
right? If you're using Firefox 2 please stop).

nk wrote:
> Hi all,
> I am researching the window.crypto.generatedCRMFRequest() function
> available on FireFox (I am using FF 2.0.10).
> Now, if requested keys are for signing - everything looks good.
> But if requested keys are for key exchange (e.g. rsa-ex), the
> generated CRMF request structure has a number of issues.
>
> Here are the issues I am facing:
> 1) A PKIArchiveOptions control is included (http://www.ietf.org/rfc/
> rfc4211.txt, section 6.4). The EncryptedKey structure in it is encoded
> as a SEQUENCE while it actually is a CHOICE. Our CRMF decoder is
> throwing as soon as it sees this structure. Shall I raise a bug?
> 2) The EncryptedKey is encoded as the now deprecated EncryptedValue
> structure. Is there a plan to encode the value with EnvelopedData
> structure any time soon?
> 3) Finally, the ProofOfPossession structure looks broken in this
> scenario as what we see is: A2 05 80 03 00 03 00, which does not seem
> to relate to any of the permitted options described in RFC 4211,
> section 4. FYI: If CRMF request contains cert request for a signing
> key pair the ProofOfPossession is valid (a correct instance of
> POPOSigningKey) and is correctly verified by our decoder.
>
> Does anyone know if these issues have been addressed in FF 3.5 and if
> not, will they be addressed in the next releases of FF?
>
> Many thanks in advance,
> Nikolai Koustov.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Content Security Policy Spec questions and feedback

2009-07-08 Thread Daniel Veditz
EricLaw wrote:
> ---
> Versioning
> ---
> User-Agent header
> What’s the use-case for adding a new token to the user-agent header?
> It’s already getting pretty bloated (at least in IE) and it’s hard to
> imagine what a server would do differently when getting this token.

The UA approach may be a botch, but it was an attempt at something like
a less-verbose Accept-type header (six bytes in the UA, many more as a
separate header which would have to be sent with every request, with no
servers today actually understanding anything about CSP). Should the
policy syntax ever change a server could theoretically send different
syntax to a CSP/1 browser and a CSP/2 browser.

The other approach is to version the response, a few extra bytes only
when a server supports CSP. Yay, bandwidth win! But then what do we do?
How does the server know which version to send? Should it send every
version it knows about, and the client process the highest version it
knows how to process? That means if we ever have a CSP-2 either clients
are sending two complete headers (or three, or more) or they're sending
their preferred version and users of clients which only support CSP-1
get zero protection rather than the 99% they actually support.

In the case of brand-new directives older clients can simply ignore
unknowns and that will work OK in many cases. Either loads of that type
aren't supported at all (e.g. downloadable fonts, maybe?) or they can
reasonably fall-back to the default allow directive. That might leave
users of older clients vulnerable for that type (or only partially
protected), but no worse than users of browser that don't support CSP at
all.

What if we change the rules? Suppose we add a "head" keyword to the
script-src directive. Older clients will think that's a host named
"head" and strip all the in-line head scripts the site relies on. In
that case a versioned response actually works better for the site. The
older clients get zero protection, much less than they are capable of
providing (but the site still has to work to protect legacy browsers
with no CSP at all), and at least the content isn't broken.

> frame-ancestors
> What exactly an “ancestor” is should probably be defined here.

Would "frame-parents" make any more sense? It ties in to the window.parent
property rather than introducing a new name for the concept.

> The “how many labels can * represent” problem has come up in a number
> of contexts, including Access-Control and HTTPS certificate
> validation.  In the latter case, * is defined in RFC2818 as one DNS
> label, but Firefox does not currently follow that RFC.

Firefox 3.5 does, actually. The regexp syntax followed in older versions
of Firefox was inherited from Netscape and predated the RFC by years. A
small but vocal minority took advantage of the feature for internal
servers, but given the lack of support in other browsers it was well
past time to let it go.

> • The spec might want to note that using wildcards does not permit
> access to banned ports
> http://www.mozilla.org/projects/netlib/PortBanning.html.

Maybe an implementation note saying nothing in CSP prevents a user agent
from blocking loads for other reasons. AdBlock will block additional
loads, NoScript will block scripts, LocalRodeo will block access to
RFC1918 addresses, etc. The Content Security Policy allows a site to
define _additional_ restrictions it would like the client to impose for
that content, but in no way is intended to loosen restrictions already
imposed by the client for its own reasons.

> • Scheme wildcarding is somewhat dangerous.  Should the spec define
> what happens for use of schemes that do not define origin
> information?  (javascript: and data: are listed, but there are
> others).

I am personally 100% against scheme wildcarding. There are so few
schemes a site could reasonably want to allow that it shouldn't be hard
to list them.

> X-Content-Security-Policy: allow https://self
>
> Doesn’t make sense to me, because “self” is defined to include the
> scheme.  This suggests that we need a selfhost directive, which
> includes the hostname only.

Doesn't make sense to me either. "self" should be a keyword. If you want
to stick schemes and ports on there then you should have to explicitly
state your FQDN.
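
That is, instead of something like https://self, spell it out (a
sketch, host name illustrative):

  X-Content-Security-Policy: allow https://www.example.com:443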


> Violation Report: Headers
> This seems like a potentially significant source of security risk and
> complexity.  For instance, the client must ensure that it won’t leak
> Proxy-Authorization headers, etc.

Maybe we should explicitly define which headers we will send. Do the
Accept headers really help, for instance?

We definitely want the method and the path, Host, Referer, Origin (when
we have that), Cookie (and Cookie2 for UA's that support that). Anything
else?

The user-agent might be marginally useful for diagnostic purposes should
different clients start reporting different errors, but could probably
be gotten from the POST itself and not need to be repeated in the report
body. I don't care either way, 

Re: Content Security Policy Spec questions and feedback

2009-07-07 Thread Daniel Veditz
Sid Stamm wrote:
>> Also, the “blocked-headers” is defined as required, but not all
>> schemes (specifically, FTP and FILE) use headers.
> Removed the requirement to send request-headers from the XML schema
> (implied optional).

Just jumping off here on a related topic: What do we send as the
blocked-uri when we find inline script? Since this is perhaps the most
common injection type this would be a good one for an example.

I suppose we could leave blocked-uri empty and let people infer that it
was inline script from the violated directive. I'd rather be explicit
about it, but then blocked-uri might be the wrong name. Or do we leave
the blocked-uri empty (absent, or present-but-empty?) and use a keyword
like <violated-directive>inline script</violated-directive>?

For clarification, if the entire policy was "allow self othersite.com"
and we tried to load an image in violation of that policy, would the
violated-directive be the implied img-src or the allow fall-back that is
actually specified? I imagine it would be the allow directive.
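
To make the alternatives concrete (illustrative fragments, not spec
text):

  <!-- leave blocked-uri empty and infer from the directive -->
  <blocked-uri/>
  <violated-directive>script-src 'self'</violated-directive>

  <!-- or be explicit, with a keyword in place of a URI -->
  <blocked-uri>inline script</blocked-uri>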

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Content Security Policy - final call for comments

2009-04-07 Thread Daniel Veditz
Gervase Markham wrote:
> - but a declared (unexpanded) policy always has the allow directive.
> I think you need to make it more clear that allow is mandatory. But
> what was the logic behind making it so? Why not assume allow *, which
> is what browsers do in the absence of CSP anyway?

"allow" is not mandatory, but if missing it's assumed to be "allow
none". If you explicitly specify the whitelisted hosts for each type of
load you might not need or want a global fallback, which could only be
used to sneak through types you hadn't thought about. Future browser
features, for instance.

Maybe this does point out the need for some kind of version number in
the header, so future browsers can take appropriate action when
encountering an old header. For example, assuming none for any newly
added types.

> - policy-uri documents must be served with the MIME type
> text/content-security-policy to be valid. This probably needs an x-
> until we've registered it, which we should do before deployment. It's
> not a complex process, I hear.

Until we get CSP onto a standards track they'd probably want us to use a
text/vnd.mozilla.something, and since we'd like other browsers to
support this I vote we go for the x- for now.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Add own algorithm to NSS

2008-09-03 Thread Daniel Veditz
??? wrote:
> I want to add my own cipher algorithm to the NSS library, like the gost
> engine in openssl. Is it possible?
> If yes, can anyone explain the procedure?

See the patch that added Camellia, and the one trying to add SEED, to NSS:
https://bugzilla.mozilla.org/show_bug.cgi?id=361025
https://bugzilla.mozilla.org/show_bug.cgi?id=453234

Something like that.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Bypassing or disabling firefox security to allow 'alwaysLowered' window

2008-09-03 Thread Daniel Veditz
rael wrote:
> On Sep 1, 11:48 am, Daniel Veditz [EMAIL PROTECTED] wrote:
>> It looks like there's an extra check, in addition to having the right
>> privilege (UniversalBrowserWrite) the window in question must have a
>> chrome parent:
>>
>> http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/embedding/compone...
>
> Ugh.  It would have been nice to have this documented when I started
> on this ... oh well.  So, how do I insure a window that I'm opening has
> a chrome parent?

By running chrome code, but you then say you don't want to do that:

> Thanks for your other suggestions, though if I cannot get
> alwaysLowered to work, even in a signed script, I'll have to give up
> on this.

Why is signed script with pre-installed prefs OK but a preinstalled
component not? Your signed script is already asking for permissions
equal to a preinstalled component.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Undesired XMLHttpRequest 401 behaviour

2008-09-01 Thread Daniel Veditz
Paul Cohen wrote:
> 1. Client -> Server: HTTP GET http://foo:[EMAIL PROTECTED]/new-
>
> Another question is: Why is the username and password sent in the URL?

Is it? Not what I see when I run a packet sniffer. If the UN/PW is being
sent in the GET message I'd expect the server to return a 404 -- they
don't know what to do with it either.
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Bypassing or disabling firefox security to allow 'alwaysLowered' window

2008-09-01 Thread Daniel Veditz
Bill Lear wrote:
> Thanks for the note.  I realize that you can do this, but in our
> application, a kiosk, there will be no interaction with the user to
> accept these, as they will already have been accepted at the time of
> installation.
>
> In any case, it appears there is no way to get around firefox
> security, which is too bad.

It looks like there's an extra check: in addition to having the right
privilege (UniversalBrowserWrite) the window in question must have a
chrome parent:

http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/embedding/components/windowwatcher/src/nsWindowWatcher.cpp&rev=1.143&mark=1536-1538,1541#1530

I missed the kiosk part the first time. If this is something installed
on the target machine then don't mess around with signing, just go ahead
and install privileged code. For that you don't need security help, just
development help from, for instance, the addon developer community.

One option, if content has to do this (safe non-web stuff, right?),
might be to implement a global object with a function call to do this.
For example you can look in the Firefox components directory and see how
the sidebar object was implemented in nsSidebar.js, but there are other
ways to do it.

Just remember to be careful handling URIs, especially javascript: and
data: uris. Best to whitelist only the schemes you expect (file: ?) and
if there's any chance of this application browsing to hostile content
you should whitelist the source domains you'll accept the command from.
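
A sketch of that kind of scheme check (hypothetical helper, the
component plumbing left out):

  // URI schemes the kiosk command will accept; everything else,
  // including javascript: and data:, is rejected.
  var ALLOWED_SCHEMES = ["file", "https"];

  function isAllowedURI(uriString) {
    var m = /^([a-z][a-z0-9+.-]*):/i.exec(uriString);
    if (!m)
      return false;               // no scheme at all, reject
    return ALLOWED_SCHEMES.indexOf(m[1].toLowerCase()) != -1;
  }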
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: Bypassing or disabling firefox security to allow 'alwaysLowered' window

2008-08-31 Thread Daniel Veditz
rael wrote:
> netscape.security.PrivilegeManager.enablePrivilege("UniversalPreferencesRead");
> netscape.security.PrivilegeManager.enablePrivilege("UniversalPreferencesWrite");
> netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserWrite");
> netscape.security.PrivilegeManager.enablePrivilege("UniversalFileRead");
> netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
> netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserAccess");
> Do you have those in the function that actually opens the window?
> That's where you need the UniversalBrowserWrite, I think.

Note that you could (and should) request all the permissions you need in
a single call by passing them as space-separated tokens. That is
  netscape.security.PrivilegeManager.enablePrivilege(
      "UniversalPreferencesRead UniversalBrowserAccess");

Then the user will see a single popup to grant all privileges at once.

When you return from the function in which they are enabled they become
disabled again.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: very old, yet only partially accessible security bugs in Bugzilla?

2008-05-28 Thread Daniel Veditz
Alex K. wrote:
> There are some very old (> 2 years) security bugs in Bugzilla that have
> been publicly accessible for over a year, but their proofs of concept
> (exploit testcases) are still not publicly accessible.
>
> I assume this is an oversight, because the testcases were added as
> attachments to individual comments.

No oversight, we keep some testcases private on purpose. We have an 
overwhelming bias toward openness in the Mozilla project, so we don't do 
it lightly. That bias for openness is why, for instance, we don't also 
hide the comments that mention the testcases. We're not hiding the fact 
that we're hiding something, we can defend it and are open to people 
keeping us honest by questioning it.

1) we will never release a full working exploit. We've learned the hard 
way that when we do we end up seeing them used in the wild.

2) sometimes the testcases are provided by people or organizations who 
don't want us to publish their work. This is mostly the same as reason 1 
except it's their judgment rather than ours.

3) Since people don't upgrade immediately (although it is pretty quick 
to get 90% of people) we sometimes keep proofs-of-concept private for a 
few weeks after the release to give people a chance to upgrade before 
the black-hats can figure out how to graft on an exploit. Once most 
people have upgraded there's not a lot of motivation for the black-hats 
to work up the exploit.

Sometimes we get busy and forget to go back and unhide this class later.

If the way to use a vulnerability is obvious from the patch or bug 
description we'll usually publish the testcase with the release anyway. 
If hiding doesn't give any actual benefit then openness wins.

4) if the testcase embodies a unique, generally unknown technique that 
might apply to other vulnerabilities we try to keep that quiet until we 
can make sure we don't have other problems in that area, and to prevent 
giving new tools to black-hats.

Even though _most_ people upgrade immediately, we still appear to have a 
million or so users on Firefox 1.5.0.x and two million or so on down-rev 
versions of Firefox 2. As a percentage of Firefox users that's fairly 
small, but it represents a lot of at-risk people if we start handing 
presents out to the black-hats.

> 1. Mozilla Foundation Security Advisory 2006-05
> https://bugzilla.mozilla.org/show_bug.cgi?id=319847
> Reported December 2005, security tag removed April 2007
> Exploit test cases as attachments in comments #1,2,6,9,15

This is a working exploit, although it was fixed way back for Firefox 
1.5.0.1 and could probably be opened on that grounds. But it also has 
implications on still-secure 
https://bugzilla.mozilla.org/show_bug.cgi?id=295994

> 2. Mozilla Foundation Security Advisory 2006-24
> https://bugzilla.mozilla.org/show_bug.cgi?id=327126
> Reported February 2006, security tag removed April 2007
> Exploit testcases as attachments in comments #1,24

Only one testcase, the attachment numbers are the same. It's a working 
exploit, but old and fairly isolated. I'll look into whether we can 
safely open this one.

-Dan Veditz
___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security