Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Amus via dev-security-policy
Looking into this, we revoked the cert on our end at 2:20 MST (within 24 hours 
after the certificate problem report was processed), but we distribute all of 
our OCSP responses through CDNs. Distribution through the CDN took a little 
over an hour. I couldn't find a definition of "revoked" in the BRs, so I assume 
it means the point when we start distributing revoked responses, not when the 
CDN updates? Sorry for the confusion there.
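(For reference, one way to observe which OCSP answer a given responder
endpoint -- origin or CDN edge -- is returning at a particular moment is
OpenSSL's OCSP client. A minimal sketch in Python follows; the file names and
responder URL are placeholders rather than any real DigiCert infrastructure,
and it assumes the openssl binary is on PATH.)

import subprocess

def ocsp_status(cert_path: str, issuer_path: str, responder_url: str) -> str:
    """Return the raw `openssl ocsp` output for one certificate."""
    result = subprocess.run(
        ["openssl", "ocsp",
         "-issuer", issuer_path,   # CA certificate that signed the leaf
         "-cert", cert_path,       # certificate whose status is being checked
         "-url", responder_url,    # responder (or CDN edge) being queried
         "-resp_text",             # print the decoded response
         "-noverify"],             # quick probe only; skip responder signature checks
        capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

if __name__ == "__main__":
    # Placeholders -- substitute the real leaf, issuer, and the AIA OCSP URL.
    print(ocsp_status("cert.pem", "issuer.pem", "http://ocsp.example.com"))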

Jeremy



On Wednesday, June 21, 2017 at 2:39:57 PM UTC-6, Andrew Ayer wrote:
> On Tue, 20 Jun 2017 21:23:51 +0100
> Rob Stradling via dev-security-policy
>  wrote:
> 
> > [CC'ing rev...@digicert.com, as per 
> > https://ccadb-public.secure.force.com/mozillacommunications/CACommResponsesOnlyReport?CommunicationId=a05o03WrzBC=Q00028]
> > 
> > Annie,
> > 
> > "but these have been known about and deemed acceptable for years"
> > 
> > Known about by whom?  Deemed acceptable by whom?  Until the CA
> > becomes aware of a key compromise, the CA will not know that the
> > corresponding certificate(s) needs to be revoked.
> > 
> > Thanks for providing the Spotify example.  I've just found the 
> > corresponding certificate (issued by DigiCert) and submitted it to
> > some CT logs.  It's not yet revoked:
> > https://crt.sh/?id=158082729
> > 
> > https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does 
> > appear to be the corresponding private key.
> 
> 24 hours later, this certificate is still not revoked, so DigiCert is
> now in violation of section 4.9.1.1 of the BRs.
> 
> Regards,
> Andrew



Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Andrew Ayer via dev-security-policy
On Tue, 20 Jun 2017 21:23:51 +0100
Rob Stradling via dev-security-policy
 wrote:

> [CC'ing rev...@digicert.com, as per 
> https://ccadb-public.secure.force.com/mozillacommunications/CACommResponsesOnlyReport?CommunicationId=a05o03WrzBC=Q00028]
> 
> Annie,
> 
> "but these have been known about and deemed acceptable for years"
> 
> Known about by whom?  Deemed acceptable by whom?  Until the CA
> becomes aware of a key compromise, the CA will not know that the
> corresponding certificate(s) needs to be revoked.
> 
> Thanks for providing the Spotify example.  I've just found the 
> corresponding certificate (issued by DigiCert) and submitted it to
> some CT logs.  It's not yet revoked:
> https://crt.sh/?id=158082729
> 
> https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does 
> appear to be the corresponding private key.

24 hours later, this certificate is still not revoked, so DigiCert is
now in violation of section 4.9.1.1 of the BRs.

Regards,
Andrew


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread andrewm.bpi--- via dev-security-policy
On Wednesday, June 21, 2017 at 1:35:13 PM UTC-5, Matthew Hardeman wrote:
> Regarding localhost access, you are presently incorrect.  The browsers do not 
> allow access to localhost via insecure websocket if the page loads from a 
> secure context.  (Chrome and Firefox at least, I believe do not permit this 
> presently.)  I do understand that there is some question as to whether they 
> may change that. 

Right, I wasn't talking about WebSockets in particular, but about any possible 
form of direct communication between the web app and desktop application. 
That's why I pointed to plain old HTTP requests as an example.

> As for whether or not access to localhost from an externally sourced web site 
> is "inherently a bad thing".  I understand that there are downsides to 
> proxying via the server in the middle in order to communicate back and forth 
> with the locally installed application.  Having said that, there is a serious 
> advantage:
> 
> From a security perspective, having the application make and maintain a 
> connection or connections out to the server that will act as the 
> intermediary between the website and the application allows for the network 
> administrator to identify that there is an application installed that is 
> being manipulated and controlled by an outside infrastructure.  This allows 
> for visibility to the fact that it exists and allows for appropriate 
> mitigation measures if any are needed.
> 
> For a website to silently contact a server application running on the 
> loopback and influence that software while doing so in a manner invisible to 
> the network infrastructure layer is begging to be abused as an extremely 
> covert command and control architecture when the right poorly written 
> software application comes along.

I guess I don't completely understand what your threat model here is. Are you 
saying you're worried about users installing insecure applications that allow 
remote code execution for any process that can send HTTP requests to localhost?

Or are you saying you're concerned about malware already installed on the 
user's computer using this mechanism for command and control?

Both of those are valid concerns. I'm not really sure whether they're 
significant enough, though, to justify breaking functionality, since they both 
require the user to already be compromised in some way before they're of any 
use to attackers. That said, perhaps requiring a permissions prompt of some 
kind before allowing requests to localhost may be worth considering...

As I said though, this is kinda straying off topic. If the ability of web apps 
to communicate with localhost is something that concerns you, consider starting 
a new topic on this mailing list so we can discuss that in detail without 
interfering with the discussion regarding TLS certificates here.


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Jeremy Rowley via dev-security-policy
And a common practice. Old Microsoft documentation used to recommend it.

> On Jun 21, 2017, at 12:22 PM, Santhan Raj via dev-security-policy 
>  wrote:
> 
> On Wednesday, June 21, 2017 at 12:02:51 PM UTC-7, Jonathan Rudenberg wrote:
>>> On Jun 21, 2017, at 14:41, urijah--- via dev-security-policy 
>>>  wrote:
>>> 
>>> Apparently, in at least one case, the certificate was issued directly(!) to 
>>> localhost by Symantec.
>>> 
>>> https://news.ycombinator.com/item?id=14598262
>>> 
>>> subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at 
>>> www.verisign.com/rpa (c)05/CN=localhost
>>> issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at 
>>> https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
>>> reply
>>> 
>>> Is this a known incident?
>> 
>> Here is the (since expired) certificate: 
>> https://crt.sh/?q=07C4AD287B850CAA3DD89656937DB1217067407AA8504A10382A8AD3838D153F
> 
> As bad as it may sound, issuing certs for internal server name from a public 
> chain was allowed until Oct 2015 (as per BR).


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Santhan Raj via dev-security-policy
On Wednesday, June 21, 2017 at 12:02:51 PM UTC-7, Jonathan Rudenberg wrote:
> > On Jun 21, 2017, at 14:41, urijah--- via dev-security-policy 
> >  wrote:
> > 
> > Apparently, in at least one case, the certificate was issued directly(!) to 
> > localhost by Symantec.
> > 
> > https://news.ycombinator.com/item?id=14598262
> > 
> > subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at 
> > www.verisign.com/rpa (c)05/CN=localhost
> > issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at 
> > https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
> > reply
> > 
> > Is this a known incident?
> 
> Here is the (since expired) certificate: 
> https://crt.sh/?q=07C4AD287B850CAA3DD89656937DB1217067407AA8504A10382A8AD3838D153F

As bad as it may sound, issuing certs for internal server names from a public 
chain was allowed until Oct 2015 (per the BRs).


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Jonathan Rudenberg via dev-security-policy

> On Jun 21, 2017, at 14:41, urijah--- via dev-security-policy 
>  wrote:
> 
> Apparently, in at least one case, the certificate was issued directly(!) to 
> localhost by Symantec.
> 
> https://news.ycombinator.com/item?id=14598262
> 
> subject=/C=US/ST=Florida/L=Melbourne/O=AuthenTec/OU=Terms of use at 
> www.verisign.com/rpa (c)05/CN=localhost
> issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at 
> https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
> reply
> 
> Is this a known incident?

Here is the (since expired) certificate: 
https://crt.sh/?q=07C4AD287B850CAA3DD89656937DB1217067407AA8504A10382A8AD3838D153F


Re: On GitHub, Leaked Keys, and getting practical about revocation

2017-06-21 Thread Hanno Böck via dev-security-policy
On Wed, 21 Jun 2017 10:40:01 -0700 (PDT)
Matthew Hardeman via dev-security-policy
 wrote:

> Through a little Google digging, I find numerous comments and
> references from well informed parties going back quite several years
> lamenting the poor state of support of OCSP stapling in both Apache
> HTTPD and NGINX.  I'm well aware of the rising power that is Caddy,
> but it's not there yet.  The whole ecosystem could be greatly helped
> by making the default shipping versions of those two daemons in the
> major distros be ideal OCSP-stapling ready.

There is some movement here for Apache; see the discussion over at the
Apache dev list:
https://lists.apache.org/thread.html/1a61e9dfbd685c4102b097e8189bccb7d5da39bf9f32fcbe7407a760@%3Cdev.httpd.apache.org%3E

I'm slightly optimistic that we'll have a better stapling
implementation in Apache soon.
CII is also interested in funding efforts that improve the state of OCSP
stapling.

-- 
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Matthew Hardeman via dev-security-policy
On Wednesday, June 21, 2017 at 12:41:53 PM UTC-5, andre...@gmail.com wrote:

> I feel like this is getting sort of off-topic. Web pages can communicate 
> directly with applications on the local machine regardless of whether they 
> abuse certificates in this way or not. (Such as, for example, by using plain 
> old HTTP.) The question of whether or not they should be able to do that is a 
> separate topic IMO.
> 
> Certificate abuse aside, I disagree with your assertion that this is 
> inherently a bad thing. Even if browsers were to block web apps from 
> communicating with localhost, they could still achieve pretty much the same 
> thing by using an external server as a proxy between the web app and the 
> locally installed application. The only major difference is that with that 
> method they'd be using unnecessary internet bandwidth, introducing a few 
> dozen extra milliseconds of latency, and would be unable to communicate 
> offline - all downsides IMO. If you _really_ want to try blocking this 
> anyway, you could always use a browser extension.


Certificate abuse aside, I suppose I have diverged from the key topic.  I did 
so with the best intentions, in an attempt to respond directly to the questions 
raised by the initiator of this thread as to the use case and how best to 
achieve it.

Regarding localhost access, you are presently incorrect.  The browsers do not 
allow access to localhost via an insecure WebSocket if the page loads from a 
secure context.  (Chrome and Firefox, at least, do not permit this at present, 
I believe.)  I do understand that there is some question as to whether they may 
change that.

As for whether or not access to localhost from an externally sourced web site 
is "inherently a bad thing": I understand that there are downsides to proxying 
via a server in the middle in order to communicate back and forth with the 
locally installed application.  Having said that, there is a serious advantage:

From a security perspective, having the application make and maintain a 
connection or connections out to the server that will act as the intermediary 
between the website and the application allows the network administrator to 
identify that there is an application installed that is being manipulated and 
controlled by an outside infrastructure.  This allows for visibility to the 
fact that it exists and allows for appropriate mitigation measures if any are 
needed.

For a website to silently contact a server application running on the loopback 
and influence that software while doing so in a manner invisible to the network 
infrastructure layer is begging to be abused as an extremely covert command and 
control architecture when the right poorly written software application comes 
along.


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread andrewm.bpi--- via dev-security-policy
On Wednesday, June 21, 2017 at 11:51:27 AM UTC-5, Matthew Hardeman wrote:
> On Wednesday, June 21, 2017 at 4:59:01 AM UTC-5, Ryan Sleevi wrote:
> 
> > 
> > There are several distinct issues:
> > 127.0.0.0/8 (and the associated IPv6 reservations ::1/128)
> > "localhost" (as a single host)
> > "localhost" (as a TLD)
> > 
> > The issues with localhost are (briefly) caught in
> > https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
> > is a degree of uncertainty with ensuring that such resolution does not go
> > over the network. This problem also applies to these services using custom
> > domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
> > (which MAY include "localhost", surprisingly) - mean that a network
> > attacker can use such a certificate to intercept and compromise users, even
> > if it's not 'intended' to be. See
> > https://w3c.github.io/webappsec-secure-contexts/#localhost
> > 
> > 127.0.0.0/8 is a bit more special - that's captured in
> > https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy
> 
> I agree in full that there are several issues with localhost, the IP forms, 
> the loopback subnets, etc.
> 
> Moreover, within my own mind, I simplify these to a single succinct desire:
> 
> Break that entirely.  There is never a time when I want a page served from an 
> external resource to successfully reference anything on my own device or any 
> directly connected subnet of a non-public nature.  (For IPv6, maybe we call 
> that no external resource should successfully reference or access ANY of my 
> connected /64s).
> 
> My comments above were merely to point out why people are acquiring these 
> certificates on public domains that resolve to 127.0.0.1.
> 
> With the exception of the way Plex is/has been doing it (unique certificate 
> acquired via API from server-side for each system, assigning both a 
> certificate and a unique to the system loopback FQDN), these all involve 
> abuses where a private key for a publicly valid certificate is shared with 
> the end-user systems.
> 
> If for no other reason than that normalizes and mainstreams an improper PKI 
> practice, those certificates should be revoked upon discovery.  (It's called 
> a private key for a reason, etc, etc.)   Obviously there are good causes 
> above and beyond that as well.
> 
> I really am interested in seeing mechanisms to achieve this "necessary use 
> case" murdered.  That the web browser with no prompt and no special 
> permission may interact in a non-obvious way with any other software or 
> equipment on my computer or network is far from ideal.
> 
> The use case is also a nasty kludge.  Why listed with a WebSocket on 
> loopback?  Why not just build a client-side WebSocket connection from the 
> application back to the server side, where the server can then push any 
> administrative commands securely back to that application.  I understand that 
> places more burden server-side, but it also provides a much better understood 
> (by the end users and their IT admins) flow of network communications.
> 
> Frankly, if one is incorporating network stack code in their locally running 
> applications and yet want to use the browser as a UI for controlling that 
> application, there's a simple way of still having a browser UI and 
> controlling that application, even pulling in external resources to inform 
> the UI (if you must) -- embed the browser.  Numerous quite embeddable engines 
> exist and are easily used this way.  In those environments, the application 
> developer can write their own rules to avoid the security measures which get 
> in their way while still only creating a security train-wreck within the 
> context and runtime window of their own application.
> 
> Perhaps I come across as somewhat vehement in my disdain for the purportedly 
> necessary use case that people are breaking this rule to achieve, but that's 
> been informed by my experiences over the years of watching people who have no 
> business doing so making security nightmares out of "clever" network hacks.  
> Among the favorites I've seen implemented was a functioning TCP/IP-over-DNS 
> which was built to provide actual (messy, fragmented, slow, but very 
> functional) full internet access to a site with a captive portal that would 
> still permit certain DNS queries to be answered.  
> 
> Short of intentional circumvention of network controls, the kinds of things 
> that are trying to be achieved by these local WebSocket hacks are indistinct 
> and, I believe, can not be technologically differentiated from the very 
> techniques one would use to have a browser act as a willing slave to a 
> network penetration effort.
> 
> Here I favor public humiliation.   Murder every mechanism for achieving the 
> direct goal of "serving the use case for a need for externally sourced 
> material loaded in the browser to communicate with a running local 
> application".  Issue press releases 

On GitHub, Leaked Keys, and getting practical about revocation

2017-06-21 Thread Matthew Hardeman via dev-security-policy
Hi all,

I'm sure questions of certificates leaked to the public via GitHub and other 
file sharing / code sharing / deployment repository hosting and sharing sites 
have come up before, but last night I spent a couple of hours constructing 
various search criteria which I don't think were even especially clever, but 
still I was shocked and amazed at what I found:

At least 10 different Apple Development and Production environment certificates 
and key pairs.  For each of the 10 or more that I counted, I validated that the 
certificate is within its validity period and that the key I found matches the 
public key information in the certificate.  Most of these 
certificates are TLS Client Authentication certificates which also have 
additional Apple proprietary extended key usages.  These certificates are 
utilized for authenticating to the Apple Push Notification System.  A couple of 
certificates were Apple Developer ID certificates appropriate for development 
and production environment deployment of executable code to Apple devices.  
(Those developer ID certificates I have reported to Apple for revocation.)  
There were more Apple Push authentication certificates than I cared to write up 
and send over.

I was shocked at the level of improper distribution and leaking of these keys 
and certificates.

Once in a while, the key was represented in encrypted form.  In _every_ 
instance in which I found an encrypted key and dug further, either a piece of 
code, a configuration file, or sometimes a README-KEY-PASSWORD.txt (or similar) 
within the same repository supplied what was needed to successfully decrypt the 
key.

Additionally, I did find some TLS server certificates.  There were many more 
that I did not bother to carefully analyze.  Some were expired.  One was an 
in-validity-window DV certificate issued by Let's Encrypt.  Utilizing the 
certificate's private key, I was able to successfully use the Let's Encrypt 
ACME API to automatically request revocation of that certificate.  Minutes 
later, I verified that OCSP responses for that certificate were, in fact, 
indicating that the certificate was revoked.

Of course, revocation even with a really nice OCSP responder system is not very 
effective today.

I have this suspicion that human nature dictates that eliminating these kinds 
of key material leaks is not even a goal worth having.  Disappointment, I 
suspect, lives down that road.

Because live OCSP checks for certificates en masse are not appealing to the 
CAs, the browsers, or the end users (consequences of network delay, 
reliability, etc.), revocation means very little pragmatically today.

This only reinforces the value and importance of either/both:

- Quite short lived certificates, automatically replaced and deployed, to 
reduce the risks associated with key compromise

and/or

- OCSP must-staple, which I believe is only pragmatically gated at the moment 
by a number of really poor server-side implementations of OCSP stapling.  
Servers must cache good responses.  Servers must use those while awaiting a new 
good response further into the OCSP response validity period.  Servers must 
validate the response and not serve random garbage as if it were an OCSP 
response.  Etc, etc.  
Ryan Sleevi's work documenting the core issues is clearly a step in the right 
direction.

Both NGINX's and Apache HTTPD's implementations of OCSP stapling are lacking in 
several material respects.
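(To make the caching and validation behavior described above concrete, here is
a rough sketch of the refresh logic an ideal stapler needs: keep the last good
response, only promote a newly fetched response after it verifies, and retry
sooner on failure. It is standalone Python driving the openssl CLI, not actual
nginx/httpd code; the paths and responder URL are placeholders.)

import os
import subprocess
import time

CERT, ISSUER, CHAIN = "cert.pem", "issuer.pem", "chain.pem"   # placeholders
RESPONDER = "http://ocsp.example.com"                         # placeholder
STAPLE_FILE = "staple.der"          # response actually handed to the TLS stack
CANDIDATE = "staple.candidate.der"  # freshly fetched response, not yet trusted

def refresh_once() -> bool:
    """Fetch a fresh OCSP response; promote it only if it verifies and says 'good'."""
    proc = subprocess.run(
        ["openssl", "ocsp",
         "-issuer", ISSUER, "-cert", CERT, "-url", RESPONDER,
         "-CAfile", CHAIN,          # verify the responder's signature
         "-respout", CANDIDATE],    # write the DER response to a staging file
        capture_output=True, text=True, timeout=30)
    ok = (proc.returncode == 0
          and "Response verify OK" in proc.stderr
          and CERT + ": good" in proc.stdout)
    if ok:
        os.replace(CANDIDATE, STAPLE_FILE)  # atomic swap; never serve a half-written file
    return ok

while True:
    # Keep serving the last known-good response; on failure, retry sooner
    # rather than dropping the staple or stapling garbage.
    time.sleep(3600 if refresh_once() else 300)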

It would certainly be a significant undertaking, but I believe that 
organizations who are working to ensure a secure Web (and that reap the 
benefits of a secure and trustworthy web) could do much to achieve better 
deployment of OCSP stapling in relatively short time:

1.  Direct contribution of funds / bounty to the core developers of each of 
those two web server projects for building a server-side OCSP stapling 
implementation which is trivial to configure and which meets the needs of an 
ideal implementation with respect to caching of good results, validating new 
responses to staple, scheduling the deployment of successful new responses or 
scheduling retries of fails, etc.  Insist that the code be written with a view 
to maximal back-port capability for said implementations.

2.  If such contributions are infeasible, funding competent external 
development of code which achieves the same as item 1 above.

3.  High level engagement with major distributions.  Tackle the technical and 
administrative hurdles to get these changes into the stable and development 
builds of all currently shipping versions of at least RedHat's, Canonical's, 
and Debian's distributions.  Get these changes into the standard default 
version httpd and nginx updates.

4.  Same as above but for common docker images, prevalent VM images, etc.

5.  Ensure that the browsers are ready to support and enforce fail-hard on 
certificates which feature the OCSP must-staple extension.

6.  Monitor progress in readiness and incentivize deployment of OCSP 

Re: Unknown Intermediates

2017-06-21 Thread Tavis Ormandy via dev-security-policy
FYI, I'm submitting these right now, it seems to be working, here's an
example

https://crt.sh/?q=1eb6ec6e6c45663f3bb1b2f140961bbf3352fc8741ef835146d3a8a2616ee28f

Tavis.
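(For anyone else doing bulk submissions: the RFC 6962 submission endpoint is
just an HTTP POST of the base64-encoded chain, leaf first, followed by its
intermediates up to a root the log accepts. A minimal Python sketch; the log
URL and file names are placeholders.)

import base64, json, urllib.request

def pem_to_b64_der(pem_path: str) -> str:
    """Strip the PEM armor and return the base64 DER body as one string."""
    with open(pem_path) as f:
        return "".join(line.strip() for line in f if "-----" not in line)

def submit_chain(log_url: str, pem_paths: list) -> dict:
    """POST a leaf-first chain to <log>/ct/v1/add-chain and return the SCT JSON."""
    body = json.dumps({"chain": [pem_to_b64_der(p) for p in pem_paths]}).encode()
    req = urllib.request.Request(
        log_url.rstrip("/") + "/ct/v1/add-chain",
        data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder log and files: leaf first, then intermediates.
    print(submit_chain("https://ct.example.com/log",
                       ["leaf.pem", "intermediate.pem"]))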

On Mon, Jun 19, 2017 at 12:56 PM, Tavis Ormandy  wrote:

> I noticed there's an apparently valid facebook.com certificate in there (
> 61b1526f9d75775c3d533382f36527c9.pem). This is surprising to me, that
> seems like it would be in CT already - so maybe I don't know what I'm doing.
>
> Let me know if I've misunderstood something.
>
> Tavis.
>
>
> On Mon, Jun 19, 2017 at 12:41 PM, Tavis Ormandy  wrote:
>
>> Thanks Alex, I took a look, it looks like the check pings crt.sh - is
>> doing that for a large number of certificates acceptable Rob?
>>
>> I made a smaller set, the certificates that have 'SSL server: Yes' or
>> 'Any Purpose : Yes', there were only a few thousand that verified, so I
>> just checked those and found 551 not in crt.sh.
>>
>> (The *vast* majority are code signing certificates, many are individual
>> apple developer certificates)
>>
>> Is this useful? if not, what key usage is interesting?
>>
>> https://lock.cmpxchg8b.com/ServerOrAny.zip
>>
>> Tavis.
>>
>> On Mon, Jun 19, 2017 at 7:03 AM, Alex Gaynor  wrote:
>>
>>> If you're interested in playing around with submitting them yourself, or
>>> checking if they're already submitted, I've got some random tools for
>>> working with CT: https://github.com/alex/ct-tools
>>>
>>> Specifically ct-tools check  will get what
>>> you want. It's all serial, so for 8M certs you probably want to Bring Your
>>> Own Parallelism (I should fix this...)
>>>
>>> Alex
>>>
>>> On Mon, Jun 19, 2017 at 6:51 AM, Rob Stradling via dev-security-policy <
>>> dev-security-policy@lists.mozilla.org> wrote:
>>>
 On 16/06/17 20:11, Andrew Ayer via dev-security-policy wrote:

> On Fri, 16 Jun 2017 10:29:45 -0700 Tavis Ormandy wrote:
>
 

> Is there an easy way to check which certificates from my set you're
>> missing? (I'm not a PKI guy, I was collecting unusual extension OIDs
>> for fuzzing).
>>
>> I collected these from public sources, so can just give you my whole
>> set if you already have tools for importing them and don't mind
>> processing them, I have around ~8M (mostly leaf) certificates, the
>> set with isCa will be much smaller.
>>
>
> Please do post the whole set.  I suspect there are several people on
> this list (including myself and Rob) who have the tools and experience
> to process large sets of certificates and post them to public
> Certificate Transparency logs (whence they will be fed into crt.sh).
>
> It would be useful to include the leaf certificates as well, to catch
> CAs which are engaging in bad practices such as signing non-SSL certs
> with SHA-1 under an intermediate that is capable of issuing SSL
> certificates.
>
> Thanks a bunch for this!
>

 +1

 Tavis, please do post the whole set.  And thanks!

 --
 Rob Stradling
 Senior Research & Development Scientist
 COMODO - Creating Trust Online

>>>
>>>
>>
>


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Matthew Hardeman via dev-security-policy
On Wednesday, June 21, 2017 at 4:59:01 AM UTC-5, Ryan Sleevi wrote:

> 
> There are several distinct issues:
> 127.0.0.0/8 (and the associated IPv6 reservations ::1/128)
> "localhost" (as a single host)
> "localhost" (as a TLD)
> 
> The issues with localhost are (briefly) caught in
> https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
> is a degree of uncertainty with ensuring that such resolution does not go
> over the network. This problem also applies to these services using custom
> domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
> (which MAY include "localhost", surprisingly) - mean that a network
> attacker can use such a certificate to intercept and compromise users, even
> if it's not 'intended' to be. See
> https://w3c.github.io/webappsec-secure-contexts/#localhost
> 
> 127.0.0.0/8 is a bit more special - that's captured in
> https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

I agree in full that there are several issues with localhost, the IP forms, the 
loopback subnets, etc.

Moreover, within my own mind, I simplify these to a single succinct desire:

Break that entirely.  There is never a time when I want a page served from an 
external resource to successfully reference anything on my own device or any 
directly connected subnet of a non-public nature.  (For IPv6, maybe we call 
that no external resource should successfully reference or access ANY of my 
connected /64s).

My comments above were merely to point out why people are acquiring these 
certificates on public domains that resolve to 127.0.0.1.

With the exception of the way Plex is/has been doing it (unique certificate 
acquired via API from server-side for each system, assigning both a certificate 
and a unique-to-the-system loopback FQDN), these all involve abuses where a 
private key for a publicly valid certificate is shared with the end-user 
systems.

If for no other reason than that it normalizes and mainstreams an improper PKI 
practice, those certificates should be revoked upon discovery.  (It's called a 
private key for a reason, etc, etc.)   Obviously there are good causes above 
and beyond that as well.

I really am interested in seeing mechanisms to achieve this "necessary use 
case" murdered.  That the web browser with no prompt and no special permission 
may interact in a non-obvious way with any other software or equipment on my 
computer or network is far from ideal.

The use case is also a nasty kludge.  Why listen with a WebSocket on loopback?  
Why not just build a client-side WebSocket connection from the application back 
to the server side, where the server can then push any administrative commands 
securely back to that application?  I understand that places more burden 
server-side, but it also provides a much better understood (by the end users 
and their IT admins) flow of network communications.
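(A rough sketch of that alternative -- the locally installed application
dialing out over wss:// and acting on commands pushed by its own vendor's
server -- assuming the third-party Python websockets package; the endpoint URL
and message format are placeholders, and the command handling is obviously
application-specific.)

import asyncio
import json
import websockets

CONTROL_URL = "wss://control.example.com/agent"   # placeholder vendor endpoint

async def handle(command: dict) -> dict:
    """Application-specific command handling goes here."""
    return {"ok": True, "echo": command}

async def run_agent() -> None:
    # Reconnect forever; the network admin sees one outbound TLS connection
    # to a known host instead of an invisible loopback listener.
    while True:
        try:
            async with websockets.connect(CONTROL_URL) as ws:
                async for raw in ws:
                    reply = await handle(json.loads(raw))
                    await ws.send(json.dumps(reply))
        except Exception:
            await asyncio.sleep(5)   # back off briefly, then reconnect

if __name__ == "__main__":
    asyncio.run(run_agent())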

Frankly, if one is incorporating network stack code in their locally running 
applications and yet wants to use the browser as a UI for controlling that 
application, there's a simple way of still having a browser UI and controlling 
that application, even pulling in external resources to inform the UI (if you 
must) -- embed the browser.  Numerous quite embeddable engines exist and are 
easily used this way.  In those environments, the application developer can 
write their own rules to avoid the security measures which get in their way 
while still only creating a security train-wreck within the context and runtime 
window of their own application.

Perhaps I come across as somewhat vehement in my disdain for the purportedly 
necessary use case that people are breaking this rule to achieve, but that's 
been informed by my experiences over the years of watching people who have no 
business doing so making security nightmares out of "clever" network hacks.  
Among the favorites I've seen implemented was a functioning TCP/IP-over-DNS 
which was built to provide actual (messy, fragmented, slow, but very 
functional) full internet access to a site with a captive portal that would 
still permit certain DNS queries to be answered.  

Short of intentional circumvention of network controls, the kinds of things 
that are trying to be achieved by these local WebSocket hacks are indistinct 
and, I believe, can not be technologically differentiated from the very 
techniques one would use to have a browser act as a willing slave to a network 
penetration effort.

Here I favor public humiliation.   Murder every mechanism for achieving the 
direct goal of "serving the use case for a need for externally sourced material 
loaded in the browser to communicate with a running local application".  Issue 
press releases naming and shaming every attempt along the way.

Goodness.  I'll stop now before I become really vulgar.

Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Andrew Meyer via dev-security-policy
Does anyone have an idea of how good browser support is for the W3C Secure
Contexts standard? Could it be that vendors are abusing certificates in
this way in order to get around communications with loopback addresses
being blocked as insecure mixed content by non-conforming browsers?

On Wed, Jun 21, 2017, 4:59 AM Ryan Sleevi  wrote:

> On Wed, Jun 21, 2017 at 5:32 AM, Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > I believe the underlying issue for many of these cases pertains to
> > initiating a connection to a WebSocket running on some port on 127.0.0.1
> as
> > a sub-resource of an external web page served up from an external public
> > web server via https.
> >
> > I believe that presently both Firefox and Chrome prevent that from
> > working, rejecting a non-secure ws:/ URL as mixed content.
>
>
> There are several distinct issues:
> 127.0.0.0/8 (and the associated IPv6 reservations ::1/128)
> "localhost" (as a single host)
> "localhost" (as a TLD)
>
> The issues with localhost are (briefly) caught in
> https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
> is a degree of uncertainty with ensuring that such resolution does not go
> over the network. This problem also applies to these services using custom
> domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
> (which MAY include "localhost", surprisingly) - mean that a network
> attacker can use such a certificate to intercept and compromise users, even
> if it's not 'intended' to be. See
> https://w3c.github.io/webappsec-secure-contexts/#localhost
>
> 127.0.0.0/8 is a bit more special - that's captured in
> https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy
>


Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Peter Bowen via dev-security-policy
On Wed, Jun 21, 2017 at 7:15 AM, Gervase Markham via
dev-security-policy  wrote:
> On 21/06/17 13:13, Doug Beattie wrote:
>>> Do they have audits of any sort?
>>
>> There had not been any audit requirements for EKU technically
>> constrained CAs, so no, there are no audits.
>
> In your view, having an EKU limiting the intermediate to just SSL or to
> just email makes it a technically constrained CA, and therefore not
> subject to audit under any root program?
>
> I ask because Microsoft's policy at http://aka.ms/auditreqs says:
>
> "Microsoft requires that every CA submit evidence of a Qualifying Audit
> on an annual basis for the CA and any non-limited root within its PKI
> chain."
>
> In your view, are these two intermediates, which are constrained only by
> having the email and client auth EKUs, "limited" or "non-limited"?

What is probably not obvious is that there is a very specific
definition of non-limited with respect to the Microsoft policy.  The
definition is unfortunately contained in the contract, which is
confidential, but the definition makes it clear that these CAs are out
of scope for audits.

Thanks,
Peter


RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Wednesday, June 21, 2017 4:16 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

> In your view, having an EKU limiting the intermediate to just SSL or to just
> email makes it a technically constrained CA, and therefore not subject to
> audit under any root program?

The BRs clearly specify SSL CAs without name constraints are required to follow 
the BRs and must be audited.

> I ask because Microsoft's policy at http://aka.ms/auditreqs says:
> 
> "Microsoft requires that every CA submit evidence of a Qualifying Audit on
> an annual basis for the CA and any non-limited root within its PKI chain."
> 
> In your view, are these two intermediates, which are constrained only by
> having the email and client auth EKUs, "limited" or "non-limited"?
>

Yes, I'd call these Secure mail CAs limited.

> >>> The other customer complies the prior words in the Mozilla policy
> >> regarding "Business Controls".
> 
> By implication, and reading your previous emails, are you saying that the 
> first
> customer does not comply with those words?

The first customer does comply with "Business Controls", in our view.  We have 
contracts that specify what they are allowed to do.

> > That is correct.  Enforcement is via contractual/business controls which is
> compliant with the current policy, as vague and weak as that is (and you've
> previously acknowledged).  Moving from this level of control to being
> audited or having name constraints will take more time that just a couple of
> months.
> 
> Leaving aside the requirements of other root programs, I agree this
> arrangement with the second customer is compliant with our current policy.
> For the new policy, they have 3 options: a) get an audit, b) use a name-
> constrained intermediate, or c) move to a hosted service which limits them
> to an approved set of domains.

Agreed, there are options for both of these customers, and we're comfortable we 
can make this happen within 12 months, with another 12 months to keep the CA 
live for cert management before then revoking the CA.

> Consistent with the principles outlined for Symantec regarding business
> continuity, the fact that GlobalSign does not have the capability to provide 
> c)
> should not be a factor in us determining how long we should allow this
> particular situation to continue.
> 
> It's worth noting that if we had discovered this situation for SSL - that an
> unconstrained intermediate or uncontrolled power of issuance had been
> given to a company with no audit - we would be requiring the intermediate
> be revoked today, and probably taking further action as well.

Agree

> > Two  further points:
> > 1) It’s not clear of email applications work with name constrained CAs.
> Some have reported email applications do not work, however, I have not
> tested this case.
> 
> That sounds like something you might want to investigate as a matter of
> urgency :-)

Are there any other CAs or mail vendors that have tested name-constrained 
issuing CAs? If name-constrained CAs don’t work with some or all of the 
mail applications, it seems like we might as well recommend a change to the 
requirement.






Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Gervase Markham via dev-security-policy
On 21/06/17 13:13, Doug Beattie wrote:
>> Do they have audits of any sort?
> 
> There had not been any audit requirements for EKU technically 
> constrained CAs, so no, there are no audits.

In your view, having an EKU limiting the intermediate to just SSL or to
just email makes it a technically constrained CA, and therefore not
subject to audit under any root program?

I ask because Microsoft's policy at http://aka.ms/auditreqs says:

"Microsoft requires that every CA submit evidence of a Qualifying Audit
on an annual basis for the CA and any non-limited root within its PKI
chain."

In your view, are these two intermediates, which are constrained only by
having the email and client auth EKUs, "limited" or "non-limited"?
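(For concreteness, both properties under discussion -- an EKU restriction and
an RFC 5280 name-constraints extension -- can be read directly from the
intermediate itself. A minimal sketch using the Python cryptography package;
the file name is a placeholder.)

from cryptography import x509

def describe_constraints(path: str) -> None:
    """Print the EKU and name-constraint extensions of a CA certificate, if any."""
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        print("EKU:", [oid.dotted_string for oid in eku])
    except x509.ExtensionNotFound:
        print("EKU: absent (no usage restriction)")
    try:
        nc = cert.extensions.get_extension_for_class(x509.NameConstraints).value
        print("Name constraints, permitted:", nc.permitted_subtrees)
        print("Name constraints, excluded:", nc.excluded_subtrees)
    except x509.ExtensionNotFound:
        print("Name constraints: absent")

if __name__ == "__main__":
    describe_constraints("intermediate.pem")   # placeholder file name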

>>> The other customer complies the prior words in the Mozilla policy
>> regarding "Business Controls".

By implication, and reading your previous emails, are you saying that
the first customer does not comply with those words?

> That is correct.  Enforcement is via contractual/business controls which is 
> compliant with the current policy, as vague and weak as that is (and you've 
> previously acknowledged).  Moving from this level of control to being audited 
> or having name constraints will take more time that just a couple of months.  

Leaving aside the requirements of other root programs, I agree this
arrangement with the second customer is compliant with our current
policy. For the new policy, they have 3 options: a) get an audit, b) use
a name-constrained intermediate, or c) move to a hosted service which
limits them to an approved set of domains.

Consistent with the principles outlined for Symantec regarding business
continuity, the fact that GlobalSign does not have the capability to
provide c) should not be a factor in us determining how long we should
allow this particular situation to continue.

It's worth noting that if we had discovered this situation for SSL -
that an unconstrained intermediate or uncontrolled power of issuance had
been given to a company with no audit - we would be requiring the
intermediate be revoked today, and probably taking further action as well.

> Two  further points:
> 1) It’s not clear of email applications work with name constrained CAs.  Some 
> have reported email applications do not work, however, I have not tested this 
> case. 

That sounds like something you might want to investigate as a matter of
urgency :-)

> Both of the customers are large US based companies with contractual 
> obligations to only issue secure email certificates to domains which they own 
> and control so I hope we can come to an agreement on the phased plan.

The size or geographic location of a company is not necessarily
correlated to their competence in handling unconstrained (for email)
intermediate CAs correctly. Our default assumption must be that, without
audit, they don't know how to handle it properly.

Gerv


RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-21 Thread Doug Beattie via dev-security-policy


> -Original Message-
> From: Gervase Markham [mailto:g...@mozilla.org]
> Sent: Tuesday, June 20, 2017 9:12 PM
> To: Doug Beattie ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: Root Store Policy 2.5: Call For Review and Phase-In Periods
> > We have 2 customers that can issue Secure Email certificates that are
> > not technically constrained with name Constraints (the EKU is
> > constrained to Secure Email and ClientAuth).
> > One customer operates
> > the CA within their environment and has been doing so for several
> > years. Even though we've been encouraging them to move back to a Name
> > Constrained CA or a hosted service,
> 
> To be clear: this customer has the ability to issue email certificates for any
> email address on the planet, and they control their own intermediate in
> their own infrastructure?

Yes, but see qualifications below.

> Do they have audits of any sort?

There had not been any audit requirements for EKU technically constrained CAs, 
so no, there are no audits.

> What are their objections to moving to a hosted service?

They are integrated with a Microsoft CA and moving will take some time to 
integrate with a different delivery of certificates.  It will just take some 
time.

> > The other customer complies the prior words in the Mozilla policy
> > regarding "Business Controls".  We have an agreement with them where we
> > issue them Secure Email certificates from our Infrastructure for domains
> > they host and are contractually bound to using those certificates only for the
> > matching mail account.  Due to the number of different domains managed
> > and fact they obtain certificates on behalf of the users, it's difficult to
> > enforce validation of the email address.  We have plans to add features to
> > this issuance platform that will resolve this, but not in the near term.
> 
> So even though this issuance is from your infrastructure, there are no
> restrictions on the domains they can request issuance from?

That is correct.  Enforcement is via contractual/business controls which is 
compliant with the current policy, as vague and weak as that is (and you've 
previously acknowledged).  Moving from this level of control to being audited 
or having name constraints will take more time that just a couple of months.  

Two further points:
1) It’s not clear whether email applications work with name constrained CAs.  
Some have reported that email applications do not work; however, I have not 
tested this case. 
2) It’s unlikely that a secure email cert which is not compliant with the NC 
extension would be identified by email applications as non-compliant.  Again, 
this is something I haven't tested either. Maybe some others have first-hand 
knowledge of how email applications work (or not) with NC CAs?

Both of the customers are large US-based companies with contractual obligations 
to only issue secure email certificates to domains which they own and control, 
so I hope we can come to an agreement on the phased plan.

> Gerv


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Ryan Sleevi via dev-security-policy
On Wed, Jun 21, 2017 at 5:32 AM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I believe the underlying issue for many of these cases pertains to
> initiating a connection to a WebSocket running on some port on 127.0.0.1 as
> a sub-resource of an external web page served up from an external public
> web server via https.
>
> I believe that presently both Firefox and Chrome prevent that from
> working, rejecting a non-secure ws:/ URL as mixed content.


There are several distinct issues:
127.0.0.0/8 (and the associated IPv6 reservations ::1/128)
"localhost" (as a single host)
"localhost" (as a TLD)

The issues with localhost are (briefly) caught in
https://tools.ietf.org/html/draft-west-let-localhost-be-localhost - there
is a degree of uncertainty with ensuring that such resolution does not go
over the network. This problem also applies to these services using custom
domains that resolve to 127.0.0.1 - the use of a publicly resolvable domain
(which MAY include "localhost", surprisingly) - means that a network
attacker can use such a certificate to intercept and compromise users, even
if it's not 'intended' to be. See
https://w3c.github.io/webappsec-secure-contexts/#localhost

127.0.0.0/8 is a bit more special - that's captured in
https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread Matthew Hardeman via dev-security-policy
On Wednesday, June 21, 2017 at 1:43:18 AM UTC-5, jacob.hoff...@gmail.com wrote:
> > It's been an on-going question for me, since the use case (as a software
> > developer) is quite real: if you serve a site over HTTPS and it needs to
> > communicate with a local client application then you need this (or, you
> > need to manage your own CA, and ask every person to install a
> > certificate on all their devices)
> 
> I think it's both safe and reasonable to talk to localhost over HTTP rather 
> than HTTPS, because any party that can intercept communications to localhost 
> presumably has nearly full control of your machine anyhow.
> 
> There's the question of mixed content blocking: If you have an HTTPS host 
> URL, can you embed or otherwise communicate with a local HTTP URL? AFAICT 
> both Chrome and Firefox will allow that: 
> https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e
>  and https://bugzilla.mozilla.org/show_bug.cgi?id=903966. I haven't checked 
> other browsers. Note that you have to use "127.0.0.1" rather than 
> "localhost." See 
> https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-03 for why.
> 
> So I think the answer to your underlying question is: Use HTTP on localhost 
> instead of a certificate with a publicly resolvable name and a compromised 
> private key. The latter is actually very risky because a MitM attacker can 
> change the resolution of the public name to something other than 127.0.0.1, 
> and because the private key is compromised, the attacker can also 
> successfully complete a TLS handshake with a valid certificate. So the 
> technique under discussion here actually makes web<->local communications 
> less secure, not more.
> 
> Also, as a reminder: make sure that the code operating on localhost carefully 
> restricts which web origins are allowed to talk to it, for instance by using 
> CORS with the Access-Control-Allow-Origin header: 
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS.

I believe the underlying issue for many of these cases pertains to initiating a 
connection to a WebSocket running on some port on 127.0.0.1 as a sub-resource 
of an external web page served up from an external public web server via https.

I believe that presently both Firefox and Chrome prevent that from working, 
rejecting a non-secure ws:/ URL as mixed content.


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-21 Thread jacob.hoffmanandrews--- via dev-security-policy
> It's been an on-going question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application then you need this (or, you
> need to manage your own CA, and ask every person to install a
> certificate on all their devices)

I think it's both safe and reasonable to talk to localhost over HTTP rather 
than HTTPS, because any party that can intercept communications to localhost 
presumably has nearly full control of your machine anyhow.

There's the question of mixed content blocking: If you have an HTTPS host URL, 
can you embed or otherwise communicate with a local HTTP URL? AFAICT both 
Chrome and Firefox will allow that: 
https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e
 and https://bugzilla.mozilla.org/show_bug.cgi?id=903966. I haven't checked 
other browsers. Note that you have to use "127.0.0.1" rather than "localhost." 
See https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-03 for 
why.

So I think the answer to your underlying question is: Use HTTP on localhost 
instead of a certificate with a publicly resolvable name and a compromised 
private key. The latter is actually very risky because a MitM attacker can 
change the resolution of the public name to something other than 127.0.0.1, and 
because the private key is compromised, the attacker can also successfully 
complete a TLS handshake with a valid certificate. So the technique under 
discussion here actually makes web<->local communications less secure, not more.

Also, as a reminder: make sure that the code operating on localhost carefully 
restricts which web origins are allowed to talk to it, for instance by using 
CORS with the Access-Control-Allow-Origin header: 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS.
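(A minimal sketch of that last point: a loopback-only listener that answers
only an allow-listed web origin, using Python's standard library. The origin
and port are placeholders, and a real implementation would also need to handle
preflight requests and some form of authentication.)

from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGINS = {"https://app.example.com"}   # placeholder: your site's origin

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get("Origin", "")
        if origin not in ALLOWED_ORIGINS:
            # Unknown or missing Origin: refuse, and send no CORS header,
            # so browsers will not expose the response to the page.
            self.send_response(403)
            self.end_headers()
            return
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", origin)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to 127.0.0.1 only -- never 0.0.0.0 -- so nothing off-machine can reach it.
    HTTPServer(("127.0.0.1", 8137), Handler).serve_forever()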