Re: Japan GPKI Root Renewal Request

2016-07-20 Thread Kathleen Wilson
On Friday, May 20, 2016 at 3:33:56 PM UTC-7, Kathleen Wilson wrote:
> Does anyone have questions, concerns, or feedback on this request from the 
> Government of Japan, Ministry of Internal Affairs and Communications, to 
> include the GPKI 'ApplicationCA2 Root' certificate and enable the Websites 
> trust bit?
> 
> Kathleen

I would greatly appreciate it if someone would review and comment on this request.

As always, I appreciate your thoughtful and constructive feedback.

Kathleen
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ISRG Root Inclusion Request

2016-07-20 Thread Kathleen Wilson
On Monday, July 18, 2016 at 4:39:46 PM UTC-7, Kathleen Wilson wrote:
> Therefore, I intend to proceed with closing this discussion and 
> recommending approval in the bug.

Thanks to all of you who participated in this discussion about the request from 
ISRG to include the "ISRG Root X1" root certificate, and turn on the Websites 
trust bit. 

I am not aware of any issues that would prevent us from moving forward with 
this request. Therefore, I will recommend approval in the bug.

https://bugzilla.mozilla.org/show_bug.cgi?id=1204656

Any further follow-up on this request should be added directly to the bug.

Thanks,
Kathleen




Re: About the ACME Protocol

2016-07-20 Thread Patrick Figel
On 20/07/16 04:59, Peter Kurrasch wrote:
> Regarding the on-going development of the spec: I was thinking more 
> about the individual commits on github and less about the IETF 
> process. I presume that most commits will not get much scrutiny but
> a periodic (holistic?) review of the doc is expected to find and 
> resolve conflicts, etc. Is that a fair statement?

Yep, the GitHub repository is not what I would call the canonical source
of the "approved" draft produced by the working group. Implementers
should look at the published drafts (-01, -02, -03) and the final RFC
once it's released.

> The report on the security audit was interesting to read. It's good 
> to see someone even attempted it. In addition to the protocol itself 
> it would be interesting to see an analysis of an ACME server 
> (Boulder, I suppose). Maybe someone will do some pentesting at 
> least?

I'm having difficulties finding a source for this, but I seem to recall
that in addition to the WebTrust audit, ISRG hired an independent
infosec company to perform a pentest/review of boulder. FWIW, I don't
think this is an ACME/Let's Encrypt-specific concern, and I'd personally
be much more worried about the large number of other CAs whose CA
software is closed-source (and thus impossible for anyone to review).

> The 200 LOC is an interesting idea. I assume such an implementation 
> would rely heavily on external libraries for certain functions (e.g. 
> key generation, https handling, validating the TLS certificate chain
> provided by the server, etc.)? If so, does anyone anticipate that
> someone will develop a standalone, all-in-one (or mostly-in-one) 
> client? Is a client expected to do full cert chain validation 
> including revocation checks?

acme-tiny[1] would be an example of a client that comes in at just shy
of 200 LOC. Yes, it definitely makes use of other libraries such as
OpenSSL. I'm not exactly sure what you're referring to with chain
validation/revocation checks. Communication with the CA server happens
over HTTPS, and the client validates the server's certificate chain, if
that's what you mean.
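To make the "200 LOC" point concrete, here is a minimal sketch of one building block such a client has to implement: computing the key authorization string for an http-01 challenge, per the ACME drafts (key authorization = token + "." + JWK thumbprint, with the thumbprint computed as in RFC 7638). This uses only the Python standard library; the sample token and JWK values are made up for illustration.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as JOSE/ACME require.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the canonical JSON serialization of the
    # required JWK members (keys sorted, no whitespace).
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode()).digest())

def key_authorization(token: str, jwk: dict) -> str:
    # The string the client serves at
    # /.well-known/acme-challenge/<token> to satisfy http-01.
    return f"{token}.{jwk_thumbprint(jwk)}"

# Illustrative only: a real client would use the account key's actual
# RSA modulus/exponent here, typically obtained via OpenSSL.
sample_jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7agoebGcQ"}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ", sample_jwk))
```

The heavy lifting (key generation, CSR creation, the HTTPS transport) is delegated to OpenSSL and the standard library, which is how clients like acme-tiny stay so small.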

> In terms of an overly broad, overly general statement, the protocol 
> strikes me as being too new, too immature. There are gaps to be 
> filled, complexities to be distilled, and (unknown) problems to be 
> fixed. I doubt this comes as new information to anyone but I think 
> there's value in recognizing that the protocol has not had the 
> benefit of time for it to reach its full potential.

The IETF process might be far from perfect (and certainly not what
anyone would call fast), but it's currently most likely the best and
most secure way for the internet to come up with new protocols. In the
context of publicly-trusted CAs, I personally doubt that any CA has put
in the same amount of effort for any of their internal or external APIs
for certificate issuance, and past examples show this to be true (see
the recent StartCom fiasco). In that context, I don't see why we should
allow CAs to continue using their own proprietary systems for issuance
while at the same time calling ACME too new and immature to be trusted
with the security of the Web PKI.

> The big, unaddressed (or insufficiently addressed) issue as I see
> it is compatibility. This is likely to become a bigger problem
> should other CAs deploy ACME and as interdependencies grow over
> time. Plus, when vulnerabilities are found and resolved,
> compatibility problems become inevitable (the security audit results
> hint at this).
> 
> The versioning strategy of having CAs provide different URLs for 
> different versions of different clients might not scale well. One 
> should not expect all cert applicants to have and use only the
> latest client software. This approach might work for now but it could
> easily become unmanageable. Picture, if you will, a CA that must
> support 20 different client versions and the headaches that can
> bring.

I think you're overestimating the number of incompatible API endpoints
ACME CAs will launch in the first place. There's a good chance this
won't happen at all for Let's Encrypt until the final RFC is released,
at which point we're looking at two endpoints to maintain. In the
meantime, backwards-compatible changes from newer drafts can continue to
be pulled into the current endpoint. Let's Encrypt has recently added
some documentation on this matter[2].
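The reason per-version URLs stay manageable is that ACME clients don't hard-code endpoints: they fetch a directory object from the CA and discover the other URLs from it, so a CA can host several protocol versions side by side under different directory URLs. A small sketch of that discovery step, with made-up URLs and the resource names used by the 2016-era drafts (the final RFC later renamed them):

```python
import json

# A hypothetical ACME directory document, as a CA might serve it at a
# versioned directory URL. Field names follow the early ACME drafts;
# the URLs are invented for illustration.
directory_json = """{
  "new-reg": "https://acme.example/v1/new-reg",
  "new-authz": "https://acme.example/v1/new-authz",
  "new-cert": "https://acme.example/v1/new-cert"
}"""

def resource_url(directory: dict, resource: str) -> str:
    # Look up an endpoint by resource name instead of hard-coding it,
    # so only the directory URL itself is version-specific.
    return directory[resource]

directory = json.loads(directory_json)
print(resource_url(directory, "new-authz"))
```

Because everything hangs off the directory, supporting a new protocol version is mostly a matter of standing up a new directory URL, not of tracking individual client versions.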

> [...] a separate document to discuss deployment details. A deployment
> doc could also be used to cover the pros and cons of using one
> server to do both ACME and other Web sites and services. The chief
> concern is if a vulnerability in the web site can lead to remote code
> execution which can then impact handling on the ACME side of the
> fence. Just a thought.

There are a number of other documents that specify operational details
for publicly-trusted CAs, such as the Baseline or Network Security
Requirements. I certainly hope there's something in there that would
prevent CAs from hosting