On 17/01/2018 21:14, Jonathan Rudenberg wrote:

On Jan 17, 2018, at 14:27, Jakob Bohm via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

On 17/01/2018 16:13, Jonathan Rudenberg wrote:
On Jan 17, 2018, at 09:54, Alex Gaynor via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

Hi Wayne,

After some time thinking about it, I struggled to articulate what the right
rules for inclusion were.

So I decided to approach this from a different perspective: I think we
should design our other policies and requirements for CAs around what
we'd expect for organizations operating towards a goal of securing the
Internet as a global public resource.

Towards that goal we should continue to focus on things like transparency
(how this list is run, visibility of audit statements, certificate
transparency) and driving technical improvements to the WebPKI (shorter
certificate lifespans, fewer allowances for non-compliant certificates or
use of deprecated formats and cryptography). If organizations wish to hold
themselves to these (presumably higher) standards for what could equally
well be a private PKI, I don't see that as a problem. On the flip side, we
should not delay improvements because CAs with limited impact on the public
internet struggle with compliance.

I would say that, to limit the danger of such an essentially unused CA
operator turning rogue, only CAs that provide certificates for public
trust should be admitted in the future; more on that in another post.

I like this concept a lot. Some concrete ideas in this space:
- Limit the validity period of root certificates to a few years, so that the 
criteria can be re-evaluated, updated, and re-applied on a rolling basis.

This may be fine for TLS root CAs that are distributed in frequently
updated browsers (such as Firefox and Chrome).

It is absolutely fatal for roots that are also used for any of the
following:

- Distributed in browsers that don't get frequent updates (due to
  problems in that distribution channel), such as many browsers
  distributed in the firmware of mobile devices, TVs etc.

Distributing WebPKI roots in infrequently updated software is a bad idea and 
leads to disasters like the issues around the SHA-1 deprecation.


But what should then be done when that infrequently updated software is
in fact a general end-user web browser (as opposed to the previously
discussed special cases of certain payment terminals)?  Remove TLS
support?  Trust all certificates without meaningful checks?  Pop up
certificate warnings for every valid certificate?

The way the SHA-1 deprecation was done, with no widely implemented way
for TLS clients to signal their ability to support stronger algorithms,
has in fact created a situation where unreliable hacks are needed to
support older mobile browsers, including feeding unencrypted pages to
some of them.  The public stigma attached to this practice means it is
rarely discussed in public, but it is quietly done by webmasters who
need to communicate with those systems.

- Used to (indirectly) sign items of long validity, such as e-mails
  (Thunderbird!), timestamps and software.

I don’t know much about S/MIME, but this doesn’t sound right. Of course 
certificates used to sign emails expire! That’s obviously expected, and I’d 
hope that the validation takes that into account.


The mechanisms vary by recipient software, but a typical technique uses
a known-unmodified-since date (such as the date of reception or a date
certified by a cryptographic timestamp) against which to compare the
relevant validity dates in the certificates.

This obviously requires continued trust in the root certificates
that were relevant at that earlier time, including the ability
of the corresponding CAs to publish and update revocation
information after the end certificate's expiry date.  (Consider the
case where an e-mail sender's personal certificate was
compromised 1 day before expiry, but that fact was not reported
to the CA until later, thus requiring the CA to publish changed
revocation information for an already expired certificate in
order to protect relying parties (recipients) from trusting
fraudulent signatures made with the compromised key.)
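
As a simplified sketch of that logic (the inputs are hypothetical
stand-ins for what a mail client extracts from the certificate and
from CRL/OCSP data):

from datetime import datetime, timezone
from typing import Optional

def signature_trusted(not_before: datetime,
                      not_after: datetime,
                      known_unmodified_since: datetime,
                      revoked_at: Optional[datetime]) -> bool:
    # The certificate must have been within its validity period at the
    # time the signature is known to have already existed.
    if not (not_before <= known_unmodified_since <= not_after):
        return False
    # The key must not have been revoked before that time.  Note that
    # the CA may only publish revoked_at *after* not_after, which is
    # exactly why revocation data must stay available past expiry.
    if revoked_at is not None and revoked_at <= known_unmodified_since:
        return False
    return True

# The scenario above: compromise 1 day before expiry, reported later.
print(signature_trusted(
    not_before=datetime(2016, 6, 1, tzinfo=timezone.utc),
    not_after=datetime(2017, 6, 1, tzinfo=timezone.utc),
    known_unmodified_since=datetime(2017, 5, 31, tzinfo=timezone.utc),
    revoked_at=datetime(2017, 5, 30, tzinfo=timezone.utc)))  # False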

- Apply for inclusion in other root programs with slow/costly/
  inefficient distribution of new root certificates (At least one
  major software vendor has these problems in their root program).

This isn’t Mozilla’s problem, and one can come up with a variety of 
straightforward workarounds.

The big problem is that the formats for communicating the certificate
chain from the certificate holder to the relying parties are quite
limited in how they can accommodate different relying parties trusting
different roots from each CA.  Requiring CAs to set up extra workarounds
just to satisfy an arbitrary policy like yours is an unneeded
complication for everybody but Mozilla and Chrome.
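
As a toy illustration of the limitation (all names hypothetical): a
leaf below a cross-signed intermediate has several valid chains, yet a
classic TLS server configuration can send only one of them, regardless
of which root each relying party happens to trust.

# Certificates reduced to (subject, issuer) pairs for illustration.
LEAF = ("leaf.example.com", "Example CA Intermediate")

# Cross-signed intermediates: same subject, different issuers.
INTERMEDIATES = [
    ("Example CA Intermediate", "Example Root 2017"),
    ("Example CA Intermediate", "Legacy Root 1999"),  # cross-sign
]

TRUSTED_ROOTS = {"Example Root 2017", "Legacy Root 1999"}

def candidate_chains(leaf):
    chains = []
    for subject, issuer in INTERMEDIATES:
        if subject == leaf[1] and issuer in TRUSTED_ROOTS:
            chains.append([leaf, (subject, issuer)])
    return chains

# Two valid chains exist, but the server must usually pick just one.
for chain in candidate_chains(LEAF):
    print(" -> ".join(s for s, _ in chain), "-> root:", chain[-1][1])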


- Make all certificates issued by CAs under a root that is trusted for TLS in 
scope for the Baseline Requirements, and don’t allow issuing things like client 
certificates that have much more relaxed requirements (if any). This helps 
avoid ambiguity around scope.

Again this would be a major problem for roots that are used outside web
browsers, including roots used for e-mail certificates (Thunderbird).

The ecosystem already suffers from the need to keep multiple trusted
root certificates per CA organization due to artifacts of existing
rules; there is no need to make this worse by multiplying that number
by the number of certificate end uses.

I’m having trouble seeing how sharding roots based on compliance type is a 
problem. Not doing so complicates reasoning about compliance unnecessarily.


Open the root certificate management interface in most non-Mozilla
software.  It's typically a flat list that is already too long for most
people to work with.  Recent Mozilla products put the certificates into
a hierarchy by organization, but with some mistakes (for example,
Google is not part of GeoTrust; GeoTrust is part of the
Symantec/DigiCert portfolio).

- Limit the maximum validity period of leaf certificates issued to a sane upper 
bound like 90 days. This will help ensure that we don’t rust old crypto and 
standards in place and makes it less likely that a CA is “too big to fail” due 
to a set of customers that are not expecting to replace their certificates 
regularly.

This would be a *major* problem for any end users not using Let's
Encrypt, and would seemingly seek to destroy a major advantage of using
a real CA over Let's Encrypt.

Obviously this is completely false. Ridiculous diversions about “real” CAs 
aside, many other CAs issue certificates to automated management systems and 
this is obviously the way forward. Humans should not be managing certificate 
lifecycles.


The fact is that human site operators DO manage certificates manually.
Outside Let's Encrypt and certain other large automated environments
this is the normal situation.  If automation had already been the norm,
there would have been no need for Let's Encrypt and its sponsors to
develop the ACME protocol and tools; they could simply have reused the
existing tools you seem to think everybody is using.
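
For contrast, here is roughly the shape of the check an ACME-style
renewal job runs from cron.  This is only a sketch: it uses the Python
"cryptography" package (version 42 or later for not_valid_after_utc),
and the path and threshold are hypothetical.

from datetime import datetime, timedelta, timezone
from cryptography import x509

RENEW_BEFORE = timedelta(days=30)

def needs_renewal(pem_path: str) -> bool:
    # Read the certificate's notAfter date and compare with today.
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
    return remaining <= RENEW_BEFORE

if needs_renewal("/etc/ssl/certs/example.pem"):
    print("renewing...")  # here an ACME client would be invoked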

Humans also manage their personal client/e-mail certificates, unless
tricked into using key-escrowed services.

Furthermore, certificates that are fully validated (OV, EV, etc.)
generally involve humans at the subscriber end in the validation
process.  The EV BRs give some key examples of such procedures.

An additional manual process is the "simple" act of paying for a
certificate application, which involves not just the transaction done
during the ordering process, but also the subsequent bookkeeping job of
putting the transaction into the correct part of the company accounts.

You are assuming a level of automation that just isn't there in the real
world.

The only "too big to fail" problem we had recently was when the oldest,
and biggest CA ever was forced out of business by some very loud people
on this list.  And the problem was that those people demanded rushed
punishment in the extreme with little or no consideration for who would
be hurt in the process.

This is not true. Readers of this list are almost certainly quite familiar with 
the events that caused Google and Mozilla to protect their users from 
Symantec’s failures: https://wiki.mozilla.org/CA:Symantec_Issues


I am very familiar with that page and the long-running discussions,
including most of the twists and turns it took.  Symantec made a lot of
mistakes, but seemed to be trying very hard to clean things up while
staying afloat.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded