Ryan,

 

In your counter proposal, could you list your proposed milestone dates and 
then, for each one, specify the max validity period, domain re-use period, and 
Org validation requirements associated with those dates?  As it stands, Org 
validation requires the CA to verify that the address is the Applicant's 
address, and that typically involves a direct exchange with a person at the 
organization via a Reliable Method of Communication.  It's not clear how we 
address that if we move to anything below a year.

 

 

 

From: Ryan Sleevi <r...@sleevi.com> 
Sent: Friday, March 13, 2020 9:23 PM
To: Doug Beattie <doug.beat...@globalsign.com>
Cc: Kathleen Wilson <kwil...@mozilla.com>; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: About upcoming limits on trusted certificates

 

On Fri, Mar 13, 2020 at 2:38 PM Doug Beattie via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

When we moved to SHA2, we knew of the security risks, so the timeline could be 
justified; however, I don't see the same pressing need to move to annual domain 
revalidation, or to 1-year max validity for that matter. 

 

I can understand that; and despite several years of effort, it appears that we 
will be just as unlikely to make forward progress. 

 

When we think about the issuance models, we need to keep the Enterprise 
approach in mind, where domains are validated against a specific account (or a 
profile within an account) and issuance can then happen for any validated 
domain, or subdomain of a validated domain, registered with the account.  
Splitting domain validation from issuance permits different teams to handle 
each step and to manage the overall policy.  Domains can be validated at any 
time, by anyone, and are not tied to the issuance of a specific certificate, 
which makes issuance less prone to errors.  
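To make this model concrete, here is a minimal sketch (hypothetical names and 
structures, not any CA's actual implementation) of the account-based check 
described above: issuance is allowed only if the requested name matches, or is 
a subdomain of, a domain already validated for the account, and that validation 
is still within the permitted reuse window.

    # Minimal sketch; names and structures are hypothetical, not any CA's system.
    from datetime import datetime, timedelta

    def is_authorized(requested_name, validated_domains, max_reuse):
        """validated_domains maps a domain to the time it was last validated for
        this account; max_reuse is the permitted validation reuse window."""
        now = datetime.utcnow()
        for domain, validated_at in validated_domains.items():
            if now - validated_at > max_reuse:
                continue  # validation is stale; a fresh challenge would be needed
            if requested_name == domain or requested_name.endswith("." + domain):
                return True
        return False

    # Example: the account pre-validated example.com 100 days ago.
    account = {"example.com": datetime.utcnow() - timedelta(days=100)}
    print(is_authorized("www.example.com", account, timedelta(days=397)))  # True
    print(is_authorized("www.example.com", account, timedelta(days=92)))   # False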

 

This is a security risk, not a benefit. It creates significant risk that the CA 
systems, rather than strongly authenticating a request, move to a model of 
weakly authenticating a user or account. I can understand why CAs would prefer 
this, and potentially why Subscribers would too: it's convenient for them, and 
they're willing to accept the risk individually. However, we need to keep the 
User approach in mind when thinking about whether these are good. For users, 
this introduces yet more risk into the system.

 

For example, if an account on a CA system is compromised, the attacker can 
issue any certificate for any of the authorized domains. Compare this to a 
model of fresh authentication for each request, in which the only certificates 
that can be issued are the ones that can be technically verified. Similarly, 
users must accept that if a CA deploys a weak authentication system, any 
domains which use that CA are now at risk.

 

When we put the user first, we can see that those Enterprise needs simply shift 
the risk/complexity from the Enterprise to the User. It's understandable why 
the Enterprise might prefer that, but we must not fool ourselves into thinking 
the risk is not there. Root Stores exist to balance the risk (collectively) to 
users, and to reject such attempts to shift the cost or burden onto them.

 

If your driving requirement to reduce the domain validation reuse period is 
BygoneSSL, then the security analysis is flawed.  There are so many things that 
have to align to exploit a domain ownership change that it's impactable, imo.  
Has this ever been exploited? 

 

Yes, this was covered in BygoneSSL. If you meant to say impractical, then your 
risk analysis is flawed, but I suspect we'll disagree. This sort of concern has 
been at the forefront of a number of new technologies, such as HTTP/2, Signed 
Exchanges, and the ORIGIN frame. Heck, even APNIC has seen these as _highly 
practical_ concerns: 
https://blog.apnic.net/2019/01/09/be-careful-where-you-point-to-the-dangers-of-stale-dns-records/
 (search for PowerDNS).

 

Would it make sense (if even possible) to track the level of automation and set 
a threshold for when the periods are changed?  Mozilla and Google are tracking 
HTTPS adoption and plan to hard-block HTTP when it reaches a certain threshold. 
 Is there a way we can track issuance automation?  I'm guessing not, but that 
would be a good way to reduce validity based on web site administrators' 
embrace of automation tools.

 

I'm glad you appreciated the efforts Google and Mozilla, as well as others, 
have made here. However, to suggest that transparency alone is likely to have 
much impact is to ignore what actually happened. That transparency was 
accompanied by meaningful change to promote HTTPS and discourage HTTP, and 
included making the necessary, sometimes controversial, changes to prioritize 
user security over enterprise needs. For example, browsers would not launch new 
features over insecure HTTP, and insecure traffic was flagged, increasingly 
alerted on, and ultimately blocked.

 

I think if that's the suggestion, then the quickest solution is to have the CA 
indicate, within the certificate, that it was issued freshly and automated. UAs 
can then make similar efforts as we saw with HTTPS, as can CAs and other 
participants in the ecosystem. You'll recall I previously proposed this in the 
CA/Browser Forum in 2017, and CAs were opposed then. It'd be useful to 
understand what changed, so that future efforts aren't stymied for three years.
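For concreteness only: one way a CA could signal this mechanically is with a 
dedicated certificate extension. The sketch below, using Python's cryptography 
library, marks a throwaway self-signed certificate with a hypothetical, 
non-standard extension asserting fresh, automated validation; the OID and 
semantics are invented for illustration, and nothing like this is defined in 
the BRs today.

    # Hypothetical marker extension; the OID and value are invented for illustration.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    FRESH_AUTOMATED_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.1.1")  # made up

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                     # self-signed, throwaway sketch
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        .add_extension(
            x509.UnrecognizedExtension(FRESH_AUTOMATED_OID, b"\x01"),
            critical=False,                    # ignorable by clients that don't know it
        )
        .sign(key, hashes.SHA256())
    )

    # A UA (or anyone else) could then key policy decisions off its presence:
    print(any(ext.oid == FRESH_AUTOMATED_OID for ext in cert.extensions))  # True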

 

If this is not possible, then I think I'd (begrudgingly) agree with some 
comments Ryan made several years ago (at SwissSign's F2F?) that we need to set 
a longer-term plan for these changes, document the reasons/security threats, 
and publish the schedule (rip the band-aid off).  These incremental changes, 
one after another, by the BRs or the Root programs, are painful for everyone, 
and it seems that the changes are coming whether the CABF agrees or not.

 

You're absolutely correct that, in 2015, I outlined a plan for 1 year 
certificates, with annual revalidation - 
https://cabforum.org/2015/06/24/2015-06-24-face-to-face-meeting-35-minutes/

 

Thus, it's somewhat frustrating to see it suggested in 2020 that a plan was not 
outlined. Your proposed approach, to get everyone to agree on a plan or 
timeline, is exactly what browsers have been doing for years. That 2015 
conversation was itself the culmination of over a year's effort to attempt to 
reach a plan. What happened then is the same thing that happens now: a 
reasonable timeframe is set forth, some participants drag out their favorite 
saws and complaints, the discussion takes forever, the timeline is no longer 
reasonable, it gets updated, and we rinse and repeat until the timeframe is 
something approximating the heat death of the universe and CAs are still 
concerned Enterprises may not have time to adjust.

 

As such, I think at this point we can say with full confidence that the attempt 
to set a "longer term plan for these changes, document the reasons/security 
threats, and publish a schedule" has failed, because every attempt results in 
the same circular discussion. Earnest, good-faith attempts were made, for 
years, and the response vacillates between offering reasons to be opposed 
(which are shown to be demonstrably false) and complaining that there's too 
much information about why and that it needs to be summarized to be understood. 
It's clear which side of the pendulum this is.

 


If (AND ONLY IF) the resulting security analysis shows the need to get to some 
minimum validity period and domain re-validation interval, then, in my personal 
capacity and not necessarily that of my employer, I toss out these dates:

 

Counter proposal: 

April 2021: 395 day domain validation max

April 2021: 366 day organization validation max 

April 2022: 92 day domain validation max

September 2022: 31 day domain validation max

April 2023: 3 day domain validation max

April 2023: 31 day organization validation max

September 2023: 6 hour domain validation max

 

This sets out an overall timeline that encourages automation of domain 
validation, reduces the risk of stale organization data (which many 
jurisdictions already require to be reviewed annually), and eventually moves to 
a system where request-based authentication is the norm and automated systems 
for organization data are used. If there are jurisdictions that don't provide 
their data in a machine-readable format, yes, they're precluded. If there are 
organizations that don't freshly authenticate their domains, yes, they're 
precluded.
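As a purely illustrative sketch (the caps simply restate the domain-validation 
milestones above, the effective days are assumed to be the first of each month, 
and the code is hypothetical rather than anything normative), a CA could encode 
such a schedule as data and refuse to reuse any domain validation older than 
the cap in force on the issuance date:

    # Milestone schedule from the counter proposal above; effective days assumed
    # to be the first of each month, purely for illustration.
    from datetime import datetime, timedelta

    DOMAIN_REUSE_CAPS = [              # (effective date, max age of a domain validation)
        (datetime(2021, 4, 1), timedelta(days=395)),
        (datetime(2022, 4, 1), timedelta(days=92)),
        (datetime(2022, 9, 1), timedelta(days=31)),
        (datetime(2023, 4, 1), timedelta(days=3)),
        (datetime(2023, 9, 1), timedelta(hours=6)),
    ]

    def max_reuse_at(issuance_time):
        """Return the domain validation reuse cap in force at issuance_time."""
        cap = timedelta(days=825)      # pre-milestone cap, per the current BR reuse limit
        for effective, limit in DOMAIN_REUSE_CAPS:
            if issuance_time >= effective:
                cap = limit
        return cap

    def may_reuse(validated_at, issuance_time):
        return issuance_time - validated_at <= max_reuse_at(issuance_time)

    # A validation performed on 2023-02-10 can still be reused on 2023-03-01
    # (31-day cap in force), but not on 2023-05-01 (3-day cap in force).
    print(may_reuse(datetime(2023, 2, 10), datetime(2023, 3, 1)))  # True
    print(may_reuse(datetime(2023, 2, 10), datetime(2023, 5, 1)))  # False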

 

Now, it's always possible to consider shifting from an account-authenticated 
model to a key-authenticated model (i.e., has this key been used with this 
domain), since that's a core objective of domain revalidation, but I can't see 
wanting an end state where that reuse duration is greater than 30 days at most, 
because of the practical risks and realities of key compromise. Indeed, if you 
look at the IETF efforts, such as Delegated Credentials or STAR, the industry 
evaluation of risk suggests 7 days is likely a more realistic upper bound for 
authorizing the binding of a key to a domain before requiring a fresh 
challenge.
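To illustrate (hypothetical names and data structures, with the 7-day figure 
taken from the Delegated Credentials/STAR discussion above), a key-authenticated 
model could track when each (key, domain) binding was last proven and require a 
fresh challenge once that proof ages out:

    # Sketch of a key-authenticated reuse check: a (key, domain) binding is only
    # honoured for 7 days before a fresh challenge is required.
    import hashlib
    from datetime import datetime, timedelta

    MAX_BINDING_AGE = timedelta(days=7)

    def spki_fingerprint(spki_der):
        """SHA-256 over the DER-encoded SubjectPublicKeyInfo."""
        return hashlib.sha256(spki_der).hexdigest()

    def needs_fresh_challenge(bindings, spki_der, domain, now):
        """bindings maps (spki_sha256, domain) -> time the binding was last proven."""
        proven_at = bindings.get((spki_fingerprint(spki_der), domain))
        return proven_at is None or now - proven_at > MAX_BINDING_AGE

    # Example: a binding proven 10 days ago must be re-challenged.
    spki = b"\x30\x82..."  # placeholder bytes standing in for a real SPKI
    bindings = {(spki_fingerprint(spki), "example.com"):
                datetime.utcnow() - timedelta(days=10)}
    print(needs_fresh_challenge(bindings, spki, "example.com", datetime.utcnow()))  # True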
