One note:

As a company that actively supports users with old operating systems and
OS-provided root stores, we have been deliberately including your R1-R3
cross, and are battling problems with a few really old platforms that
plain don't support any certs currently available.  This isn't just
being overcautious about hypothetical compatibility; it's ongoing work
to keep compatibility with a specific list of platforms.

Oh, and as for chain selection, it is important to distinguish between
client and server behaviour.  Ideally, servers would send a compatible
collection while clients would use the first or best valid chain (thus
ending their search when hitting a trusted root DN with matching key).
The AIA-pointing-to-cross trick may work for many GUI browsers, thus
making it useful for servers that only care about those browsers, while
other servers could just continue to send the cross directly.
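
To make the client side of that concrete, here is a deliberately
simplified Python sketch (toy data structures, no signature or validity
checking; the names are stand-ins for the R1/R3 hierarchy discussed
below): the client stops extending the chain as soon as the issuer DN
names a trusted root.

from collections import namedtuple

Cert = namedtuple("Cert", "subject issuer")

leaf    = Cert("example.com", "Issuing CA")
issuing = Cert("Issuing CA",  "Root R3")
cross   = Cert("Root R3",     "Root R1")   # the R1-R3 cross certificate
root_r1 = Cert("Root R1",     "Root R1")
root_r3 = Cert("Root R3",     "Root R3")

def build_path(leaf, pool, anchors):
    """Extend the chain one issuer at a time; stop at the first trusted root DN.
    (A real client would also verify signatures and key identifiers.)"""
    chain = [leaf]
    while True:
        issuer_dn = chain[-1].issuer
        anchor = next((a for a in anchors if a.subject == issuer_dn), None)
        if anchor is not None:
            return chain + [anchor]        # trusted root hit: stop searching
        nxt = next((c for c in pool if c.subject == issuer_dn), None)
        if nxt is None:
            raise ValueError("no path to a trusted root")
        chain.append(nxt)

pool = [issuing, cross]                    # what the server sends / the client has cached
print([c.subject for c in build_path(leaf, pool, [root_r3, root_r1])])
# -> ['example.com', 'Issuing CA', 'Root R3']             (modern client, short path)
print([c.subject for c in build_path(leaf, pool, [root_r1])])
# -> ['example.com', 'Issuing CA', 'Root R3', 'Root R1']  (legacy client, via the cross)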


On 05/08/2019 16:02, Doug Beattie wrote:
Ryan,

...


We have some customers that mandate a complete SHA-256 chain, including the root.  We 
had been using our older SHA-1 Root (R1) and recently moved to our newer SHA-256 
root (R3).  We can now deliver certificates issued with SHA-256 at all levels, 
great!  In order to support some legacy applications that didn’t have R3 embedded, we 
created a cross certificate R1-R3.  You can get it here: 
<https://support.globalsign.com/customer/en/portal/articles/2960968-globalsign-cross-certificates>

The customer came back and said: hey, it still chains to R1, what’s up?  Oh, 
it’s because the client has the cross certificate cached; don’t worry about 
that, some users will see the chain up to R1 and others up to R3.  Hmm, not good, 
they say.


Even if this specific web site didn’t include the extra certificate (the 
R1-R3 cross certificate) in its configuration, the end users may have 
picked it up somewhere else and have it cached, so their specific chain goes: 
SSL, Intermediate CA, R1-R3 cross certificate, Root R1



They are stuck with inconsistent user experience and levels of “security” for 
their website.

At present (and this is changing), Chrome uses the CryptoAPI implementation, 
which is the same one used by IE, Edge, and other Windows applications.

You can read a little bit about Microsoft's logic here:

- 
https://blogs.technet.microsoft.com/pki/2010/05/12/certificate-path-validation-in-bridge-ca-and-cross-certification-environments/

And a little about how the IIS server selects which intermediates to include in 
the TLS handshake here:

- 
https://support.microsoft.com/en-us/help/2831004/certificate-validation-fails-when-a-certificate-has-multiple-trusted-c

The "short answer" is that, assuming both are trusted, either path is valid, 
and the preference for which path is going to be dictated by the path score, how you can 
influence that path score, and how ties are broken between similarly-scoring certificates.
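
As a rough illustration of the kind of tie-break being referred to (this
is one reading of the linked article, not the documented CryptoAPI
algorithm): when two candidate issuer certificates share the same subject
and key - here the self-signed Root R3 versus the R1-R3 cross - a verifier
may prefer the one with the newer notBefore.  A toy Python sketch, with
made-up dates:

from collections import namedtuple
from datetime import datetime

CandidateIssuer = namedtuple("CandidateIssuer", "description not_before")

# Two certificates for the same subject and key ("Root R3"); dates are invented.
candidates = [
    CandidateIssuer("R1-R3 cross certificate (chains on to Root R1)", datetime(2018, 11, 1)),
    CandidateIssuer("self-signed Root R3 (trust anchor)",             datetime(2019, 3, 1)),
]

# Hypothetical tie-break: prefer the candidate with the newer notBefore.
preferred = max(candidates, key=lambda c: c.not_before)
print(preferred.description)
# Here the self-signed Root R3 wins; if the cross carried the newer
# notBefore, the longer path through Root R1 would be preferred instead.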

It’s not clear how a CA can influence the path so that the “most secure” or “newest” 
one is selected.  Since CAs want to roll over to newer, “better” roots, how do we limit 
clients from continuing to use the older one during the transition?  Is 
creating a cross certificate with a not-before that is equal to or predates the 
new Root permitted?  Is it the only way we can be sure that the new path is 
selected?  Do most/all other web clients also follow this same logic?  Sorry, 
for all the questions.
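
For what it's worth, the date relationship asked about can be checked
directly on the certificates; a small sketch using the Python
'cryptography' package (the file names are placeholders):

from cryptography import x509

def not_before(path):
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read()).not_valid_before

cross_nb = not_before("r1-r3-cross.pem")   # the R1-R3 cross certificate
root_nb  = not_before("root-r3.pem")       # the self-signed Root R3

# The question above: is the cross's notBefore equal to or earlier than
# the new root's?
print("cross predates (or matches) Root R3:", cross_nb <= root_nb)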

   * increases TLS handshake packet sizes (or extra packet?), and
   * increases the certificate path from 3 to 4 certificates (SSL, issuing
CA, Cross certificate, Root), which increases the path validation time and
is typically seen as a competitive disadvantage

I'm surprised and encouraged to hear CAs think about client performance. That 
certainly doesn't align with how their customers are actually deploying things, based 
on what I've seen from the httparchive.org data 
(suboptimal chains requiring AIA, junk stapled OCSP responses, CAs putting entire 
chains in OCSP responses).

It’s not really the answer I expected, but OK.  Since we don’t control how the 
web sites are configured, it’s not clear how CAs can improve this (except for 
your last example).

Assuming some CAs want to provide certificates with optimal performance 
characteristics (ECDSA, shorter chains, smaller certificate size, etc.) it 
seems passing down an extra certificate in the handshake isn’t the best 
approach.  Maybe it’s so far in the noise it’s irrelevant.
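
To put a rough number on the "extra certificate in the handshake" cost,
one can measure the cross certificate itself; a minimal sketch using the
Python 'cryptography' package (the file name is a placeholder):

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

with open("r1-r3-cross.pem", "rb") as f:
    cross = x509.load_pem_x509_certificate(f.read())

der = cross.public_bytes(Encoding.DER)
print(f"the cross certificate adds {len(der)} bytes to every full TLS handshake")
# A typical RSA-2048 CA certificate is on the order of 1-1.5 KB in DER form,
# which gives a feel for whether this is "in the noise" for a given site.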

As a practical matter, there are understandably tradeoffs. Yet you can allow 
your customers the control to optimize for their use case and make the decision 
best for them, which helps localize some of those tradeoffs. For example, when 
you (the CA) are first rolling out such a new root, you're right that your 
customers will likely want to include the cross-signed version back to the 
existing root within root stores. Yet as root stores update (which, in the case 
of browsers, can be quite fast), your customer could choose to begin omitting 
that intermediate, and rely on intermediate preloading (Firefox) or AIA 
(everyone else). In this model, the AIA for your 'issuing intermediate' would 
point to a URL that contained your cross-signed intermediate, which would then 
allow them to build the path to the legacy root. Clients with your new root 
would build and prefer the shorter path, because they'd have a trust anchor 
matching that (root, key) combination, while legacy clients could still build 
the legacy path.
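
A minimal, self-contained sketch of that arrangement using the Python
'cryptography' package (throwaway key, placeholder names and URL; it only
demonstrates the caIssuers pointer, not a real CA certificate profile):

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import AuthorityInformationAccessOID, NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # demo key only
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo Issuing CA")])

aia = x509.AuthorityInformationAccess([
    x509.AccessDescription(
        AuthorityInformationAccessOID.CA_ISSUERS,
        # placeholder URL; in practice it would serve the cross-signed certificate
        x509.UniformResourceIdentifier("http://ca.example.com/cross/r1-r3.crt"),
    )
])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed here purely for the demo
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(aia, critical=False)
    .sign(key, hashes.SHA256())
)

ext = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
print(ext.value)   # shows the caIssuers pointer a legacy client would follow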

Can you explain the logic in your last statement above about AIA?  What is the 
Issuing Intermediate in this example?  Is it the one that is signed by the new 
root but has an AIA pointing to the cross certificate?  Interesting – I never 
thought about including a caIssuers link in an AIA extension of a CA that was signed by a 
root.

Your last statement (building the shorter path) doesn’t seem to be how Chrome 
does it, or at least that is not how the Chrome certificate viewer displays the 
chain, as discussed above (unless you’re assuming the not-before dates are 
adjusted to be identical).

Yea, this is the hard part.  We’re assuming the web server operator understands 
this and is capable of making the tradeoffs you outlined above.  Generally they 
want the CA to tell them how to install the certificate.

When we change the roots under which we issue SSL certificates, we need to say: 
If you’re not sure if you need the complete interoperability of the old root, 
then install the cross certificate. Even if we say the old root only adds support 
for a few legacy platforms that are no longer in meaningful use (Android 1-2, FF 3.0 
and earlier, Mozilla 1-2, Safari 1-3, etc.), so they should be fine without it, 
they are likely to install the cross certificate just to be safe.  While we 
can provide them the pros and cons, they don’t want to think about this; they 
just want to install their certificate and move forward without impacting any 
current or possible visitors.

I’m curious how Google would handle this.  At what point will you start using the 
Google "GTS Root R1” created in 2016 with a cross certificate back to your 
current Root?  It uses 4K RSA and SHA-384 vs. 2K RSA and SHA-1 in your current root, so 
there seem to be clear advantages to using it.

Do you view these as meaningful issues?  Do you know of any CAs that have
taken this approach?

Definitely! I don't want to sound dismissive of these issues, but I do want to 
suggest it's good if we as an industry start tackling these a bit head-on. I'm 
particularly keen to understand more about how and when we can 'sunset' roots. For 
example, if the desire is to introduce a new root in order to transition to 
stronger cryptography, I'd like to understand more about how and when clients get 
the 'strong' chain or the 'weaker' chain and how that selection may change over 
time. I'm sympathetic to 4K roots - while I'd rather we were in a world where 2K 
roots were viable because we were rotating roots more frequently (hence the 
above), 4K roots may make sense given the pragmatic realities that these end up 
being used much longer than anticipated. If that's the case, though, it's 
reasonable to think we'd retire roots <4K, and it's reasonable to think we 
don't need multiple 4K roots. That's why I wanted to flesh out these 
considerations and have that discussion, because I'm not sure that just allowing 
folks to select '2K vs 4K' for a particular CA really helps move the needle far 
enough in user benefit (it does, somewhat, but not as much as 'all 4K', for 
example).

My understanding is that both Symantec / DigiCert and Sectigo have pursued 
paths like this, and they can speak to it in more detail. ISRG / Let's Encrypt pursued something 
similar-but-different, but which had the functional goal of reducing their 
dependency on the IdenTrust root in favor of the ISRG root.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded