On Thu, Dec 5, 2019 at 12:34 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> From looking at better security, the 'ideal' path is that modern clients
> are only trusting modern (new) roots, which never issued old crappy certs.
> That is, the path "D -> A -> B -> C" is forbidden, while the path "A -> D
> -> E -> F" is required.


It took me a little bit to parse this, but I think I see what you mean -
that A -> D -> E -> F means no client has to rely on the old, non-modern
intermediate B, while allowing new clients to trust only D and legacy
clients to trust only A. Am I summing that up right?
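
(To check my own mental model, here is a rough Go sketch of the client
side - the certificate variables are placeholders, not anything from a
real hierarchy. A legacy client that only ships A and a modern client
that only ships D run the same path building; which chain each ends up
on depends on the roots it carries plus whatever intermediates it is
handed in the handshake or has preloaded.)

    // Hypothetical sketch: one leaf, different trust anchors depending on
    // which roots a client ships. All *x509.Certificate values are assumed
    // to be parsed elsewhere.
    package pkisketch

    import "crypto/x509"

    // verifyAgainst tries to build a path from leaf to any certificate in
    // roots, using extras as the pool of candidate intermediates (what the
    // server sent, or what the client has preloaded).
    func verifyAgainst(leaf *x509.Certificate, roots, extras []*x509.Certificate) error {
        rootPool := x509.NewCertPool()
        for _, r := range roots {
            rootPool.AddCert(r)
        }
        interPool := x509.NewCertPool()
        for _, c := range extras {
            interPool.AddCert(c)
        }
        // Verify builds whatever path it can from leaf up to a trusted root.
        _, err := leaf.Verify(x509.VerifyOptions{
            Roots:         rootPool,
            Intermediates: interPool,
        })
        return err
    }

So a legacy client would call verifyAgainst with only rootA in roots and
anchor on A, while a modern client passes only rootD and anchors on D -
same leaf, different paths.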


> Further, if you want to excise past crappy certs, a
> modern browser would require a new root from a CA every few years, such
> that today's "A -> D -> E -> F" becomes tomorrow's "A -> D -> G -> H -> I"
> - where "G -> H -> I" is the path that conforms to all the modern security
> (G will have only issued certs that are modern), while D and A are legacy
> paths for old clients. In order to keep old clients working, a precondition
> to browsers doing the modern security thing, you need for *old* clients to
> be able to get to D/A. And you don't want to send (in the TLS handshake)
> "I, H, G, D" to cover all your permutations - that is terrible for
> performance *and* security (e.g. QUIC amplification attacks).
>

For those of us who don't follow all of the attacks out there, how do
longer chains promote QUIC amplification attacks? Is it a DoS vector
similar to NTP amplification? Since the minimum chain length obviously
varies, is there a threshold of chain size where the QUIC amplification
risk goes from acceptable to unacceptable?
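
(Partly answering my own question with back-of-envelope numbers that
someone should sanity-check: as I read RFC 9000, a QUIC server may send
at most 3x the bytes it has received from an address it hasn't validated
yet, and client Initials are padded to at least 1200 bytes, so the
entire first server flight - certificate chain included - has to fit in
roughly 3600 bytes or the handshake pays an extra round trip. A rough Go
sketch of that arithmetic, where the per-certificate size is my own
assumption:)

    // Back-of-envelope sketch, not measured numbers.
    package amplification

    const (
        clientInitialBytes = 1200 // minimum padded client Initial (RFC 9000)
        amplificationLimit = 3    // pre-validation send limit factor (RFC 9000)
        assumedCertBytes   = 1500 // rough size of one certificate (assumption)
    )

    // FitsFirstFlight reports whether a chain of n certificates plausibly
    // fits under the pre-validation send budget, ignoring every other byte
    // of handshake overhead (so this is optimistic).
    func FitsFirstFlight(n int) bool {
        budget := clientInitialBytes * amplificationLimit
        return n*assumedCertBytes <= budget
    }

By that math a three-certificate chain already blows the budget on its
own, which I assume is part of why sending "I, H, G, D" is a
non-starter.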


> So what you want is "I, H" in the TLS handshake, modern clients to know G,
> and 'legacy' clients to know how to get "G" and "D" as needed. AIA is the
> best way to do that, and the most interoperable way, without requiring
> out-of-band predistribution (to *legacy* clients) about G and D.
>
> I don't disagree on the privacy concerns with AIA, but I think folks are
> overlooking the tradeoffs, complexity, or the fact that today we afford CAs
> much greater flexibility than is perhaps desirable in the structure of
> their PKIs. Intermediate preloading *is* valuable, but it does have
> limitations, and those limitations have consequences to the agility of the
> ecosystem. That's not a problem if a client is going like Firefox, from
> nothing to preloading, but it's much more complex and nuanced if a client
> is going from AIA to preloading, because that's a step to less agility.
>
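
(For anyone following along who hasn't implemented it: AIA chasing on
the client side is conceptually just "pull the CA Issuers URL out of the
certificate's Authority Information Access extension and fetch the
missing issuer." A minimal Go sketch, with caching and error handling
mostly omitted, and assuming a plain DER response rather than the PKCS#7
some CAs serve:)

    package aiasketch

    import (
        "crypto/x509"
        "errors"
        "io"
        "net/http"
    )

    // fetchIssuer downloads the first CA Issuers URL advertised by cert.
    func fetchIssuer(cert *x509.Certificate) (*x509.Certificate, error) {
        if len(cert.IssuingCertificateURL) == 0 {
            return nil, errors.New("no AIA CA Issuers URL present")
        }
        resp, err := http.Get(cert.IssuingCertificateURL[0])
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        der, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        // Real clients also handle PKCS#7 responses; assume raw DER here.
        return x509.ParseCertificate(der)
    }

That plain HTTP fetch is also exactly where the privacy concern comes
from, of course - the CA's AIA endpoint learns something about what the
client is connecting to.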

It sounds like the reason intermediate preloading is an incomplete solution
is primarily name-constrained sub-CAs. How big a presence are
name-constrained sub-CAs, in terms of validation volume among the browser
clients that rely on the Web PKI?
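
(For readers who haven't run into one: a name-constrained sub-CA is just
an intermediate whose name constraints extension limits the names it is
allowed to issue for. A hypothetical Go template, with every value made
up, purely to show where the constraint lives:)

    package ncsketch

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // exampleConstrainedSubCA returns a template for an intermediate that
    // may only issue for names under example.com (all values hypothetical).
    func exampleConstrainedSubCA() *x509.Certificate {
        return &x509.Certificate{
            SerialNumber:                big.NewInt(1),
            Subject:                     pkix.Name{CommonName: "Example Corp Constrained CA"},
            NotBefore:                   time.Now(),
            NotAfter:                    time.Now().AddDate(5, 0, 0),
            IsCA:                        true,
            BasicConstraintsValid:       true,
            MaxPathLenZero:              true,
            KeyUsage:                    x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
            PermittedDNSDomainsCritical: true,
            PermittedDNSDomains:         []string{"example.com"},
        }
    }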




-- 
Eric Mill
617-314-0966 | konklone.com | @konklone <https://twitter.com/konklone>
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
