Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-08 Thread Ryan Sleevi via dev-security-policy
On Sun, Dec 8, 2019 at 7:14 PM Eric Mill  wrote:

> On Thu, Dec 5, 2019 at 12:34 PM Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Looking at it from the perspective of better security, the 'ideal' path is
>> that modern clients only trust modern (new) roots, which never issued old
>> crappy certs. That is, the path "D -> A -> B -> C" is forbidden, while the
>> path "A -> D -> E -> F" is required.
>
>
> It took me a little bit to parse this, but I think I see what you mean -
> that A->D->E->F eliminates any client from having to rely on the old
> non-modern intermediate B, while allowing new clients to only trust D and
> legacy clients to only trust A. Am I summing that up right?
>

It allows clients to remove support for A and B, which may have issued
problematic certificates - and only trust D, which is a 'clean' root to the
latest standards.

This similarly allows for the evolution of root/intermediate certificate
profiles. For example, if A was issued in 2000, D in 2010, and G in 2020,
you can be sure G reflects the best 'modern' practice.
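To make the path logic above concrete, here is a minimal sketch of how clients with different trust stores end up on different chains. The issuer relationships below are hypothetical, chosen only to match the A/D/E/F example in this thread; they don't reflect any real CA hierarchy.

```python
# Toy model: child -> possible issuers. The end-entity F was issued by
# intermediate E, E by D, and D exists both as a self-signed root and as
# a certificate cross-signed by the legacy root A. (All hypothetical.)
ISSUED_BY = {
    "F": ["E"],
    "E": ["D"],
    "D": ["A", None],   # None marks that D is also a self-signed root
    "A": [None],        # A is a legacy self-signed root
}

def build_paths(leaf, trust_store, path=None):
    """Yield every chain from `leaf` up to a trusted self-signed root."""
    path = (path or []) + [leaf]
    for issuer in ISSUED_BY.get(leaf, []):
        if issuer is None:
            if leaf in trust_store:   # reached a trust anchor
                yield path
        elif issuer not in path:      # avoid cross-signing loops
            yield from build_paths(issuer, trust_store, path)

# A modern client that only trusts the 'clean' root D:
modern = list(build_paths("F", {"D"}))   # [["F", "E", "D"]]
# A legacy client that only trusts the old root A:
legacy = list(build_paths("F", {"A"}))   # [["F", "E", "D", "A"]]
```

The point of the sketch: the same leaf F validates for both populations, but only the legacy client's path ever touches A, so A (and anything it mis-issued) can be distrusted without breaking modern clients.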


>
>
>> Further, if you want to excise past crappy certs, a
>> modern browser would require a new root from a CA every few years, such
>> that today's "A -> D -> E -> F" becomes tomorrow's "A -> D -> G -> H -> I"
>> - where "G -> H -> I" is the path that conforms to all the modern security
>> (G will have only issued certs that are modern), while D and A are legacy
>> paths for old clients. In order to keep old clients working (a
>> precondition to browsers doing the modern security thing), you need *old*
>> clients to be able to get to D/A. And you don't want to send (in the TLS
>> handshake)
>> "I, H, G, D" to cover all your permutations - that is terrible for
>> performance *and* security (e.g. QUIC amplification attacks).
>>
>
> For those of us who don't follow all of the attacks out there, how do
> longer chains promote QUIC amplification attacks? Is it a DoS vector
> similar to NTP amplification? Since obviously minimum chain length can
> vary, is there a threshold of cert length where the QUIC amplification risk
> goes from acceptable to unacceptable?
>

The TL;DR is that amplification is endemic to UDP protocols, with UDP
security protocols adding a further factor. So yes, it's similar to NTP or
DNSSEC amplification.
https://www.cloudflare.com/learning/ddos/what-is-a-quic-flood/ is helpful.
Drafts like
https://datatracker.ietf.org/doc/draft-ietf-tls-certificate-compression/
exist to attempt to help with this, but there are limits to the compression
that are in part impacted by the certificate profile.
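As a rough illustration of why chain length matters here, consider the back-of-the-envelope amplification arithmetic below. All byte counts are made-up illustrative figures, not measurements of any real deployment.

```python
# An amplification attacker spoofs the victim's address in a small
# initial packet; the victim then receives the server's much larger
# handshake reply, which includes the certificate chain.

CLIENT_INITIAL = 1200  # bytes; QUIC requires client Initials be padded

def amplification_factor(chain_cert_sizes, overhead=500):
    """Response bytes (certs plus assumed handshake overhead, both
    illustrative numbers) per byte the client sent."""
    response = sum(chain_cert_sizes) + overhead
    return response / CLIENT_INITIAL

# Sending "I, H" (two ~1.2 kB certs) vs "I, H, G, D" (four certs):
short_chain = amplification_factor([1200, 1200])              # ~2.4x
long_chain = amplification_factor([1200, 1200, 1400, 1400])   # ~4.75x
```

This is why QUIC (RFC 9000) caps a server at 3x the bytes received until the client's address is validated, and why shrinking the response side, whether via shorter chains or certificate compression, matters: every extra certificate in the handshake pushes the ratio up.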


> It sounds like the reason intermediate preloading is an incomplete
> solution is primarily due to name-constrained sub-CAs. How big of a
> presence are name-constrained sub-CAs, in terms of validation volume among
> browser clients which rely on the Web PKI?
>

I wouldn't bucket it as "primarily" that, no. I think it's necessary to
consider the entire ecosystem. As I mentioned, the concern is primarily
with suggesting /disabling/ AIA for players that already enable it. It's a
net positive for Mozilla to go from nothing to preloading, but a net
negative for those who do AIA fetching to go to disabled.
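For readers unfamiliar with the mechanism being discussed, here is a toy model of AIA chasing. Certificates are plain dicts and the "network" is a dict; a real client parses the id-ad-caIssuers entry of the leaf's Authority Information Access extension and fetches the issuer's certificate over HTTP.

```python
# Stand-in for the web: AIA URL -> the certificate served there.
NETWORK = {
    "http://ca.example/intermediate-E.crt": {"subject": "E", "issuer": "D"},
}

def complete_chain(leaf, trusted_roots):
    """Follow AIA pointers until the chain reaches a trusted root,
    mirroring what AIA-fetching clients do when a server sends an
    incomplete chain."""
    chain = [leaf]
    current = leaf
    while current["issuer"] not in trusted_roots:
        url = current.get("aia_ca_issuers")
        if url is None or url not in NETWORK:
            raise ValueError("chain incomplete and no usable AIA pointer")
        current = NETWORK[url]   # the AIA fetch
        chain.append(current)
    return chain

# A server that sent only the leaf F; the client trusts root D:
leaf = {"subject": "F", "issuer": "E",
        "aia_ca_issuers": "http://ca.example/intermediate-E.crt"}
chain = complete_chain(leaf, trusted_roots={"D"})   # recovers F -> E
```

The privacy objection in this thread follows directly from the fetch step: the CA's server learns which sites a user visits. Preloading avoids that fetch, at the cost of the agility concerns discussed above.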

Preloading functionally moves from a robust, distributed system to a
centralized one. We know centralization can help privacy, by limiting the
interaction to a single provider, one hopefully aligned with users'
interests. Mozilla pursuing DNS-over-HTTPS with Cloudflare (for now, the
only DoH provider) is perhaps an example of making that trade-off. But we
wouldn't say preloading is unconditionally good, especially when it removes
a lot of agility from the ecosystem.

>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-08 Thread Eric Mill via dev-security-policy
On Thu, Dec 5, 2019 at 12:34 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Looking at it from the perspective of better security, the 'ideal' path
> is that modern clients only trust modern (new) roots, which never issued
> old crappy certs. That is, the path "D -> A -> B -> C" is forbidden, while
> the path "A -> D -> E -> F" is required.


It took me a little bit to parse this, but I think I see what you mean -
that A->D->E->F eliminates any client from having to rely on the old
non-modern intermediate B, while allowing new clients to only trust D and
legacy clients to only trust A. Am I summing that up right?


> Further, if you want to excise past crappy certs, a
> modern browser would require a new root from a CA every few years, such
> that today's "A -> D -> E -> F" becomes tomorrow's "A -> D -> G -> H -> I"
> - where "G -> H -> I" is the path that conforms to all the modern security
> (G will have only issued certs that are modern), while D and A are legacy
> paths for old clients. In order to keep old clients working (a precondition
> to browsers doing the modern security thing), you need *old* clients to
> be able to get to D/A. And you don't want to send (in the TLS handshake)
> "I, H, G, D" to cover all your permutations - that is terrible for
> performance *and* security (e.g. QUIC amplification attacks).
>

For those of us who don't follow all of the attacks out there, how do
longer chains promote QUIC amplification attacks? Is it a DoS vector
similar to NTP amplification? Since obviously minimum chain length can
vary, is there a threshold of cert length where the QUIC amplification risk
goes from acceptable to unacceptable?


So what you want is "I, H" in the TLS handshake, modern clients to know G,
> and 'legacy' clients to know how to get "G" and "D" as needed. AIA is the
> best way to do that, and the most interoperable way, without requiring
> out-of-band predistribution (to *legacy* clients) about G and D.
>
> I don't disagree on the privacy concerns with AIA, but I think folks are
> overlooking the tradeoffs, complexity, or the fact that today we afford CAs
> much greater flexibility than is perhaps desirable in the structure of
> their PKIs. Intermediate preloading *is* valuable, but it does have
> limitations, and those limitations have consequences to the agility of the
> ecosystem. That's not a problem if a client, like Firefox, is going from
> nothing to preloading, but it's much more complex and nuanced if a client
> is going from AIA to preloading, because that's a step to less agility.
>

It sounds like the reason intermediate preloading is an incomplete solution
is primarily due to name-constrained sub-CAs. How big of a presence are
name-constrained sub-CAs, in terms of validation volume among browser
clients which rely on the Web PKI?




-- 
Eric Mill
617-314-0966 | konklone.com | @konklone 