On Mon, Jul 6, 2020 at 1:12 AM Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

> Ryan's response on
> https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems
> unreasonably harsh (too much "bad faith" in affected CAs, even if these
> CA Certificates were operated by the Root Operator).


Then revoke within 7 days, as required. That’s a discussion with WISeKey,
not HARICA, and HARICA needs to have its own incident report and be judged
on it. I can understand wanting to wait to see what others do first, but
that’s not leadership.

The duty is on the CA to demonstrate nothing can go wrong and nothing has
gone wrong. Unlike a certificate the CA “intended” as a responder, there is
zero guarantee about the controls, unless and until the CA establishes the
facts around such controls. The response to Pedro is based on Peter Bowen’s
suggestion that controls are available, and uses those controls.

I can understand why, as an ETSI-audited CA, you might balk, because the
same WebTrust controls aren’t available and the same assurance isn’t
possible. The baseline expectation is that you will revoke in 7 days.
That’s not unreasonable; that’s the promise you made to the community.

It’s understandable that this turns out to be more difficult than you
thought, and that you want more time to mitigate, to avoid disruption. In
that case, you’re expected to file an incident report on the revocation
delay, in addition to the incident report on the certificate profile issue
that was already filed, examining why you’re delaying, what you’re doing to
correct that going forward, and your risk analysis. You need to establish
why and how nothing can go wrong: simply saying “it’s a CA key” or “trust
us” surely can’t be seen as sufficient.

There are auditable
> events that auditors could check and attest to, if needed, for example
> OCSP responder configuration changes or key signing operations, and
> these events are kept/archived according to the Baseline Requirements
> and the CA's CP/CPS. This attestation could be done during a "special
> audit" (as described in the ETSI terminology) and possibly a
> Point-In-Time audit (under WebTrust).


This demonstrates a flawed understanding of the level of assurance these
audits provide. A Point-in-Time audit doesn’t establish that nothing has
gone wrong or will go wrong; only that, at a single instant, the
configuration looks good enough. The very moment the auditors leave, the CA
can configure things to go wrong, and that assurance is lost. I further
believe you’re confusing this with an Agreed Upon Procedures report.

In any event, the response to WISeKey acknowledges a path forward that
relies on audits. The Relying Party bears all the risk in accepting such
audits. The path you describe above, without any further modification,
just changes “trust us (to do it right)” to “trust our auditor”, which is
just as risky. I outlined a path to “trust, but verify,” to allow some
objective evaluation. Now it just seems like “we don’t like that either,”
recycling old proposals that are insufficient.

Look, the burden is on the CA to demonstrate how nothing can go wrong or
has gone wrong. This isn’t a one-size-fits-all solution. If HARICA has a
specific proposal, filing it before the revocation deadline, showing your
work and describing your plan and timeline, is what’s expected. It’s the
same expectation as before this incident, and consistent with Ben’s
message. But you have to demonstrate why, given the security concerns,
this is acceptable, and “just trust us” can’t remotely be seen as
reasonable.

We


Who is “we” here? HARICA? The CA Security Council? The affected CAs in
private collaboration? It’s unclear which of the discussions taking place
is being referenced here.

If this extension was standardized, we would probably not be having this
> issue right now. However, this entire topic demonstrates the necessity
> to standardize the EKU existence in CA Certificates as constraints for
> EKUs of leaf certificates.


This is completely the *wrong* takeaway.

Solving this at the CABF level, via profiles, would clearly work. If the
OCSPSigning EKU were prohibited from appearing alongside other EKUs, as
proposed, this issue would have been resolved. There’s no guarantee that a
hypothetical specification would have resolved it, since the ambiguity is
not with respect to the EKU in a CA cert; it’s whether or not the profile
for an OCSP responder is allowed to assert the CA bit. This *same*
ambiguity also exists for TLS certs, and Mozilla has similar non-standard
behavior there that prevents a CA cert from being a server cert unless
it’s also self-signed.
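
To make the problematic combination concrete, here’s a minimal sketch (my
own illustration, using Python’s “cryptography” library; not anyone’s
production linter) that flags a certificate asserting both the CA bit and
the id-kp-OCSPSigning EKU:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def is_ca_with_ocsp_signing(cert: x509.Certificate) -> bool:
        # The combination at issue: basicConstraints CA=true together with
        # an extendedKeyUsage that includes id-kp-OCSPSigning.
        try:
            bc = cert.extensions.get_extension_for_class(
                x509.BasicConstraints).value
            eku = cert.extensions.get_extension_for_class(
                x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False
        return bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku

    # Usage (the file name is hypothetical):
    # cert = x509.load_pem_x509_certificate(
    #     open("subordinate-ca.pem", "rb").read())
    # print(is_ca_with_ocsp_signing(cert))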


There was also an interesting observation that came up during a recent
> discussion.


You mean when I dismissed this line of argument? :)

As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be
> normative constraints to *end-entity Certificates*, not CA Certificates.
> Should RFC 6960 need to be read in conjunction with RFC 5280 and not on
> its own?


Even when you read them together, nothing prohibits these EKUs from
appearing on CA certificates, nor prohibits their semantic interpretation
there. As mentioned above, this is no different than a hypothetical
misissued “google.com” CA cert: the CA bit set, a SAN of “google.com”, a
TLS EKU, and a KU of only certSign/cRLSign. Is that misissued?

The argument you’re applying here would say it’s not misissued: after all,
it’s a CA cert, the KU means it “cannot” be used for TLS, and, using your
logic, the BRs don’t explicitly prohibit it. Yet that very certificate can
be used to authenticate “google.com”, and it shouldn’t matter how many
certificates that hypothetical CA has issued; the security risk is there.
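
For illustration only, here’s roughly what that hypothetical certificate
would look like, sketched with Python’s “cryptography” library (self-signed
here purely so the snippet can run; imagine it chained from a trusted root
instead):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         u"Hypothetical Sub CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        # The CA bit is set...
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        # ...the KU is only certSign/cRLSign (the "cannot be used for TLS"
        # argument)...
        .add_extension(x509.KeyUsage(
            digital_signature=False, content_commitment=False,
            key_encipherment=False, data_encipherment=False,
            key_agreement=False, key_cert_sign=True, crl_sign=True,
            encipher_only=False, decipher_only=False), critical=True)
        # ...and yet it carries a TLS EKU and a SAN of "google.com".
        .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]),
                       critical=False)
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"google.com")]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )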

Similarly, it might be argued that you know it hasn’t been used to attack
Google because, hey, it’s in an HSM, at the CA, so what’s the big deal? The
big deal there, as here, is the potential for mischief and the mere
existence of that cert; no control prohibits the CA from MITMing *using*
their HSM.

If you can understand that, then you can understand the risk, and why the
burden falls to the CA to demonstrate that nothing can, or has, gone wrong.

I support analyzing the practical implications on existing Browser
> software to check if existing Web Browsers verify and accept an OCSP
> response signed by a delegated CA Certificate (with the
> id-kp-OCSPSigning EKU) on behalf of its parent. We already know the
> answer for Firefox.


Note: NSS is vulnerable, which implies Thunderbird (which, from what I can
tell, does not use mozilla::pkix) is also vulnerable.

Do we know whether Apple, Chromium and Microsoft web
> browsers treat OCSP responses signed from delegated CA Certificates
> (that include the id-kp-OCSPSigning EKU) on behalf of their parent
> RootCA, as valid?


Yes

Some tests were performed by Paul van Brouwershaven
> https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d.


As mentioned, those tests weren’t correct. I’ve provided sample test cases
to several other browser vendors, and heard back or demonstrated that
they’re vulnerable. As are the majority of open-source TLS libraries with
support for OCSP.

This
> does not dismiss any of the previous statements of putting additional
> burden on clients or the security concerns, it's just to assess this
> particular incident with particular popular web browsers. This analysis
> could be taken into account, along with other parameters (like existing
> controls) when deciding timelines for mitigation, and could also be used
> to assess the practical security issues/impact. Until this analysis is
> done, we must all assume that the possible attack that was described by
> Ryan Sleevi can actually succeed with Apple, Chrome and Microsoft Web
> browsers.


You should assume this, regardless, with any incident report. That is,
non-compliance arguments on the basis of “oh, but it shouldn’t work” aren’t
really a good look. The CA needs to do their work, and if they know they
don’t plan to revoke on time, they should publish their work for others to
check.

It’s not hard to whip up a synthetic response for testing: I know, because
I’ve done it (more aptly, constructed 72 possible permutations of EKU, KU,
and basicConstraints) and tested those against popular libraries or traced
through their handling of these.
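
Roughly, the shape of such a test, as a simplified sketch (not the exact
permutation harness; loading the certificates and key is assumed): sign a
“good” response for a target certificate directly with the key of the CA
certificate under test, then feed the DER to the client or library being
examined.

    import datetime
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    def build_synthetic_response(target_cert, issuer_cert,
                                 signer_cert, signer_key):
        # signer_cert/signer_key: the CA certificate that asserts
        # id-kp-OCSPSigning and its key, acting as a "delegated responder".
        now = datetime.datetime.utcnow()
        builder = (
            ocsp.OCSPResponseBuilder()
            .add_response(
                cert=target_cert,
                issuer=issuer_cert,
                algorithm=hashes.SHA1(),  # CertID hash, per RFC 6960
                cert_status=ocsp.OCSPCertStatus.GOOD,
                this_update=now,
                next_update=now + datetime.timedelta(days=7),
                revocation_time=None,
                revocation_reason=None,
            )
            .responder_id(ocsp.OCSPResponderEncoding.HASH, signer_cert)
            # Embed the responder cert so clients can verify the signature.
            .certificates([signer_cert])
        )
        return builder.sign(signer_key, hashes.SHA256()).public_bytes(
            serialization.Encoding.DER)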

Mozilla’s only protection seems to be the CA bit check, not the KU
argument Corey made, and, as noted when that check was introduced, it
doesn’t appear that this configuration was prohibited, or that check
required, by any specification.

This is why it is important for CAs to reliably demonstrate that the key
cannot be misused, whether revoking at 7 days or longer, and why any
proposal for “longer” needs to come with some objectively verifiable plan
for demonstrating it wasn’t / isn’t misused, one that does something more
than “trust us (and/or our auditor)”. I gave one such proposal to WISeKey,
and HARICA is free to make its own proposal based on the facts specific to
HARICA.

But, again, I want to stress when it comes to audits: they are not magic
pixie dust we sprinkle on and say “voilà, we have created assurance”. It’s
understandable for CAs to misunderstand audits, particularly if they’re the
ones producing them and don’t consume any. Understanding the limitations
they provide is, unfortunately, necessary for browsers, and my remarks here
and to WISeKey reflect knowing where, and how, a CA could exploit audits to
provide the semblance of assurance while causing trouble. ETSI and WebTrust
provide different assurance levels and different fundamental approaches, so
what works for WebTrust may not (almost certainly, will not) work for ETSI.
Unfortunately, when an incident relies on audits as part of the
justification for delays, this means that those limits and differences have
to be factored in, and cannot just be dismissed or hand-waved away.