On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen <pzbo...@gmail.com> wrote:

> >> For the certificates you identified in the beginning of this thread,
> >> we know they have a certain level of key protection - they are all
> >> required to be managed using cryptographic modules that are validated
> >> as meeting overall Level 3 requirements in FIPS 140.  We also know
> >> that these CAs are monitoring these keys, as they have an obligation in
> >> BR 6.2.6 to "revoke all certificates that include the Public Key
> >> corresponding to the communicated Private Key" if the "Subordinate
> >> CA’s Private Key has been communicated to an unauthorized person or an
> >> organization not affiliated with the Subordinate CA".
> >
> >
> > Sure, but we know that, for the vast majority of clients, such revocation
> is largely symbolic while these certificates exist, and so the security
> goal cannot be reasonably demonstrated while that Private Key still exists.
> >
> > Further, once this action is performed according to 6.2.6, it disappears
> with respect to the obligations under the existing auditing/reporting
> frameworks. This is a known deficiency of the BRs, which you rather
> comprehensively tried to address when representing a CA member of the
> Forum, in your discussion about object hierarchies and signing actions. A
> CA may have provisioned actions within 6.2.10 of their CPS, but that's not
> a consistent baseline that they can rely on.
> >
> > At odds here is how to square the CA performing the action with not
> achieving the result of that action?
>
> As long as the key is a CA key, the obligations stand.
>

Right, but we're in agreement that these obligations don't negate the risk
posed - e.g. of inappropriate signing - right? And those obligations
ostensibly end once the CA has revoked the certificate, since the status
quo today doesn't require key destruction ceremonies for CAs. I think that
latter point is something discussed in the CA/Browser Forum with respect to
"Audit Lifecycle", but even then, that'd be difficult to work through for
CA keys that were compromised.

Put differently: If the private key for such a responder is compromised, we
lose the ability to be assured it hasn't been used for shenanigans. As
such, either we accept the pain now, and get assurance it /can't/ be used
for shenanigans, or we have hope that it /won't/ be used for shenanigans,
but are in an even worse position if it is, because this would necessitate
revoking the root.
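To make the risk concrete: with a compromised delegated-responder key,
forging a "GOOD" response for an already-revoked certificate is a few
lines of code. A minimal sketch using Python's "cryptography" package
(all file names here are hypothetical):

    # Forge a "GOOD" response for an already-revoked certificate,
    # using a stolen delegated-responder key. File names hypothetical.
    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    revoked_cert = x509.load_pem_x509_certificate(
        open("revoked.pem", "rb").read())
    issuer_cert = x509.load_pem_x509_certificate(
        open("issuer.pem", "rb").read())
    responder_cert = x509.load_pem_x509_certificate(
        open("responder.pem", "rb").read())
    stolen_key = serialization.load_pem_private_key(
        open("responder-key.pem", "rb").read(), password=None)

    now = datetime.datetime.now(datetime.timezone.utc)
    builder = (
        ocsp.OCSPResponseBuilder()
        # Assert GOOD for a certificate the CA believes it revoked.
        .add_response(
            cert=revoked_cert,
            issuer=issuer_cert,
            algorithm=hashes.SHA1(),
            cert_status=ocsp.OCSPCertStatus.GOOD,
            this_update=now,
            next_update=now + datetime.timedelta(days=7),
            revocation_time=None,
            revocation_reason=None)
        .responder_id(ocsp.OCSPResponderEncoding.HASH, responder_cert)
        .certificates([responder_cert]))
    forged = builder.sign(stolen_key, hashes.SHA256())
    print(forged.response_status)  # OCSPResponseStatus.SUCCESSFUL

Nothing in the protocol distinguishes that response from a legitimate
one; any client that accepts the responder certificate accepts the
forgery.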

It is, at the core, a gamble on the order of "Pay me now vs pay me later".
If we address the issue now, it's painful up-front, but consistent with the
existing requirements and mitigates the potential-energy of a security
mistake. If we punt on the issue, we're simply storing up more potential
energy for a more painful revocation later.

I recognize that there is a spectrum of options here. In an abstract sense,
the calculus of "the key was destroyed a month before it would have been
compromised" is the same as "the key was destroyed a minute before it would
have been compromised" - the risk was dodged. But "the key was destroyed a
second after it was compromised" is doom.


> > Pedro's option is to reissue a certificate for that key, which, as you
> point out, keeps the continuity of CA controls associated with that key
> within the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.
> >   - However, controls like you describe are not ones that are audited,
> nor consistent between CAs
> >   - They ultimately rely on the CA's judgement, which is precisely the
> thing an incident like this calls into question, and so it's understandable
> not to want to throw "good money after bad"
>
> To be clear, I don't necessarily see this as a bad judgement on the
> CA's part.  Microsoft explicitly documented that _including_ the OCSP
> EKU was REQUIRED in the CA certificate if using a delegated OCSP
> responder (see
> https://support.microsoft.com/en-us/help/2962991/you-cannot-enroll-in-an-online-certificate-status-protocol-certificate
> ).
> Using a delegated OCSP responder can be a significant security
> enhancement in some CA designs, such as when the CA key itself is
> stored offline.
>

Oh, I agree on the value of delegated responders, for precisely that
reason. I think the bad judgement was in not trying to find an alternative
solution. Some CAs did, and I think that highlights their strength. Other
CAs, no doubt, simply said "Sorry, we can't do it" or "You need to run a
different platform".

I'm not trying to suggest bad /intentions/, but I am trying to say that
it's bad /judgement/, no different from the CA/B Forum discussions about
IP addresses or internal server names. The intentions were admirable; the
execution was inappropriate.
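For concreteness, the problematic profile is mechanical to detect:
basicConstraints CA:TRUE combined with the id-kp-OCSPSigning EKU, which
per RFC 6960 makes the certificate a delegated responder for its issuer.
A minimal sketch of such a lint, again with Python's "cryptography"
package (the file name is hypothetical):

    # Flag the profile at issue: CA:TRUE plus id-kp-OCSPSigning.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    def is_ca_with_ocsp_signing(cert):
        try:
            bc = cert.extensions.get_extension_for_oid(
                ExtensionOID.BASIC_CONSTRAINTS).value
            eku = cert.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value
        except x509.ExtensionNotFound:
            return False
        return bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku

    cert = x509.load_pem_x509_certificate(
        open("subordinate-ca.pem", "rb").read())
    if is_ca_with_ocsp_signing(cert):
        print("CA certificate doubles as a delegated OCSP responder")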


> > The question of controls in place for the lifetime of the certificate is
> the "cost spectrum" I specifically talked about in
> https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13530.html
> , namely "Well, can't we audit the key usage to make sure it never signs a
> delegated OCSP response".
> >
> > As I mentioned, I think the response to these incidents is going to have
> to vary on a CA-by-CA basis, because there isn't a one-size-fits-all
> category for mitigating these risk factors. Some CAs may be able to
> demonstrate a sufficient number of controls that mitigate these concerns in
> a way that some browsers will accept. But I don't think we can treat this
> as "just" a compliance failure, particularly when the failure means that
> the intended result of a significant number of obligations cannot be met
> because of it.
>
> As you pointed out, I pushed for a number of improvements in WebTrust
> audits when I was involved with operation of a publicly trusted CA.
> One of those resulted in practitioner guidance that is relevant here.
> In the guidance at
>
> https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services/practitioner-qualification-and-guidance
> , you can see two relevant items:
>
> - Reporting When Certain Criteria Not Applicable as Services Not Performed
> by CA
> - Disclosure of Changes in Scope or Roots with no Activity
>
> This makes it clear that auditors can report on the absence of
> activity.  In an attestation engagement, the auditor can also provide
> an opinion on statements in the attestation that can be tested,
> regardless of whether they match a specific criterion in the audit
> criteria.


Right, and you know I'm familiar with these options :) I'm thoroughly glad
WebTrust has been responsive to the needs of the ecosystem.

That said, I think it's reasonable to question exactly what controls are
used, because different auditors' judgement as to the level of assurance
needed to report on those criteria is going to differ, just as we see
different CAs' responses differ.

Let's assume "ideal" CA behaviour, which would be to use the Detailed
Controls Report to disclose the controls tested as part of forming the
basis of the opinion. Could this provide an appropriate level of
assurance? Very possibly! But ingesting and examining that report is
simply a cost passed on, from the CA, to all the Relying Parties, whose
interests Browsers generally do the bulk of representing. Using your
example of including these statements means that, for the lifetime of the
CA associated with that key, browsers would have to be fastidious in
their inspection of reports.

And what happens if things go wrong? Well, we're back to revoking the root.
All we've done is incur greater cost / "trust debt" along the way.


> Given this, I believe relying parties and root programs could
> determine there are sufficient controls via audit reporting.  The CA
> can include statements in the assertion that (as applicable) the keys
> were not used to sign any OCSP responses or that the keys were not
> used to sign OCSP responses for certificates issued by CAs other than
> the CA identified as the subject of the CA certificate where the key
> is bound to that subject.
>
> I think (but an auditor would need to confirm) that an auditor could
> be in a position to make statements about past periods based on the
> controls they observed at the time, as recorded in their work papers.
> For example, a CA might have controls that a given key is only used
> during specific ceremonies where all the ceremonies are known to not
> contain inappropriate OCSP response signing.  Alternatively the
> configuration of the HSM and attached systems that the auditor
> validated at the time may clearly show that OCSP signing is not
> possible and the auditor may have observed controls that the key is
> restricted to only be used with the observed system configuration.
>

CAs have proposed this before, but it again raises the question of
whether it provides a suitable degree of assurance. This is, for example,
part of why new CAs coming into Mozilla's program need to have their keys
subjected to the BRs from creation: the level of assurance with respect
to these "pre-verification" events is questionable.
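For what it's worth, the sort of artifact review you describe could
certainly be scripted. A minimal sketch, assuming (hypothetically) that
every object a key signed during a ceremony is archived as a DER blob:

    # Classify each archived artifact; flag anything that parses as an
    # OCSP response. The archive layout is hypothetical.
    from pathlib import Path

    from cryptography import x509
    from cryptography.x509 import ocsp

    def classify(blob):
        try:
            x509.load_der_x509_certificate(blob)
            return "certificate"
        except ValueError:
            pass
        try:
            ocsp.load_der_ocsp_response(blob)
            return "ocsp-response"
        except ValueError:
            return "other"

    for path in Path("ceremony-artifacts").glob("*.der"):
        if classify(path.read_bytes()) == "ocsp-response":
            print("unexpected OCSP response signed:", path)

But such a check is only as strong as the completeness of the archive,
which is precisely the assurance gap in question.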


> I agree that we cannot make blanket statements that apply to all CAs,
> but these are some examples where it seems like there are alternatives
> to key destruction.
>

Right, and I want to acknowledge that there are some potentially viable
paths specific to WebTrust that, in an ideal world, could provide the
assurance desired (I have no such faith with respect to ETSI, precisely
because of the nature and design of ETSI audits). However, the effect of
that assurance is to shift the cost of supervision and maintenance onto
the Browser, and I think that's a reasonable thing to be concerned about.
At best, it would only be a temporary solution, and the cost would only
be justified if there were a clear set of actionable, systemic
improvements attached to it that demonstrated why this situation was
exceptional, despite being a BR violation.

This is the cost calculus for CAs to demonstrate in their incident report,
but I think the baseline expectation should be set that it's asking for
significant cost to be taken on by consumers/browsers, as a way of
offsetting the cost to any of the Subscribers of that CA. In some cases, it
may be a justified tradeoff, particularly for CAs that have been exemplary
models of best practices and/or systemic improvements. In other cases, I
think it's asking too much to be taken on faith, and too much credit to be
extended, in what is fundamentally an "unjust" model in which a CA commits
to something but then, when asked to actually deliver on that commitment,
reneges on it.
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
