Hi Wayne,

This is helpful and much appreciated!

> On Jul 18, 2023, at 11:15 AM, Wayne Thayer <wtha...@gmail.com> wrote:
> 
> Hi Clint,
> 
> Thank you for helping to unpack my concerns.
> 
> On Mon, Jul 17, 2023 at 2:28 PM Clint Wilson <cli...@apple.com 
> <mailto:cli...@apple.com>> wrote:
>> Hi Wayne,
>> 
>> I’d like to better understand your worry and perhaps interpretation of BR 
>> 6.1.1.3(4) and 4.9.1.1(3,4,16). Just to restate for my benefit, the concern 
>> is that: IF we interpret Tim’s message regarding the testkeys draft as 
>> qualifying the keys present in the draft as “[All] CAs [subscribed to the 
>> Servercert-wg list being] made aware that [a future] Applicant’s Private Key 
>> has suffered a Key Compromise….” THEN, in a similar situation, any 
>> servercert-wg member could share any number of compromised keys here and, 
>> theoretically, bloat (with no upper bounds) the set of known compromised 
>> keys a CA has to retain and check in order to reject certificate requests as 
>> needed to meet the requirements of 6.1.1.3 WHILE also not necessarily 
>> increasing the meaningful security provided by the BRs. Is that correct?
>> As a concrete example (an extreme I could imagine), someone could generate, 
>> and potentially delete, 100 or 100,000,000,000 keypairs easily (for a value 
>> of “easily” most associated with effort rather than time or resources), 
>> share a CSV, or even just pointer to a repository/document, with the 
>> Servercert-wg, and (if interpreted per your worry) cause a bunch of keys 
>> never intended to be used for actual certificate issuance to be forever part 
>> of a set of keys which all CAs must check every received certificate request 
>> against.
>> 
> 
> The magnitude of the problem is not my primary concern, but that is something 
> to consider.

Agreed, though I do think it highlights that there may be multiple weaknesses 
in the current wording of the BRs related to this topic and likely some overlap 
with other “weak key” checking requirements. I apologize for the distinct lack 
of polish this is likely to have, but just to share some personal thoughts on 
the matter, the possible weaknesses that have become clear(er) to me in this 
discussion are:

1. Method and/or means of communication 
I think this is more concretely the primary concern you have at this point or 
at least was the primary point of your initial response.
The BRs only stipulate a required action (rejecting certificate requests and/or 
revoking issued certificates) based on the receipt (being made aware) of 
certain types of communication (proof that a key has been compromised). Trying 
to dig one level deeper and describing a scenario that, I believe, maps to the 
text of the BRs today: 
- The sender of the information is not mentioned in these sections of the requirements, so the “message” can come from anyone or anywhere
- But that message must be communicated to the CA (“the CA” being another term of some ambiguity in this specific context)
- AND the communication must convey keys — typically just the public key component of a key pair whose private key is compromised
  - Receiving private keys themselves qualifies, of course, but is absolutely not necessary and should be heavily discouraged (let’s please not go through that again…)
- AND prove or demonstrate to the CA that the keys have indeed been compromised

I think, in the context of the “compromised keys” we’re talking about, the communication with the CA must supply specific keys, not broad classes of keys based on some contained attribute or something.
So, essentially, the BRs say that in the event of communication to the CA that 
demonstrates certain keys are compromised, the CA must both revoke any 
certificates they’ve issued which contain those keys and, from that point on, 
reject any certificate requests containing those keys. 
There’s sort of an interesting little interaction here where, hypothetically, a 
certificate request could have been received by a CA prior to the compromised 
key being reported, in which case it seems the CA could still issue the 
certificate, but would then need to revoke it within 24 hours… but I digress.
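To make that little timing interaction concrete, here’s a rough sketch of the decision as I read it — entirely illustrative, with the function and names my own invention; only the 24-hour revocation window comes from BR 4.9.1.1:

```python
from datetime import datetime, timedelta

def handle_request(received_at: datetime, reported_at: datetime):
    """Hypothetical sketch: a request received at or after the compromise
    report must be rejected (6.1.1.3); one received before it could still
    be issued, but the certificate must then be revoked within 24 hours
    of the CA becoming aware (4.9.1.1)."""
    if received_at >= reported_at:
        return ("reject", None)
    return ("issue_then_revoke", reported_at + timedelta(hours=24))

t0 = datetime(2023, 7, 4, 12, 0)

# Request arrived an hour before the key was reported compromised.
action, deadline = handle_request(t0, t0 + timedelta(hours=1))
assert action == "issue_then_revoke"
assert deadline == t0 + timedelta(hours=25)

# Request arrived after the report: straight rejection.
action, deadline = handle_request(t0, t0 - timedelta(hours=1))
assert action == "reject" and deadline is None
```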
AFAICT, this is as far as the BRs set requirements on this topic, so it seems to me there are reasonably two ways this could play out, on either a per-CA or BR-wide basis, from here:
Further specification:
The BRs act as a governing policy over the CA’s policy(ies). The CA could
further specify in their CPS (or other authoritative policy or practices 
document) that the communication referenced in this section must be directed in 
a specific way or to a specific target (i.e. taking “the CA” and mapping it to 
“this form on this website”, taking “made aware” and mapping it to “this report 
shared at this location”, or similar). It seems the CA can thus comply with the 
BRs while also providing additional stipulations of what practices they have in 
place to meet the policy requirements set forth in the BRs. 
Related, I believe there are still limitations on what practices the CA can put 
in place while still complying with the BRs, but this also seems less clearly 
drawn out in the BRs so perhaps this is, in itself, another weakness present in 
this section.
The BRs could themselves further specify that the communication referenced in 
this section must occur in a certain way or meet certain criteria in order to 
trigger the specified actions by the CA.
Notably, these options only work with additional text, authoritative to the 
scope it’s being applied to, which frames the specifics for communicating 
compromised keys. AIUI, such text can’t change the “what” of the requirements 
(e.g. remove the obligation to reject certificate requests for compromised 
keys), it can just provide details on the “how”.
No further specification:
The CA has no further stipulation on how they specifically comply with these 
sections of the BRs, and are thus left with a very loosely defined and broadly 
impacting requirement with which they must comply. The BRs say the CA needs to 
reject and/or revoke if the CA is made aware of compromised keys, and there 
could conceivably be situations in which a CA would need to prove that they 
were not made aware by a given communication.
Using this thread as an example:
It seems possible to argue that the CA is, in fact, not subscribed to or party 
to the emails shared on the servercert-wg list, and that only representatives 
of the CA operating in a role specific to standards development, potentially 
entirely unaffiliated with compliance responsibilities, have been made aware of 
these demonstrably compromised keys, while the CA in amplum remains entirely 
ignorant of the compromised keys. 
It seems similarly possible to argue that the concept of the CA as an 
organization receiving a communication is nebulous and unactionable, that 
representatives of, or people affiliated with, the CA are the only entities 
with which any communication ever really occurs, that it’s unreasonable, if not 
impossible, to expect someone outside the CA to know how/where to communicate 
such information lacking any publicly available guidance from the CA, and 
therefore responsibility falls upon the CA to be able to properly handle and 
route incoming communication to the correct/responsible, but not externally 
identifiable, party.
These aren’t precisely what I’d expect actual arguments to be, nor is my goal 
to produce specific, plausible, or airtight arguments. Rather: two very 
different conclusions seem realistically possible given this scenario (and 
present reality) where no further specification is available beyond the current 
text of the BRs.
After exploring these possibilities (and quite probably missing others), I’m 
personally left with the same conclusion as I shared previously (and it sounds 
like there hasn’t been immediate disagreement with this as a positive 
action/next step either?), which is: further specification is a reasonable way 
to address at least some of the ambiguity around the method and/or means of 
communication necessary to constitute a CA being made aware of compromised keys.
It further seems to me that this additional specification is probably(?) best 
suited for the BRs, but I also haven’t come up with a reason a CA couldn’t do 
something similar themselves in their own document(s). Perhaps some already do 
this?

2. Limitations (or lack thereof) on public key checks
This is more directly related to the scenario I outlined regarding large sets 
of keys being reported as compromised and seems to me to have some common 
properties with the concerns folks have shared related to SC-059, most 
especially Debian weak keys. Since there already appears to be overlap, I’m 
going to include both here; I hope that doesn’t muddy the waters or quality of 
discussion.
Note: in an effort to make the below more generally applicable, I’m 
intentionally ignoring the upper bound represented by the specific number of 
valid certificates issued by a CA.
Currently in the BRs, at least as far as I read in the text — though I’m happy 
to be corrected — there are:
- No upper bounds on the potential number of keys a CA may need to check against prior to accepting a certificate request, aside from a general policy requirement of having been made verifiably aware that each key is already compromised or is weak to compromise
- No upper bounds on the potential number of certificates a CA may need to revoke when made aware of a compromised key or a method to identify a weak key
  - I think it’s subsumed by the above, but this also means there are no upper bounds on the potential number of certificates whose public keys a CA must verify are not weak to known methods of key compromise
- No upper bounds on the number of weak keys which may need to be generated in order to sufficiently identify impacted keys for the purposes of certificate request rejection or certificate revocation
  - Though functionally this depends on the method of attack, part of the point is that checking keys may have different efficiencies for different vulnerabilities, and presumably CAs are going to gravitate towards more efficient methods… some vulnerabilities may have a relatively inefficient “most efficient” method, full stop.
- No temporal upper bounds for retention of known compromised keys or identification of weak keys, and for checking against both sets
- No criteria related to assessing or accounting for specific risk(s) associated with a given compromised key
  - This probably encompasses a few things, but I’m specifically thinking of the fact that a key does not need to be present in a certificate in order to be considered compromised or weak; that is, the attributes “compromised” and “weak” are tied to the key pair itself and are irreversible booleans — once compromised or weak, always compromised or weak
  - Along that line, all compromised and weak keys are treated equally, assuming CA awareness and so long as the key would, absent the property of being compromised or weak, otherwise be considered valid
- No identified set of compromised keys which all CAs must reject — that is, a brand new CA starts with zero compromised keys and each CA “builds” their own set of compromised keys.
  - The potential negative interaction here with mergers, acquisitions, corporate (re)structuring, etc. is possibly(?) interesting
  - Notably, the above does not hold true in the same way for weak keys, where even a brand new CA starts with the requirement of being able to identify and reject/revoke public keys whose private key can be easily computed from the public key via any method of which they are aware
  - See “Method and/or means of communication” for additional consideration of what could constitute a CA being aware, though there are also likely non-negligible differences in the nuances between weak keys (usually broad classes of keys identifiable or generate-able algorithmically) and compromised keys (usually reported as individual or smaller sets of specific keys, though often encompassing very large sets when taken in aggregate)
- No limitations on which possible methods of computing a private key from a public key must be checked by a CA, aside from the requirement of the CA having been made aware of such a method
  - This extends, in a way that’s worth calling out, to when new methods of compromising weak keys are identified, as the requirement is all-encompassing so long as the CA is aware of the method
- No specifics as to the allowed temporal delta between the time when a CA is made aware of a compromised or weak key and when it must begin rejecting certificate requests containing that key
  - There is a timeline for revocation though, so that’s nice at least
Basically there’s quite a bit that's not in the BRs when it comes to how the 
requirements to reject certificate requests and/or revoke certificates interact 
with compromised and weak keys — and I don’t think I managed to create a 
comprehensive list either (though I also wouldn’t be surprised if it's 
internally duplicative or repetitive.)
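One mitigating observation on the “no upper bounds” items: the per-request cost of these checks need not grow with the size of the set. Debian-weak-key-style checking, as I understand it, typically pre-generates the finite keyspace the broken generator could emit — once — and retains only fingerprints, so each incoming key is a constant-time set lookup. The open-ended cost lands on generation and retention, not request processing. A purely illustrative sketch (seed format and counts invented for the example):

```python
import hashlib

def fingerprint(spki_der: bytes) -> str:
    """Hash a (stand-in) public key to a fixed-size fingerprint."""
    return hashlib.sha256(spki_der).hexdigest()

# Pretend a broken RNG could only ever emit these 32768 keys; precompute
# the full set of fingerprints once, up front.
weak_fingerprints = {fingerprint(b"weak-seed-%d" % i) for i in range(32768)}

def is_known_weak(spki_der: bytes) -> bool:
    """Per-request check: a single set lookup, regardless of set size."""
    return fingerprint(spki_der) in weak_fingerprints

assert is_known_weak(b"weak-seed-42")
assert not is_known_weak(b"key-from-a-healthy-rng")
```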
So how likely is any or all of this to actually be a major operational problem?
It seems to be at least a minor problem for at least some CAs, based on 
comments around SC-059 and descriptions of how Debian weak key checking already 
constituting a large lift (whether absolute or relative to the perceived value) 
for CAs in processing incoming certificate requests. While I believe the 
concerns shared were primarily anecdotal (meaning I don’t recall that specific 
data was shared e.g. what percentage of certificate request processing time is 
taken up by these compromised/weak key checks in total and/or individually), 
the result of the ballot clearly indicates to me that there is some strain on 
CAs specifically related to the current perceived and implemented requirement 
to check for weak and compromised keys.
These concerns are and were definitely heard. I also want to highlight that I personally (also?) subscribe to the view that CAs are not responsible solely for verifying the identity placed in a certificate; they play a role in ensuring certificates are well-formed, up to and including ensuring the keys being certified can be reasonably relied upon to successfully enable secure authentication of servers. That’s not to say certificates are the only thing relied upon for success in this regard — server software, for example, would also need to be sufficiently secure as to not compromise the server’s connections over HTTPS or to not bleed the server’s private key. Rather, I hold the view that the certificates themselves should also be reliable, and while the key comes from the applicant/subscriber, the signature representing certification of that key as sound comes from the CA.
This may come across as a mostly philosophical view, and it may be somewhat, but it’s also the reality of the Web PKI today as I understand it. I can appreciate that there are probably as many opinions as there are people represented here as to whether and to what extent the BRs should require more of CAs than verifying an identity. But presumably we’re in agreement that, today, the BRs do require CAs to do more than verify an identity — that today the BRs do require CAs to reject certificate requests and revoke certificates containing known-to-the-CA weak or compromised keys?
Unlike with the concern around lacking specificity and guidance for the method 
or means of communication, I don’t have a proposed change around these 
requirements. While I'm, hopefully but certainly imperfectly, conscientious of 
what these requirements do and could require, I think the requirements 
themselves are directionally correct. While many conceivable limitations around 
weak key and compromised key checking don’t currently exist, at least some of 
that seems inherent to what keys are: math. There’s an awful lot of numbers out 
there, and we rather intentionally make use of very large numbers and very 
large numbers of numbers. To probably oversimplify this, when the math breaks 
easily — whether because too small of numbers are used, private numbers are 
shared publicly, someone figures out how to get a private number from a public 
number, or any of the other ways crypto can crumble — I think we should try to 
keep those numbers away from pretending to secure people’s data.

>  
>> Notable to this worry, I think, is that nothing about the language in the 
>> BRs today indicates to me that Tim’s message or the above, somewhat silly, 
>> scenario would not be interpreted to qualify as a reason to reject those 
>> associated keys. That is, if a CA subscribed to this mailing list and 
>> conforming to the BRs, issued a certificate to a key in the testkeys draft 
>> after July 4, 2023, it seems that the BRs would consider that a misissuance 
>> as there’s no limitation or specification regarding what (or whether) any 
>> specific bar is met in order to constitute “the CA [being] made aware”. 
>> 4.9.3 I think comes quite close, but stops short of saying something like 
>> “For the purposes of requirements in 4.9.1.1, 4.9.1.2, and 6.1.1.3, the CA 
>> MAY require a Certificate Problem Report to be submitted in order to 
>> constitute being made aware of reasons to reject certificate requests or 
>> revoke certificates.” which I think would remove the current ambiguity 
>> regarding what needs to happen in order for a CA to need to begin rejecting 
>> certificate requests for compromised keys. (Note, I’m not saying this change 
>> is a good or well-thought-out idea, just what came to mind as one option to 
>> increase clarity in a way that would address the worry raised.)
>> 
> 
> My understanding is that you believe the BRs require every CA reading this 
> message to add the keys in draft-gutmann-testkeys-04 to their blocklist.  
> That is precisely what I was worried about. I seriously doubt that all CAs 
> have made that same determination. I'm not opposed to your interpretation, 
> but as Tim stated, 'If it is an auditable minimum requirement, we need to be 
> pretty explicitly clear what the minimum bar is.’
More specifically, my interpretation is that if a CA is made aware of any 
compromised key, including, but not limited to, the keys clearly and 
intentionally printed in plaintext in the draft-gutmann-testkeys-04 document, 
then the CA is required to do some things, up to and including rejecting 
certificate requests and revoking certificates containing those keys. I agree 
that it’s unlikely all CAs have made this same determination, but I’m 
nonetheless optimistic that most CAs are fully capable of reasonably assessing 
if they’ve been made aware that these testkeys are compromised. Can we improve how CAs are made aware of these things, making them more explicit and removing some ambiguity? I hope so, and think so, but I also don’t think that there being 
room for improvement is a reason to reject this email list as a pretty decent 
notification.

That said, where clarity seems most valuable to me (but I’d like to hear if 
there’s consensus on this) is with better defining how a CA is made aware of 
something; an “allow-list” of valid channels seems likely to be better, in 
multiple ways, than attempting to build (or assume the existence of) a 
“deny-list” — would that be vainly trying to answer the question of whether 
each of the vast and unconstrained set of communication mediums (skywriting? 
flag signaling?) could qualify? 
So yeah, that’s where I’d prefer to focus, personally, if folks want to ensure 
that defined constraints exist around how a CA is made aware of this stuff.

> 
> On a side note, under your interpretation of 6.1.1.3, even accepting a 
> certificate request for one of the keys contained in that draft is a BR 
> violation.
Specifically, my interpretation of "The CA SHALL reject a certificate request 
if…. [t]he CA has previously been made aware that the Applicant's Private Key 
has suffered a Key Compromise” is that any certificate request which includes 
the public key counterpart of a key known to the CA to be compromised, at or 
prior to the point of receiving said certificate request, must be rejected. So 
really it’s the same as above, in that if the CA has been made aware of these 
compromised keys, they’re required to reject certificate requests for those 
keys.
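A minimal sketch of that reading, purely for illustration — the names and the SPKI-fingerprint representation are my own choices, not anything prescribed by 6.1.1.3:

```python
import hashlib

# The set of fingerprints of keys the CA "has previously been made aware"
# are compromised, built up as reports arrive.
known_compromised: set = set()

def made_aware(public_key_der: bytes) -> None:
    """Record a compromised-key report (however it was communicated)."""
    known_compromised.add(hashlib.sha256(public_key_der).hexdigest())

def must_reject(request_public_key_der: bytes) -> bool:
    """True when the request's public key was reported compromised at or
    prior to the point of receiving the certificate request."""
    return hashlib.sha256(request_public_key_der).hexdigest() in known_compromised

made_aware(b"testkey-printed-in-a-public-draft")
assert must_reject(b"testkey-printed-in-a-public-draft")
assert not must_reject(b"some-fresh-key")
```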

>  
>> This is separate, in my mind, to any potential interpretation that would 
>> expect CAs to go out and look for compromised keys elsewhere. “Looking" 
>> implies to me a proactive effort, whereas “made aware” is much more passive 
>> and would seemingly include any receipt of information by the CA (or its 
>> official representatives?). More to the point, I don’t see any implication 
>> that CAs should be looking for compromised keys in the current BR text, 
>> which hopefully helps with part of the worry (though adding something like 
>> that as a requirement has been discussed before, iirc, especially in the 
>> context of pwnedkeys.com <http://pwnedkeys.com/> and I could see that, and 
>> related topics, coming up again with 
>> https://www.ietf.org/archive/id/draft-mpalmer-key-compromise-attestation-00.txt).
>> 
> 
> Has everyone reading this now been made aware of pwnedkeys.com? 
> <http://pwnedkeys.com/?>??
I think this may have been intended rhetorically, but I do perceive a meaningful difference between a) a service through which a visitor-provided key can be assessed for compromise, b) plaintext private keys in an open and publicly accessible document, and c) the question of which of those meets the requirement that a CA be provided with evidence that a reported compromised key is actually compromised. Everyone’s been made aware of a website, but not of any specific compromised keys. I think much more substantial updates would be needed to 
compromised keys. I think much more substantial updates would be needed to 
incorporate a service like pwnedkeys, but as Tim alluded to, if or when we 
manage that, we might also be able to simplify all of this a ton.

I suspect there wouldn't be consensus around this, but if there happened to be, 
maybe it’d be worthwhile to note, perhaps as part of the definition of Key 
Compromise, that a full and complete plaintext private key available publicly 
is inherently compromised?
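If something like that definition existed, the check itself could be fairly mechanical — here’s an illustrative detector that only matches the PEM framing of an unencrypted private key (not a real vetting tool; the regex and names are assumptions for the sketch):

```python
import re

# Matches a complete, unencrypted PEM private key block (RSA/EC/DSA or
# PKCS#8). Encrypted keys use different framing and aren't matched here.
PEM_PRIVATE_KEY = re.compile(
    r"-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----"
    r"[A-Za-z0-9+/=\s]+"
    r"-----END (?:RSA |EC |DSA )?PRIVATE KEY-----"
)

def contains_plaintext_private_key(document: str) -> bool:
    """True if the document publishes a full plaintext private key."""
    return PEM_PRIVATE_KEY.search(document) is not None

doc = (
    "Some draft text...\n"
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "MIIBOgIBAAJBAK8Qabc123+/=\n"
    "-----END RSA PRIVATE KEY-----\n"
)
assert contains_plaintext_private_key(doc)
assert not contains_plaintext_private_key("only a public key here")
```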

> 
> Aside from highlighting the ambiguity, I agree that 'been made aware' does 
> not imply 'seeking out'.
I’m glad for this if for no other reason than I’ve opened more than enough cans 
of worms for one week, let alone one day :D

>  
>> While I don’t foresee near-term, major, and negative impact from my 
>> interpretation of the BRs, I do think we can maintain the intent of the 
>> requirement without leaving it as open as a rough analogue to a zip bomb. 
>> While I proposed something purely for illustration above, I’ve also filed 
>> https://github.com/cabforum/servercert/issues/442 to track this if there’s 
>> further interest in ensuring the BRs could address this worry.
>> 
> 
> Thank you for filing that issue. Relying on problem reporting mechanisms is a 
> reasonable solution that might be relatively easy to build consensus around.
If there are any additional thoughts or feedback, I’d readily welcome them here 
or on GitHub. Maybe most especially if you disagree with the general direction 
of the approach, that’d be great to know sooner than later. Time permitting I 
may try to put together a PR for the change, but if someone else wants to grab 
it, feel free! Given how briefly this discussion has been going on, I don’t 
trust I have a good feel regarding how critical folks think any of this is.

Thanks very much for the engaging and thought-provoking discussion, all. I’ve 
gotten a lot out of the perspective shared already, but I hope others have 
benefited in some way as well. Cheers!
-Clint

> 
> - Wayne
> 
>> As always, please let me know if I’ve missed some crucial detail or 
>> interaction here that’s led me to an erroneous conclusion on the topic. 
>> Cheers!
>> -Clint


_______________________________________________
Servercert-wg mailing list
Servercert-wg@cabforum.org
https://lists.cabforum.org/mailman/listinfo/servercert-wg
