(I've changed the subject line to better reflect the direction I'm taking this discussion.)

Nelson B wrote re the alleged inevitability of subjectivity in evaluating CAs:
I don't see it as inevitable.  Impose a floor, an objective one.
You've proposed one, described as "the LCP from TS 102 042".
I've not seen that document, so I cannot speak to its suitability,
but if you think it meets my concerns, I'd say we should use it as a
starting point for the floor.

OK, I've thought about this some more, and I'm going to "think out loud" here, in the hopes that this may lead to some progress on this issue; my apologies for rambling on at length:


With regard to CAs I think we can usefully divide the issues into two general areas: how CAs vet subscribers (i.e., people applying for certs), and how CAs operate in other respects (e.g., signing key protection, continuity of operations, etc.). Let's also assume for the moment that in practice we don't need to worry as much about CA operations issues if the CA has completed audits/evaluations according to the WebTrust, X9.79, or ETSI criteria.

(Some might question that assumption, but I think there's more controversy around subscriber vetting, since it's so closely related to the phishing threat, so I'm going to focus on that for now. Also, it's possible that the LCP in ETSI TS 102 042 may be suitable as a "floor" for CA operations-related issues apart from subscriber vetting; I need some other people's opinions on this. But in any case I need to bound the problem at hand in order to make progress.)

With subscriber vetting I think we can classify potential sources of concerns into three cases as follows, using SSL certs as an example since it's relevant to the phishing problem; I've given each of the following three cases names so that I can easily refer to them later:

Case 1 ("CA knows otherwise"): The CA knowingly issues server certs to entities who do not own the domains in question and are not otherwise acting as authorized agents of the domain owners. (For the sake of argument we're assuming here that such issuance is not contrary to the CA's stated policies, and hence it's conceivable that the CA might be able to pass an audit/evaluation focused on the CA's compliance to its policies.)

Case 2 ("CA knows nothing"): The CA does absolutely no checking whatsoever that applicants for a server cert own the associated domains (or are acting as authorized agents for the domain owners); in practice the applicants might or might not own the domains, the CA doesn't know either way. (Again, we assume that this is per stated CA policy.)

Case 3 ("CA doesn't know enough"): The CA does some amount of checking to verify that applicants own the domains (or are authorized agents), but in our opinion what the CA does is "not enough", however we define that. (And again we assume that whatever the CA is doing is per its stated policies.)

For each of these cases our concerns are really with the substance of the policies under which the CA operates, not whether or not the CA is operating in accordance with those policies.

Now we could certainly try to say in our policy that certain CA policies are not acceptable and would be grounds for rejecting the CA. But first let's take each case in turn and look at three questions:

1. Is the CA behavior in question definitely something we want to reject categorically, or are there CA use cases that we'd consider legitimate and relevant, at least as far as typical users are concerned?

2. How would we craft policy language to accomplish rejection of illegitimate use cases (i.e., illegitimate from our point of view) while not rejecting legitimate use cases?

3. Are the provisions of our policy in sync with the nature of the respective threats?

I'll consider each case in turn.


"CA knows otherwise"

This case seems like it's uniformly a bad thing; after all, a CA operating under a "knows otherwise" policy would deliberately and knowingly issue me or anyone else an SSL server cert for www.paypal.com or www.citibank.com or whatever. Thus a policy that rejected "knows otherwise" CAs would be consistent with protecting typical users against a plausible threat, namely phishing attacks.

This case is also pretty straightforward to craft policy language for; the key is the "knowing" nature of the CA's actions, which in this case would presumably be disclosed in the Certificate Policy or Certification Practice Statement. (Otherwise the CA would be acting contrary to the CP/CPS, and should not be able to pass an audit/evaluation.) "Knowing" and "knowingly" are good policy words: their meaning is reasonably clear and (at least as used here) is not much subject to interpretation.

Would rejecting "knows otherwise" CAs inadvertently have a negative impact on legitimate uses? For example, in my early days at Netscape, when SSL was still new, a prospective customer asked me if there was a way they could monitor their employees' network activities, including in particular outbound SSL connections from their network to external networks. (I'll leave the organization unnamed, for obvious reasons.) I of course informed them that this was contrary to the nature of SSL, which was intended as an end-to-end protocol. However, it's also possible to imagine such an organization setting up a special proxy server inside their firewall that didn't just do SSL tunneling (as in standard proxies), but rather served as a terminating point for SSL connections, with the proxy then initiating separate SSL connections to the true end destinations.

In implementing this setup (which really amounts to an authorized MITM) the organization could have the proxy impersonate the true end points, in the sense of returning SSL server certs that appear to be associated with the external servers/domains, but are actually tied to a private key on the proxy and issued by the organization's own CA. Employees would of course have installed the organization's root CA cert in their browsers, have configured the proxy, and presumably also have consented to monitoring.
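To make the proxy's certificate handling concrete, here's a minimal sketch of the impersonation-cert logic described above. Everything here is hypothetical illustration: the issue_cert callable stands in for the organization's internal CA, and a real implementation would of course use an actual TLS/PKI library rather than these placeholder objects.

```python
# Sketch of how the authorized-MITM proxy might manage impersonation
# certs: on first connection to an external hostname, ask the
# organization's internal CA to issue a cert for that name (signed by
# the org's own root); reuse the cert for later connections.
# issue_cert is a hypothetical stand-in, not a real API.

class ProxyCertStore:
    def __init__(self, issue_cert):
        self._issue = issue_cert   # hostname -> (cert, key) from org CA
        self._cache = {}

    def cert_for(self, hostname):
        # Issue an impersonation cert for this external hostname on
        # first use; later connections to the same host reuse it.
        if hostname not in self._cache:
            self._cache[hostname] = self._issue(hostname)
        return self._cache[hostname]
```

The point of the sketch is just that the certs presented to employees chain to the organization's root (which employees have installed), not to any public CA, which is what makes this an "authorized" MITM rather than an attack.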

Now, what would we do with a (hypothetical) request to add this organization's CA cert to Mozilla/Firefox/etc? Well, we could certainly reject it as not being relevant to typical Mozilla users, being an intranet CA and not a public CA. Even if it were an Internet-based service then it's still arguably not relevant to typical users, since the only people who would really need the cert are people configured to use the proxy in question.

However we'll go an extra step and assume for the sake of argument that the CA in question issues "real" certs of interest to typical users, in addition to the "fake" certs described here. If we still wanted to reject including this CA cert (and I presume we would, based on the phishing-related concerns) then we'd need either "catch all" language as I've previously described (i.e., to allow rejection based on general security concerns) or a specific policy provision applying to the "CA knows otherwise" case.

Are there other possible "CA knows otherwise" use cases that at least some people might consider legitimate, and that a policy might have to allow for? I don't know -- my imagination is probably too limited. But I'll assume for now that the answer is "no", and that having our policy specifically address the "knows otherwise" case is both possible and desirable.


"CA knows nothing"

Now let's turn to the case where the CA does not vet subscribers at all, so, for example, subscribers are free to apply for an SSL server cert under any domain name whatsoever. The practical effect in terms of enabling potential phishing attacks is pretty much the same, but the underlying CA intent may be different, so I've classified this as a different case.

(Not that I'm considering legal issues here, but this distinction is reminiscent of the distinctions that people draw in the case of P2P systems between a P2P network provider knowingly infringing copyrights and such a provider simply providing a service with no detailed knowledge about what people are using it for. Some people -- like lawyers for the RIAA -- claim that this is a distinction without a difference, but I and others would disagree, given the potential for substantial non-infringing uses of P2P networks -- like distributing copies of Firefox.)

Given the implications for phishing, one could argue that we should also reject a CA whose policies permit "knows nothing" issuance of certs. How could one do so in terms of the policy language (considering only the SSL server case for now)? Perhaps we could require that CAs implement "reasonable measures" to verify that applicants own the domains associated with the certs (or are acting as authorized agents for the owners). What exactly are "reasonable measures"? That's subjectivity creeping in again. To go beyond that we really move into the area of implementation and what is "enough" vs. "not enough", so I'll postpone that discussion to the next and final case.
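To give one concrete example of what "reasonable measures" might mean in practice, here's a minimal sketch of a challenge-token check: the CA generates an unguessable token, the applicant must publish it at a location only the domain owner controls (say, a well-known URL or DNS record), and the CA then checks for it. The fetch_token callable is a hypothetical stand-in for the actual HTTP or DNS lookup; this is an illustration of the idea, not a proposal for specific policy language.

```python
import secrets

def start_challenge():
    # Generate an unguessable token that the applicant must publish
    # somewhere under the domain (e.g., a well-known URL or a DNS
    # record) -- something only the domain owner should be able to do.
    return secrets.token_hex(16)

def domain_control_verified(domain, expected_token, fetch_token):
    # fetch_token(domain) stands in for retrieving whatever the
    # applicant actually published at the agreed location; any failure
    # to retrieve it counts as a failed check.
    try:
        published = fetch_token(domain)
    except Exception:
        return False
    return secrets.compare_digest(published, expected_token)
```

Even a mechanical check like this leaves the subjective question open: is publishing a token "enough", or should the CA also demand offline evidence? That's exactly the "good enough" problem discussed below.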

(Incidentally, this is another reason why I'm treating the "CA knows otherwise" case as different from the "CA knows nothing" case, because the relevant policy language would be different.)

If we decided to reject CAs with "knows nothing" policies, could that negatively impact legitimate use cases relevant to typical users? One possible legitimate use case I can think of is a free Internet-based "test CA" service that crypto developers can use to get arbitrary certs generated for use in testing their PKI-enabled software. We could certainly reject such a CA's application (assuming we wished to do so) based on its not being relevant to typical Mozilla users. What if this test CA were combined with a more typical CA under the same root? Then we're in the same situation I described with the "CA knows otherwise" case: we'd have to take advantage of "catch all" policy language allowing rejection for general reasons having to do with security risk, or we'd have to have specific policy language addressing this.


Are there other possible "CA knows nothing" use cases that at least some people might consider both legitimate and of interest to typical users, and that a policy might have to allow for? Again, I don't know, but I'll assume for now that the answer is "no", and that having the policy specifically address the "knows nothing" case is desirable, although doing so is not as straightforward as in the "knows otherwise" case. So on to the final case...


"CA doesn't know enough"

Last (but not least) we consider the case of CAs whose policies involve some sort of vetting of subscribers, but where in our opinion the vetting is not "good enough". This is an area where I believe subjectivity is inevitable; let me be absolutely clear on what I mean by this:

Certainly we can come up with some set of specific and "objective" requirements on what CAs should do to vet subscribers: "provide full name, address, date of birth, etc.", "show up in person with passport or other national identity card", "provide evidence of organizational affiliation on organization letterhead", and so on. That is what lots of people have done, including ETSI and the Electronic Authentication Partnership. But that is most emphatically *not* the point I am making; the point of our proposed CA cert policy is *not* to be a "how to prove your identity" checklist.

Rather the point is: how do we decide that a given set of measures to vet CA subscribers -- the set of measures that we presumably want to enshrine in our policy -- is the minimal set that is "good enough", and that dropping even one minor element from that minimal set makes the vetting "not good enough"? IMO we can make that determination only in the context of a specific threat (or set of threats) as applied to particular use cases.

And that is where I think subjectivity is inevitable: you have to make a somewhat subjective assessment on what the threats are, how likely they are, and how serious they are. You also have to make a somewhat subjective assessment of what use cases are legitimate and relevant, and thus should be provided for in the policy, and what use cases are illegitimate and/or irrelevant, and hence can be ignored as far as the policy is concerned. IMO you can't completely apply analytical and deductive processes here, even if you have some hard data regarding threats, etc., because people may be proceeding from different axioms in terms of their values and beliefs regarding the threats and use cases -- for example, what one person considers to be an illegitimate or irrelevant use case another person might consider to be a (or even the) major reason to use the product.

If we simply adopt sample vetting criteria (e.g., as outlined in the LCP and NCP in ETSI TS 102 042 or elsewhere) then IMO we are not addressing the true underlying problem: We run the risk of adopting criteria that are out of line with respect to the relevant threats, and by ignoring context we run the risk of negatively impacting legitimate use cases that are relevant to typical users.

Does this mean that it's hopeless to develop policy language relating to vetting requirements? No, I don't think so, and in a followup message I'll propose such criteria. But I do believe that you have to tailor such criteria to the different use cases (which in our context relates to the types of certs issued by a CA and the contexts in which they might be used). I also believe that you can't prove analytically and in advance that a particular set of required CA practices is a minimal "good enough" set, and hence I think the policy language inevitably has to be somewhat abstract and leave room for subjective decisions that we might have to make on a case by case basis.

So, to summarize, here are the lines along which I'm thinking at the moment:

1. It is desirable and possible to have policy language allowing rejection of CAs with "knows otherwise" policies and practices.

2. It is desirable to have policy language allowing rejection of CAs with "knows nothing" policies and practices, with the exact language depending on how we approach the "not good enough" case. (Since the instant that we say "CAs must vet subscribers" then we immediately raise the question of which types of vetting are "good enough" and which are not.)

3. It is possible to have policy language addressing the question of whether a CA's vetting of subscribers is "good enough", but it will likely prove impossible to completely eliminate subjectivity in the implementation of such policy language (i.e., in determining whether a particular CA passes the test or not).

4. Any policy language needs to take into account and separately address the possible use cases and the relevant threats for those use cases. For this policy we have three overall categories of use cases -- for email certs, SSL server certs, and object signing certs -- and then multiple use cases within those overall categories (e.g., for SSL server certs we have HTTP/SSL for web sites vs. IMAP/SSL for email servers).

Now let's see if I can crank out the next message right away, and not keep you all in suspense :-)

Frank

--
Frank Hecker
[EMAIL PROTECTED]
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto
