On Thu, Nov 29, 2018 at 2:16 AM Dimitris Zacharopoulos <ji...@it.auth.gr> wrote:
> Mandating that CAs disclose revocation situations that exceed the 5-day
> requirement with some risk analysis information, might be a good place
> to start.

This was proposed several times by Google in the Forum, and consistently rejected, unfortunately.

> I don't consider 5 days (they are not even working days) to be adequate
> warning period to a large organization with slow reflexes and long
> procedures.

Phrased differently: you don't think large organizations are currently capable, and believe the rest of the industry should accommodate that. Do you believe these organizations could respond within 5 days if their internet connectivity was lost?

> For example, if many CAs violate the 5-day rule for revocations related
> to improper subject information encoding, out of range, wrong syntax and
> that sort, Mozilla or the BRs might decide to have a separate category
> with a different time frame and/or different actions.

Given the security risks in this, I think this is extremely harmful to the ecosystem and to users.

> It is not the first time we talk about this and it might be worth
> exploring further.

I don't think any of the facts have changed. We've discussed for several years that CAs have the opportunity to provide this information, and they haven't, so I don't think it's at all proper to suggest starting a conversation without structured data. CAs that are passionate about this could have supported such efforts in the Forum to provide this information, or could have demonstrated doing so on their own. I don't think it would be at all productive to discuss these situations in abstract hypotheticals, as some of the discussions here try to do - without data, that would be an extremely unproductive use of time.

> As a general comment, IMHO when we talk about RP risk when a CA issues a
> Certificate with -say- longer than 64 characters in an OU field, that
> would only pose risk to Relying Parties *that want to interact with that
> particular Subscriber*, not the entire Internet.

No. This is demonstrably and factually wrong.

First, we already know that technical errors are a strong sign that the policies and practices themselves are not being followed - both the validation activities and the issuance activities result from the CA following its practices and procedures. If a CA is not following its practices and procedures, that's a security risk to the Internet, full stop.

Second, it presumes (incorrectly) that interoperability is not something valuable. That is, if, say, the three existing, most popular implementations all do not check whether the OU value is longer than 64 characters (for example), and a fourth implementation would like to come along, it cannot read the relevant standards and implement something interoperable (a sketch of such a check follows below). This is because 'interoperability' is being redefined as 'ignoring' the standard - which defeats the purpose of standards to begin with. These choices - to permit deviations - create risks for the entire ecosystem, because there's no longer interoperability. This is equally captured in https://tools.ietf.org/html/draft-iab-protocol-maintenance-01

The premise to all of this is that "CAs shouldn't have to follow rules, browsers should just enforce them," which is shocking and unfortunate. It's like saying "It's OK to lie about whatever you want, as long as you don't get caught" - no, that line of thinking is just as problematic for morality as it is for technical interoperability.
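To make the interoperability point concrete, here is a minimal sketch, in Go, of the kind of check a new, standards-following implementation would want to perform: RFC 5280 carries over X.520's ub-organizational-unit-name upper bound of 64 characters for the OU attribute. This is not any CA's or browser's actual code; "cert.pem" is a placeholder path, and, to the best of my knowledge, the standard-library parser used here is itself one of the parsers that does not enforce the bound.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"unicode/utf8"
)

// ubOrganizationalUnitName is the upper bound RFC 5280 (Appendix A) carries
// over from X.520 for the organizationalUnitName attribute.
const ubOrganizationalUnitName = 64

func main() {
	// "cert.pem" is just a placeholder input for this sketch.
	pemBytes, err := os.ReadFile("cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The parser above does not enforce the X.520 upper bounds itself; a
	// strict implementation has to add the check on its own.
	for _, ou := range cert.Subject.OrganizationalUnit {
		if n := utf8.RuneCountInString(ou); n > ubOrganizationalUnitName {
			fmt.Printf("OU %q is %d characters, exceeding the %d-character bound\n",
				ou, n, ubOrganizationalUnitName)
		}
	}
}

Once the popular implementations tolerate the deviation, a newcomer that adds this check either breaks on widely deployed certificates or drops the check and ignores the standard too - which is the redefinition of 'interoperability' described above.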
CAs that routinely violate the standards create risk, because they have full trust on the Internet. If the argument is that the CA's actions (of accidentally or deliberately introducing risk) are the problem, but that we shouldn't worry about correcting the individual certificate, that entirely misses the point: without correcting the certificate, there's zero incentive to actually follow the standards, and as a result, that creates risk for everyone. Revocation, if you will, is the "less worse" alternative to complete distrust - it only affects that single certificate, rather than every one of the certificates the CA has issued. The alternative - not revoking - simply says that it's better to look at distrust options, and that's more risk for everyone.

Finally, CAs are terrible at assessing the risk to RPs. For example, negative serial numbers were prolific prior to the linters, and those have issues inasmuch as they are, for some systems, irrevocable. This is because those systems implemented the standards correctly - serials are positive INTEGERs - yet had to account for the fact that CAs are improperly encoding them, such as by "making" them positive (adding the leading zero). This leading zero then doesn't get stripped off when looking up by Issuer & Serial Number, because they're using the "spec-correct" serial rather than the "issuer-broken" serial (a short sketch of this mismatch appears at the end of this message). That's an example where the certificate "works", no report is filed, but the security and ecosystem properties are fatally compromised.

The alternatives for such implementations are:
1) Reject such certificates (but see above about market forces and interoperability)
2) Correct both the certificate and the CRL/OCSP serial number (which then creates risk because you're not actually checking _any_ certificate's true serial)
3) Allow negative serial numbers (which then makes it harder for others to do #1)

As I said, CAs have been terrible at assessing the risk their decisions pose to the ecosystem. The page at https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix shows how badly this kind of 'interoperability' harms improvements - for example, all of the hacks that Mozilla had to add in order to ship a more secure, more efficient certificate verifier.
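As a postscript, here is a small sketch in Go of the serial-number mismatch described above. The byte values are made up for illustration; the point is only the arithmetic: the same content octets decode to a negative INTEGER under strict DER, or to the positive serial the CA presumably intended once a leading zero is added, and a revocation lookup keyed on one form never matches data keyed on the other.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Hypothetical content octets of a certificate's serialNumber INTEGER, as a
	// CA encoded them. The high bit of the first octet is set, so under DER this
	// is a negative INTEGER, even though the CA almost certainly meant it to be
	// positive.
	raw := []byte{0xDE, 0xAD, 0xBE, 0xEF}

	// Decoding strictly per DER (two's complement) yields the negative value the
	// CA actually encoded: the "issuer-broken" serial, -559038737.
	issuerBroken := new(big.Int).SetBytes(raw)
	issuerBroken.Sub(issuerBroken, new(big.Int).Lsh(big.NewInt(1), uint(len(raw)*8)))

	// A client that "repairs" the serial to the positive value the CA presumably
	// intended - equivalent to prepending a 0x00 octet - gets the "spec-correct"
	// serial instead: 3735928559.
	specCorrect := new(big.Int).SetBytes(raw)

	// The issuer's CRL entry / OCSP responder is keyed by the serial as the CA
	// actually encoded it.
	revokedByIssuer := map[string]bool{issuerBroken.String(): true}

	fmt.Println("lookup by issuer-broken serial:", revokedByIssuer[issuerBroken.String()]) // true
	fmt.Println("lookup by spec-correct serial: ", revokedByIssuer[specCorrect.String()])  // false
	// The second lookup misses, so the revocation is never seen: the certificate
	// "works", but it is effectively irrevocable for that client.
}

Whichever form an implementation picks, it ends up with one of the three unattractive alternatives listed above.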