On 15/03/2019 13:26, Peter Gutmann via dev-security-policy wrote:
> I actually thought it was from "Chosen-prefix collisions for MD5 and
> applications" or its companion papers ("Short chosen-prefix collisions for MD5
> and the creation of a rogue CA certificate", "Chosen-Prefix Collisions for MD5
> and Colliding X.509 Certificates for Different Identities"), but it's not in
> any of those.  Even the CCC talk slides only say "We need defense in depth ->
> random serial numbers" without giving a bit count.  So none of the original
> cryptographic analysis papers seem to give any value at all.  It really does
> seem to be a value pulled entirely out of thin air.

To be honest, I see this as a proxy for competence. Complying strictly with the spec means you're doing the right thing. Complying with the spec minus a tiny margin of error for practical reasons means you're probably fine. Mucking things up due to misunderstood notions of entropy, and thus dropping entropy on the floor, means you probably shouldn't be writing CA software. The fact that the bar was pulled from thin air is irrelevant; nobody here is suggesting that those using 63 bits of entropy actually *introduced* a tangible security problem. They just didn't follow the BR rules, which are there to be followed.

Thus:

a) If you use >64 bits of CSPRNG output raw, you're fine (assuming any practical CA size).
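
For illustration, a minimal sketch of (a) in Python; the helper name and the 72-bit width are my own choices, nothing mandates them:

import secrets

# Sketch of (a): more than 64 bits of raw CSPRNG output, no
# post-processing. 72 bits here is an arbitrary illustrative width.
def new_serial_raw(bits: int = 72) -> int:
    return secrets.randbits(bits)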

b) If you use exactly 64 bits of CSPRNG output with duplicate and zero checks, such that the odds of those checks ever *actually* rejecting a serial number based on the number of issued certs are < (say) 1%, then you're fine. For all *practical* intents and purposes you have 64 bits of entropy.
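
A minimal sketch of what I mean by (b); the issued-set lookup is a stand-in for whatever duplicate check the CA database actually does, and the names are mine:

import secrets

# Sketch of (b): exactly 64 bits of raw CSPRNG output, plus the
# required zero and duplicate checks. For a CA that has issued fewer
# than ~609M certs, the chance this loop has *ever* retried is <1%,
# so effectively no entropy is lost.
def new_serial_64(issued: set) -> int:
    while True:
        s = secrets.randbits(64)
        if s != 0 and s not in issued:
            return s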

Ideally you'd have used more bits to avoid the duplicate check ever discarding entropy, but at 64 bits, doing the math for the birthday problem, the threshold for a 1% chance of any collision is at about 608 million certificates [1]. If you've issued fewer than that many, there is a less than 1% chance that you have ever rejected a single serial number due to the duplicate check (the zero check is negligible in comparison). It can be argued that if the problematic code never ran, it might as well have never been there. Of course, *proving* that the code never ran is unlikely to be possible. Ultimately, entropy was reduced by the presence of that code, even if the outcome was identical to what it would have been without it.
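
If you want to plug in your own issuance count, the same approximation runs the other way (the function name is mine, purely for illustration):

import math

# Birthday-problem approximation: probability that at least one of n
# uniform draws from d possible values repeats an earlier draw.
def dup_reject_probability(n: int, d: int = 2**64) -> float:
    return 1.0 - math.exp(-(n * n) / (2 * d))

print(dup_reject_probability(608_926_881))  # ~0.01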

That said, it's quite *reasonable* to write the code this way; no strange misunderstandings are required. You had 64 bits of entropy and you applied a required check that negligibly reduced that; it's certainly better to lose a tiny fraction of a bit of entropy than to risk a duplicate serial.

c) If you're dropping serials with e.g. the top 8 bits set to zero (as per Daymion's algorithm), then you very clearly have 63.994353 bits of entropy, for no good reason. This is problematic because it clearly demonstrates a *misunderstanding* of how entropy works and of what the whole point of using 64 bits of raw CSPRNG output is. This is a BR violation under any strict reading, and readily provable by looking at the serial number distribution.
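
The exact figure is easy to reproduce: the entropy of a uniform choice among the surviving values is just the log of how many there are:

import math

# Rejecting every serial whose top 8 bits are zero removes 2**56 of
# the 2**64 possible values; the survivors remain equally likely.
print(math.log2(2**64 - 2**56))  # ~63.994353 bits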

d) If you're coercing the top bit (EJBCA defaults), then that's clearly 63 bits of entropy, not 64, and of course a BR violation. It doesn't take a cryptanalyst to realize this; anyone who isn't trying to "creatively interpret" the rules to weasel out of admitting non-compliance can see that 63 < 64, and that there's no way to have 64 bits of entropy in a number where one bit never changes.
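
Same arithmetic as above, for anyone who wants it spelled out:

import math

# Forcing the top bit to 1 means only 2**63 of the 2**64 values can
# ever occur, and log2(2**63) is exactly 63.
print(math.log2(2**63))  # 63.0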

[1] See https://en.wikipedia.org/wiki/Birthday_problem#Cast_as_a_collision_problem :

>>> import math
>>> # n at which P(collision) = 1% when drawing from d = 2**64 values:
>>> # n ~ sqrt(2 * d * ln(1/(1 - p)))
>>> math.sqrt(2*(2**64) * math.log(1/(1 - 0.01)))
608926881.2334852

--
Hector Martin "marcan"
Public key: https://mrcn.st/pub