On Thu, Mar 7, 2019 at 9:47 PM Matthew Hardeman <mharde...@gmail.com> wrote:
> The actual text of the guideline is quite clear -- in much the same manner
> that frosted glass is.
>
> "Effective September 30, 2016, CAs SHALL generate non-sequential
> Certificate serial numbers greater than zero (0) containing at least 64
> bits of output from a CSPRNG." [1]

Isn’t it amazing how “at least” is one of those requirements where you can look at it and ask, “Should I do the absolute bare minimum, or should I maybe build in safety?” I find it amazing how, when you rely on doing the bare minimum, it can somehow backfire.

> Irrespective of the discussion underlying the modifications of the BRs to
> incorporate this rule, there are numerous respondent CAs of varying
> operational vintage, varying size, and varying organizational complexity.
>
> The history underlying a rule should not be necessary to implement and
> faithfully obey a rule.

And yet...

It isn’t required. A basic understanding of ASN.1 is all that’s required, combined with critical and defensive thinking. You don’t have to be a CA to have that. As previously noted, there was discussion on m.d.s.p. a year ago about this, and you can find discussions about it on zlint [1] [2]. These aren’t skills that participating in the discussions here necessarily requires, but they are absolutely required of CAs operating globally trusted PKIs.

> Rather than have us theorize as to why non-compliance with this rule seems
> to be so widespread, even by a number of organizations which have more
> typically adhered to industry best practices, would you be willing to posit
> a plausible scenario for why all of this non-compliance has gone on for so
> long and by so many across so many certificates?

As noted, it has been called out in the past. You can see issues where, purely from a linting perspective, the best we can say is that something looks wrong, and ask the CAs to explain. I think the framing and implication of that last question are profoundly unhelpful and misguided.
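To make the “at least” point concrete, here is a minimal Python sketch (my own illustration, not any CA’s actual code) of the difference between the bare-minimum reading and a defensive one. The pitfall is the ASN.1 DER INTEGER sign bit: drawing exactly 64 bits and then clearing the top bit to keep the serial positive leaves only 63 bits of CSPRNG output in the certificate.

```python
import secrets

def new_serial(entropy_bits: int = 64) -> int:
    """Defensive reading: draw more than the minimum number of bits,
    so the serial is positive and non-zero (per RFC 5280 and the BRs)
    while still containing at least `entropy_bits` of CSPRNG output."""
    while True:
        # 8 extra bits of headroom; forcing positivity costs at most
        # the top bit, so >= 64 bits of CSPRNG output always remain.
        candidate = secrets.randbits(entropy_bits + 8)
        if candidate > 0:  # BRs: serial must be greater than zero
            return candidate

def bare_minimum_serial() -> int:
    """Bare-minimum reading: draw exactly 64 bits, then clear the top
    bit so the DER INTEGER stays positive. Result: only 63 bits of
    CSPRNG output -- the very non-compliance under discussion."""
    return secrets.randbits(64) & ~(1 << 63)
```

The defensive version never needs to touch any individual bit, which is exactly the kind of safety margin “at least” invites.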
The answer is that a number of CAs continue to have issues [3], and this is merely a symptom of yet another one. These issues would be far easier to close out if CAs were consistent in following the expectations of incident reporting, but we continue to see CAs struggle to perform any sort of meaningful introspective analysis.

While I don’t want to throw Thomas and the PrimeKey folks under the bus here, it’s clear from the incidents being reported that CAs are outsourcing their compliance requirements. They have an obligation to review and evaluate the code they use - whether it’s EJBCA, ADCS, UniCERT, or some other stack. Every responsible CA should have its compliance team holistically engage in evaluating the software it uses, looking for other issues. The incident responses we are seeing demonstrate that some of them are being proactive in this. It would be absolutely disastrous for a currently trusted CA to demonstrate this issue in six months - not on the basis of the single bit, but due to the complete dereliction of professional duty to stay abreast of the industry and of compliance that it would represent.

> Additionally, assuming a large CA with millions of issued certificates
> using an actual 64-bit random serial number... Should the CA also do an
> exhaustive issued-serial-number search to ensure that the to-be-signed
> serial number is not off-by-one in either direction from a previously
> issued certificate serial number? However implausible, if it occurred,
> this would indeed result in having participated in the issuance of 2
> certificates with sequential serial numbers.

These strawman arguments demonstrate a lack of understanding of the fundamental issue. It’s rather defensible for a CA to issue a one-byte serial number - or even sequential serial numbers - as you hypothesize, while still being compliant with the requirements. If such a matter were to be brought - e.g.
to the CA’s problem reporting email - they could examine and determine that, no, they did have 64 bits of entropy, and it was merely the probability that what could happen, would. But that’s not what we’re talking about, and while it is posed as an argumentum ad absurdum, it belies the substance of what is more meaningful: how a CA monitors the discussions, ensures compliance, and investigates issues.

A CA that makes a meaningful investigation into the context and history of an issue, or that takes steps to do more than the bare minimum and acts so as to be beyond reproach, is far, far better for the ecosystem. Incident reports are the opportunity for the CA to demonstrate how it is improving, and for the industry to learn, identify risks and challenges, and collectively improve. CAs that promote and encourage that are far more helpful to the ecosystem.

Frankly, to some extent, it doesn’t matter whether or not participants here want to debate how well they understood it. It matters whether CAs did - and they are expected both to be as-or-more knowledgeable than participants here, and to rise to a higher standard of expectations.

[1] https://github.com/zmap/zlint/issues/187
[2] https://github.com/zmap/zlint/pull/112
[3] https://wiki.mozilla.org/CA/Incident_Dashboard

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy