On Fri, Aug 11, 2017 at 1:22 PM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, 11 August 2017 16:49:29 UTC+1, Ryan Sleevi  wrote:
> > Could you expand on this? It's not obvious what you mean.
>
> I guess I was unclear. My concern was that one obvious way to approach
> this is to set things up so that after the certificate is signed, Boulder
> runs cablint, and if it finds anything wrong with that signed certificate
> the issuance fails, no certificate is delivered to the applicant and it's
> flagged to Let's Encrypt administrators as a problem.
>
> [ Let's Encrypt doesn't do CT pre-logging, or at least it certainly didn't
> when I last looked, so this practice would leave no trace of the
> problematic cert ]
>
> In that case any bug in certlint (which is certainly conceivable) breaks
> the entire issuance pipeline for Let's Encrypt, which is what my employer
> would call a "Severity 1 issue", i.e. people now need to get woken up and
> fix it immediately. That seems like it makes Let's Encrypt more fragile.
>
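
For concreteness, that gate would look something like the sketch below (in
Go, since Boulder is Go). The cablint command name and its output and exit
behaviour are my assumptions for illustration, not a description of how
Let's Encrypt would actually wire it up:

// Sketch only: a post-issuance lint gate of the sort described above.
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"strings"
)

// lintIssuedCert writes the signed DER to a temp file, runs cablint over it,
// and returns an error if the linter reports anything or fails to run. Any
// error here withholds the certificate from the applicant, which is the
// fragility being described.
func lintIssuedCert(der []byte) error {
	f, err := ioutil.TempFile("", "issued-*.der")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(der); err != nil {
		return err
	}
	f.Close()

	out, err := exec.Command("cablint", f.Name()).CombinedOutput() // assumed CLI
	if findings := strings.TrimSpace(string(out)); findings != "" {
		// The linter found a problem with the already-signed certificate.
		return fmt.Errorf("cablint findings: %s", findings)
	}
	if err != nil {
		// The linter itself broke or hung: the "Severity 1" scenario.
		return fmt.Errorf("cablint did not complete: %v", err)
	}
	return nil
}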

I'm not sure this is a particularly compelling argument. By this logic, the
most reliable thing a CA can or should do is sign anything and everything
that comes from applicants, since any form of check or control is a
potentially frail operation that may fail.


> > Could you expand on what you mean by "cablint breaks" or "won't
> > complete in a timely fashion"? That doesn't match my understanding of
> > what it is or how it's written, so perhaps I'm misunderstanding what
> > you're proposing?
>
> As I understand it, cablint is software, and software can break or be too
> slow. If miraculously cablint is never able to break or be too slow then I
> take that back, although as a programmer I would be interested to learn how
> that's done.


But that's an argument that applies to any change, particularly any
positive change, so it does not appear to be a valid argument _against_
cablint.

That is, you haven't elaborated any concern that's _specific_ to
certlint/cablint, merely an abstract argument against change or process of
any form. And while I can understand that argument in the abstract -
certainly, every change introduces some degree of risk - we have plenty of
tools to manage and mitigate risk (code review, analysis, integration
tests, etc.). Similarly, we can assume this is not a steady state of issues
(that is, it is not to be expected that every week there will be an 'outage'
of the code), since, as code, it can be, and is, continually fixed and
updated.

Since your argument applies to any form of measurement or checking for
requirements - including integrating checks directly into Boulder (for
example, as Let's Encrypt has done, through its dependency on ICU / IDNA
tables) - I'm not sure it's an argument against these checks and changes. I
was hoping you had more specific concerns, but it seems they're generic,
and as such, it still stands out as a good idea to integrate such tools
(and, arguably, prior to signing, as all CAs should do - by executing over
the tbsCertificate). An outage, while unlikely, should be managed like all
risk to the system, as the risk of misissuance (without checks) is arguably
greater than the risk of disruption (with checks).
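
As a sketch of what "prior to signing" could look like (again in Go, and
again purely illustrative: the function names are mine, not Boulder's), one
common approach is to sign the same tbsCertificate content with a throwaway
key, lint that throwaway certificate, and only let the real CA key sign if
the lint comes back clean. The lint target then differs from the final
certificate only in the signature algorithm and value:

// Sketch only: checking certificate content before real issuance.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
)

// preSignLint signs the template with an ephemeral key so the linter sees a
// complete certificate, without the CA's real key ever having signed
// anything. The lint callback could be the cablint gate sketched earlier.
func preSignLint(template, issuer *x509.Certificate, subjectPub interface{},
	lint func(der []byte) error) error {
	throwaway, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	der, err := x509.CreateCertificate(rand.Reader, template, issuer, subjectPub, throwaway)
	if err != nil {
		return err
	}
	return lint(der) // any finding aborts issuance before the real key signs
}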