On Mon, Jan 15, 2018 at 4:40 PM, Doug Beattie <doug.beat...@globalsign.com> wrote:
> *From:* Ryan Sleevi [mailto:r...@sleevi.com]
> *Sent:* Monday, January 15, 2018 4:14 PM
> *To:* Doug Beattie <doug.beat...@globalsign.com>
> *Cc:* r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org; Gervase Markham <g...@mozilla.org>; Wayne Thayer <wtha...@mozilla.com>
> *Subject:* Re: Possible Issue with Domain Validation Method 9 in a shared hosting environment
>
>> On Mon, Jan 15, 2018 at 3:36 PM, Doug Beattie via dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Ryan,
>>>
>>> I’m not sure where we go from here.
>>
>> As suggested, we encourage you to work on devising technical mitigations or alternative methods of validating such certificates that can meet the use case. We don't think that, as described, the OneClick method meets the necessary level of assurance, nor do the necessary level of mitigating factors exist, to consider such certificates trustworthy.
>
> Ryan – I’m at a loss. The security threat is that a user can request a certificate for a domain they don’t own from hosting companies that permit SNI mappings to domains the user doesn’t own or control. This permits them to pass validation for a domain they don’t control that is on the same IP address as their legitimate site (or at least to which they have this level of SNI control). We will verify that our OneClick customers will request certificates for domains the hosting company is actively managing for their users and not permit malicious actions (much like LE verifying that their hosting companies do not permit “acme.invalid” domains to be used). This eliminates the problems of SNI being used as an avenue for domain validation for malicious actors. Is this not sufficient for some reason?
>
> Surely you agree that within non-shared hosting environments OneClick is not vulnerable and can be used.

No, it's not sufficient.
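[For readers unfamiliar with the "acme.invalid" reference above: this is a rough, illustrative Python sketch of how the ACME TLS-SNI-01 validation hostname is derived from a key authorization, and of the kind of hoster-side filter the quoted text alludes to. The function and variable names are hypothetical, not actual Let's Encrypt or GlobalSign code.]

```python
import hashlib

def tls_sni_01_validation_name(key_authorization: str) -> str:
    """Compute the ACME TLS-SNI-01 validation hostname.

    Per the ACME draft, Z = hex(SHA-256(keyAuthorization)), and the
    validation name is Z[0:32] "." Z[32:64] ".acme.invalid". An attacker
    on a shared-host IP who can claim arbitrary SNI mappings can serve a
    self-signed cert for this name and thereby "validate" a victim domain
    hosted on the same IP.
    """
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return f"{z[0:32]}.{z[32:64]}.acme.invalid"

def hosting_provider_should_reject(sni_name: str) -> bool:
    """Defensive filter a hosting provider can apply: refuse customer-supplied
    SNI mappings under the reserved .invalid TLD, which never names a real
    site, closing off this avenue for validation abuse."""
    return sni_name.lower().rstrip(".").endswith(".invalid")

name = tls_sni_01_validation_name("token.account-key-thumbprint")
assert hosting_provider_should_reject(name)          # filter blocks the vector
assert not hosting_provider_should_reject("www.example.com")
```

[This also illustrates why the mitigation is hoster-side and cannot be verified by the CA alone, which is the crux of the objection that follows.]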
The failure mode unfortunately necessarily includes a failure of GlobalSign's process and/or personnel, and in that failure mode, there are no further mitigating factors:

- If GlobalSign adds a vulnerable entity to their whitelist
- The certificates issued will be valid for 1-3 years, leaving only the (broken) revocation system as recourse
- There is no step organizations can take to pre-emptively mitigate the risk of GlobalSign adding to the whitelist (compared to, say, blocking .invalid)
- There is no ability for site operators to detect such situations.

A consideration not listed within the full set when discussing Let's Encrypt and the ACME TLS-SNI method is that we have at least a public commitment by Let's Encrypt, and demonstrated evidence of sustained/long-term compliance, with publicly disclosing all issued certificates (as noted at https://letsencrypt.org/certificates/). While I realize you've offered to do so, I can find no evidence of GlobalSign doing so by default, and so this further adds to the risk calculus: a commitment to do something that is not yet practice, and thus not yet consistently and reliably delivered on.

There is not, in our view, reason to accept this significantly greater (holistically considered) risk. We're open to understanding whether GlobalSign has additional proposals for how to mitigate this risk, given the set of concerns expressed - both technical measures and policy measures. These may provide a path to allowing such issuance in the future, but we don't think that, given the holistic risk assessment, it's appropriate to allow it to immediately resume. We are keen to find solutions that work, as we understand that these can enable powerful new use cases, but we want to balance that with the risk posed. I would encourage GlobalSign to consult Sections 3.2.2.4 and 3.2.2.5 to see if there are any other alternative methods of validating that might represent an appropriate balance.
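[To make the public-disclosure point concrete: publicly logging all issued certificates, e.g. via Certificate Transparency, is what lets a site operator detect mis-issuance for their own domains. The sketch below is purely illustrative; the log entries are mocked, and a real monitor would pull entries from CT logs or a lookup service rather than a hard-coded list.]

```python
# Hypothetical sketch of CT-based detection: flag logged certificates
# covering the operator's domain whose serial numbers the operator does
# not recognize as their own issuance.
from typing import Dict, List, Set

def unexpected_certs(entries: List[Dict], my_domain: str,
                     known_serials: Set[str]) -> List[Dict]:
    """Return logged certs naming my_domain (or a subdomain) with an
    unrecognized serial number -- candidates for mis-issuance."""
    suffix = "." + my_domain
    hits = []
    for e in entries:
        names = e["dns_names"]
        if any(n == my_domain or n.endswith(suffix) for n in names):
            if e["serial"] not in known_serials:
                hits.append(e)
    return hits

# Mocked log entries (stand-ins for CT log data):
entries = [
    {"serial": "01", "dns_names": ["example.com", "www.example.com"]},
    {"serial": "02", "dns_names": ["shop.example.com"]},  # not requested by us
    {"serial": "03", "dns_names": ["other.test"]},
]
alerts = unexpected_certs(entries, "example.com", known_serials={"01"})
assert [e["serial"] for e in alerts] == ["02"]
```

[Without such disclosure by default, the whitelist failure described in the bullets above is invisible to the affected site operator, which is why its absence weighs into the risk calculus.]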
Given that these are posed as automated certificates, it seems that other methods may be suitable for the issuance of Domain Validated certificates - or, with care, Organization Validated. If it truly is the case that none of these other methods (outside 3.2.2.4.9 and .10, and understandably the in-discussion-for-removal .1 and .5) are suitable, are there steps that can be taken that provide concrete technical mitigations (ideally, through an opt-in technical signal by the site operator) that can reduce or eliminate this risk? Are there steps that can be taken with policy to similarly address the concerns? It's not that this is an unsolvable problem, but it is necessary to make further changes to mitigate and minimize the risk, and some of the major factors that contribute to that assessment have been shared.

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy