On Wed, Jan 10, 2018 at 3:57 PM, Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> Note that the presumptive discussion re: .well-known has ignored that the
> Host header requirements are underspecified, thus the fundamental issue
> still exists for that too. That said, there absolutely has been both
> tension regarding and concern over the use of file-based or
> certificate-based proofs of control, rather than DNS-based proofs. This is
> a complex tradeoff though - unquestionably, the ability to use the
> certificate-based proof has greatly expanded the ease in which to get a
> certificate, and for the vast majority of those certificates, this is not
> at all a security issue.
>
>
As you note, http-01, as strictly specified, has some weaknesses.  The Host
header requirement should be shored up.  Redirect chasing, if any, should
be shored up.  Etc., etc.  I do believe that LE's implementation largely
hedges against the major vulnerability.  What vulnerability remains
requires that a web host fail in their duty to protect resources served
under the very same label as is being validated.  The difference, from a
web host's perspective, between that duty and the duty we would like to
impose upon them with TLS-SNI-01 is that the web host is commonly expected
to take responsibility for ensuring that only the customer paying them for
www.example.com can publish content at www.example.com.  Additionally, the
community, the customer, and the web host can all understand, without a
great deal of complex thought, why a resource under the correct domain
label must be controlled by the customer.  What's less clear to all, I
should think, is why the web host has a duty not to serve some resource
under a totally unrelated name like rofl.blah.acme.invalid in defense of
his customer www.example.com.
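
For concreteness, here is a rough sketch (Python; the domain, token, and
key authorization are placeholders, and this is not LE's actual
implementation) of what a more tightly specified http-01 style fetch might
look like: pin the Host header to the exact label under validation and
refuse to chase redirects.

    # Sketch only: a stricter http-01 style fetch that pins the Host header
    # to the exact label under validation and does not follow redirects.
    import http.client

    def check_http01(domain, token, expected_key_authz):
        path = "/.well-known/acme-challenge/" + token
        conn = http.client.HTTPConnection(domain, 80, timeout=10)
        try:
            # Send the Host header explicitly so a misconfigured front end
            # cannot route the request into some other customer's context.
            conn.request("GET", path, headers={"Host": domain})
            resp = conn.getresponse()
            if resp.status != 200:   # any redirect is treated as a failure
                return False
            body = resp.read().decode("ascii", "replace").strip()
            return body == expected_key_authz
        finally:
            conn.close()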

Ultimately, as you suggest, I wonder if the [hehehe] "shocking" conclusion
of all of this is that, if we seek to demonstrate meaningful control of a
domain or DNS label, the proper way to do so is to require specific
manipulation of the DNS infrastructure alone, as in dns-01?  The DNS
infrastructure and its behavior are squarely within the scope of
demonstrating meaningful control of a domain label.  The behavior of any
web host really technically isn't.  I do understand the reasons non-DNS
mechanisms are presently allowed.
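
By way of contrast, a minimal sketch of a dns-01 style check, which touches
only the DNS.  This assumes the third-party dnspython package; per the ACME
draft the expected TXT value would be the digest of the key authorization.

    # Sketch only: a dns-01 style check against the _acme-challenge record.
    import dns.resolver   # third-party: pip install dnspython

    def check_dns01(domain, expected_txt_value):
        name = "_acme-challenge." + domain
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        for rdata in answers:
            # A TXT record may be split into several character strings.
            value = b"".join(rdata.strings).decode("ascii", "replace")
            if value == expected_txt_value:
                return True
        return False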


>
> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
> its validity) is expressed via the serial number. In this case, a CA could
> validate a request and issue a certificate. Then, every 3 years (or 2 years
> starting later this year), connect to the host, see that it's serving their
> previously issued certificate, assert that the "Serial Number" constitutes
> the Random Value, and perform no other authorization checks beyond that. In
> a sense, fully removing any reasonable assertion that the domain holder has
> authorized (by proof of acceptance) the issuance.
>

That, indeed, is a chilling picture.  I'd like to think the community's
response to any such stretch of the rules would be along the lines of "Of
course, you're entirely correct.  Technically this was permitted.  Oh, by
the way, we're pulling your roots, we've decided you're too clever to be
trusted."


>
>
>> That being the case, I would recommend that the proper change to the
>> TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
>> of those mechanisms.
>>
>
> I'm not sure I agree that haste is advisable or desirable, but I'm still
> evaluating. At the core, we're debating whether something should be opt-out
> by default (which blacklisting .invalid is essentially doing) or opt-in. An
> opt-in mechanism cannot be signaled in-band within the certificate, but may
> be signalable in-band to the TLS termination, such as via a TLS extension
> or via the use of an ALPN protocol identifier (such as "acme").
>
>
A TLS extension or an ALPN protocol identifier seems feasible to secure,
though obviously there's a lot of infrastructure change and deployment
needed to get there.
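
To illustrate the opt-in idea, a toy sketch of a validation client that
offers the ALPN identifier "acme" you mention and proceeds only if the TLS
terminator explicitly negotiates it.  The identifier and names here are
purely illustrative, not a specified mechanism.

    # Sketch only: require the TLS terminator to opt in via ALPN before
    # treating the returned certificate as a validation response.
    import socket, ssl

    def probe_acme_alpn(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False        # validation happens out of band
        ctx.verify_mode = ssl.CERT_NONE   # we inspect the cert ourselves
        ctx.set_alpn_protocols(["acme"])
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # Only a terminator that opted in will answer with "acme".
                if tls.selected_alpn_protocol() != "acme":
                    return None
                return tls.getpeercert(binary_form=True)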


>
> As long as the web hosting infrastructure does not automatically create
>> new contexts for heretofore never seen labels, it won't be possible to
>> fully validate in an automated fashion whether or not a given hosting
>> infrastructure would or would not allow any random customer to create some
>> blah.blah.acme.invalid label and bind it to a certificate that said random
>> customer controls.  Because of the various incentives and motivations, it
>> seems almost inevitable that it would eventually occur.  When a
>> mis-issuance arises resulting from that scenario, I wonder how the
>> community would view that?
>>
>
> I'm not sure I'd classify it as misissuance, no more than those who were
> able to get certificates by registering mailboxes such as 'hostmaster' or
> 'webmaster' on free email providers (despite the RFCs that reserve such
> names).
>

Perhaps "misissuance" is the wrong term, in a strict sense.  Maybe instead
we could call it "irresponsible issuance".  What distinguishes, in my mind,
the difference in an issuance subsequent to the described attack on
TLS-SNI-01 versus an attack via HTTP-01 on a web host that has a shared
.well-known directory across all clients is that in the case of the
TLS-SNI-01 exploit, the web host had no pre-existing duty to know that new
web contexts named entirely unrelated to current client contexts could and
would cause security risks for his customers.  It is indisputable that the
web host who shares a world-writeable .well-known directory across all his
clients is doing something wrong and has gone from being a distributor or
data to a publisher of data.  If there's clear failing of a baseline
responsibility of a web host to their customer and that results in a bad
issuance, I think the CA can sleep soundly.  If there is not such a clear
and affirmative duty of a particular behavior on the part of the web host,
and yet an improper third party has managed to finagle a certificate, I
think the CA has to start sweating about such issuance that occurred
because the web host didn't know or didn't want to invest in what you've
called a backwards-incompatible change to the existing "real world".
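
To make that duty concrete, this is roughly the sort of check a shared host
would now have to bolt on before binding a customer-uploaded certificate to
a new SNI label.  The function and the policy are illustrative only, not
drawn from any actual hosting product.

    # Sketch only: refuse to bind an uploaded certificate to an unrelated
    # validation label such as rofl.blah.acme.invalid.
    def may_bind_sni_label(customer_domains, requested_label):
        label = requested_label.lower().rstrip(".")
        # Reject the reserved validation namespace outright.
        if label.endswith(".acme.invalid") or label.endswith(".invalid"):
            return False
        # Otherwise require the label to sit within a domain the customer
        # has already demonstrated to the host that they control.
        return any(label == d or label.endswith("." + d)
                   for d in customer_domains)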


> While I admit that .invalid (and needing to blacklist) is unquestionably a
> backwards-incompatible change to the 'real world' and, unfortunately, did
> not turn out to be as safe as presumed, the method remains itself in the
> BRs, and as the example showed, can be creatively used (or is it abused?)
> while fully complying with the BRs. Much in the same way a cloud provider
> that allowed unrestricted access to .well-known across hosting accounts, or
> web messaging boards that allowed direct file upload into .well-known, at
> some point, we need to acknowledge that what happened was fully
> permissible, question whether or not it was documented/acknowledged as
> risky (which both the TLS-SNI and .well-known files are called out as such,
> in the ACME draft), and what steps the CA took to assuage, mitigate, or
> minimize those risks.
>