Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 10:42 PM, Ryan Sleevi  wrote:
>
>
> I do not know why you say that, considering the Forum explicitly decided
> to make .10 flexible as it is to accommodate both solutions.
>
> The goal was explicitly NOT to make an ideal-secure solution, it was to
> document what is practiced in favor of replacing “any other method”
>
> To that end, it is more useful to point out, “As written, X is
> permissible, but not desired, while restricting to Y reduces that risk”.
> The goal is honestly less to provide solutions (“I think it should be
> this”) and more to provide risk assessments and suggestions. The latter is
> far more beneficial for walking folks through the risks and concerns and
> how to mitigate.
>

Ouch.  I was not aware of that aspect of the historical picture.  What I
recall most was that there was some IPR drama over some of the blessed
methods.

So, essentially, the bargain that was struck was something along the lines
of "Confess your validation method sins and let them -- at least for a time
-- be blessed, as long as they're not entirely egregious, in exchange for
killing the ability to hide behind `or any other method`?"
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
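For readers following along: under the ACME draft of the time, a TLS-SNI-01 client computes Z, the lowercase hex SHA-256 digest of the key authorization, and provisions a self-signed certificate whose SAN is Z[0:32].Z[32:64].acme.invalid, which the CA then requests via SNI. A minimal sketch of just the name derivation (the function name is mine, not from the spec):

```python
import hashlib

def tls_sni_01_san(key_authorization: str) -> str:
    """Derive the .acme.invalid name a TLS-SNI-01 client provisions.

    Z is the lowercase hex SHA-256 digest of the key authorization;
    the self-signed certificate's SAN is Z[0:32].Z[32:64].acme.invalid.
    (Sketch per the ACME draft; not a full challenge implementation.)
    """
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return "{}.{}.acme.invalid".format(z[:32], z[32:])
```

The issue the thread turns on is visible right here: the domain being validated never appears in that name, so on a shared host any tenant who can claim arbitrary *.acme.invalid names can answer the challenge.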


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 11, 2018 at 1:36 AM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wednesday, January 10, 2018 at 6:17:34 PM UTC-6, Ryan Sleevi wrote:
> > On Wed, Jan 10, 2018 at 5:53 PM, Matthew Hardeman 
> > wrote:
> > >
> > > That, indeed, is a chilling picture.  I'd like to think the community's
> > > response to any such stretch of the rules would be along the lines of "Of
> > > course, you're entirely correct.  Technically this was permitted.  Oh, by
> > > the way, we're pulling your roots, we've decided you're too clever to be
> > > trusted."
> > >
> >
> > GlobalSign proposed this as a new method -
> > https://cabforum.org/pipermail/validation/2017-May/000553.html
> > Amazon pointed out that .10 already permitted this -
> > https://cabforum.org/pipermail/validation/2017-May/000557.html
> >
> > Your reaction means you must be one of the "worrywarts who treat
> > certificate owners like criminals" though, in the words of Steve Medin of
> > Symantec/Digicert -
> > https://cabforum.org/pipermail/validation/2017-May/000554.html , who was
> > also excited because of the 'brand stickiness' it would create (the term
> > typically used to refer to the likelihood or difficulty for someone to
> > switch to another, potentially more competent CA - in this case, due to the
> > ease of the lower security)
>
> Wow.  The economic incentives for behaving badly clearly were at work in
> those.
>
> I think I am one of those worrywarts, in fact.
>
> Also, I just reread and contemplated the .10 method's definition.  It's
> lacking.  A legitimate definition of "on the authorization domain name"
> would have clarified a normative reference for what accessing that  over
> TLS means and likely would have included that the SNI needed to be the
> authorization domain name.  As such, it's really just a tenuous land-grab
> that TLS-SNI-01 is compliant with .10.


I do not know why you say that, considering the Forum explicitly decided to
make .10 flexible as it is to accommodate both solutions.

The goal was explicitly NOT to make an ideal-secure solution, it was to
document what is practiced in favor of replacing “any other method”

To that end, it is more useful to point out, “As written, X is permissible,
but not desired, while restricting to Y reduces that risk”. The goal is
honestly less to provide solutions (“I think it should be this”) and more
to provide risk assessments and suggestions. The latter is far more
beneficial for walking folks through the risks and concerns and how to
mitigate.


>
> One of these days I need to sign the IPR waiver and join the cabforum
> mailing list as an interested party.
>


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 11, 2018 at 2:46 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 11/01/2018 01:08, Ryan Sleevi wrote:
> > On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >>
> >> Agree.
> >>
> >> Hence my suggestion that TLS-SNI-0next use a name under the customer's
> >> domain (such as the name used for DNS-01), not a name under .invalid.
> >
> >
> > I thought it had been pointed out, but that doesn't actually address the
> > vulnerability. It presumes a hierarchical nature of certificate/account
> > management that simply does not align with how the real world and real
> > providers work.
> >
>
> There are TWO related vulnerabilities at work here:
>
> 1. A lot of hosting providers allow users to provision certificates for
>whatever.acme.invalid on SNI capable hosts, even though those users
>are not owners of whatever domain was issued a challenge for the
>number in the "whatever" part.  Other than adding code to specifically
>block "acme.invalid" to every software stack/configuration used by
>hosting providers, this is almost unsolvable at the host provider end,
>thus it may be easier to change the TLS-SNI-xx method in some way.
>
> 2. A much smaller group of hosting providers allow users to set up
>hosting for subdomains of domains already provisioned for other users
>(e.g. user B setting up hosting for whatever.acme.example.com when
>user A is already using the host for example.com).  This case is not
>solved by changing the SNI challenge to be a subdomain of the domain
>being validated.  But since this is a smaller population of hosting
>providers, getting them to at least enforce that the parent domain
>user needs to authorize which other users can host a subdomain with
>them is much more tractable, especially as it has obvious direct
>security benefits outside the ACME context.


This is categorically false. It is itself more complex and more error
prone (for example, due to the nature of authorization domains) and, at
the end of the day, fails to achieve its goals.

The simplest way I can try to get you to think about it is to consider a
cert for foo.bar.example.com being requested by User C, and preexisting
domains of www.example.com (User A) and example.com (User B). Think about
how that would be “checked” - or even simply who the authorizers should be.

I assure you, it both fails to address the problem (of limiting risk) and
increases the complexity. Put simply, it doesn’t work - so there is no
value in doubling down trying to make it work, especially given that it
also fails to provide a solution for the overall population (like
blacklisting does).

Finally, the assumption there will be fewer of X so it’s easier to fix is,
also, counterintuitively false - the fewer there are and the more baroque
and complex the solution is, the harder it is to make any assumption about
adoption uptake.

(Hosting providers who allow uploading certificates for the specific
> DNS/SNI names of other users are a security problem in itself, as it
> could allow e.g. uploading an untrusted exact domain cert to disrupt
> another user's site having only a wildcard certificate).


Not really. You say this, but that is the reality today, and it can be
and is mitigated.

On the other hand, such providers will often (included or at extra fee)
> allow provisioning arbitrary subdomains that are then typically added to
> the HTTP(S) vhost configuration and the hosted DNS configuration, which
> is good enough for TLS-SNI-modified-to-use-subdomain and HTTP-01, but
> won't allow users to respond to the DNS-01 and may or may not allow
> users to respond to TLS-SNI-01 challenges (the feature allowing
> responding to TLS-SNI-01 challenges is likely to suffer from security
> issue #1).


The problem in your thinking, which I wasn’t clear enough about I suppose,
is that those use cases are already met by other validation means and
there’s no assumption nor need for TLS-SNI. While you pose your solution
as an improvement, it in no way makes things easier or more widespread; it
simply limits what the method can do and overlaps with other methods.

In any event, I think if you want to continue to explore that line of
thinking, you’re more than free to within the IETF, where you can learn
more directly about the requirements rather than construct hypothetical
environments.

Just reread RFC7301.  While it does say that servers SHALL reject such
> connections (or at least not send back an ALPN indicating a selected
> value, as if not implementing the extension), I find it likely that some
> combinations of TLS implementation and application implementation will
> blindly accept whatever unknown protocol identifier a client lists as
> the only option.


That is completely unproductive speculative strawmanning that doesn’t allow
for productive dialog. More 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Jakob Bohm via dev-security-policy

On 11/01/2018 01:08, Ryan Sleevi wrote:

On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Agree.

Hence my suggestion that TLS-SNI-0next use a name under the customer's
domain (such as the name used for DNS-01), not a name under .invalid.



I thought it had been pointed out, but that doesn't actually address the
vulnerability. It presumes a hierarchical nature of certificate/account
management that simply does not align with how the real world and real
providers work.



There are TWO related vulnerabilities at work here:

1. A lot of hosting providers allow users to provision certificates for
  whatever.acme.invalid on SNI capable hosts, even though those users
  are not owners of whatever domain was issued a challenge for the
  number in the "whatever" part.  Other than adding code to specifically
  block "acme.invalid" to every software stack/configuration used by
  hosting providers, this is almost unsolvable at the host provider end,
  thus it may be easier to change the TLS-SNI-xx method in some way.

2. A much smaller group of hosting providers allow users to set up
  hosting for subdomains of domains already provisioned for other users
  (e.g. user B setting up hosting for whatever.acme.example.com when
  user A is already using the host for example.com).  This case is not
  solved by changing the SNI challenge to be a subdomain of the domain
  being validated.  But since this is a smaller population of hosting
  providers, getting them to at least enforce that the parent domain
  user needs to authorize which other users can host a subdomain with
  them is much more tractable, especially as it has obvious direct
  security benefits outside the ACME context.

(Hosting providers who allow uploading certificates for the specific
DNS/SNI names of other users are a security problem in itself, as it
could allow e.g. uploading an untrusted exact domain cert to disrupt
another user's site having only a wildcard certificate).

Note that neither issue #1, nor issue #2 involves any kind of DNS
checking or walking, as it is perfectly OK for either or both involved
domains to not point their DNS at the configured server at any given
point in time.  Of course the CA would use their view of the DNS to 
locate the host that will be probed for the challenge certificate, but 
the actual host need not.
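The mitigation for issue #1 would amount to per-stack special-casing of the validation namespace, which is exactly what makes it impractical to deploy everywhere. As a sketch of what such a provider-side check might look like (hypothetical helper, assuming the provider tracks which domains each customer is authorized for):

```python
def sni_name_allowed(requested_name: str, customer_domains: set) -> bool:
    """Decide whether a customer may provision a certificate/vhost
    for a given SNI name.

    Blocks all names under .acme.invalid (issue #1), and otherwise
    allows only the customer's own domains and their subdomains.
    This is the special-casing that would have to be added to every
    software stack/configuration used by hosting providers.
    """
    name = requested_name.lower().rstrip(".")
    if name == "acme.invalid" or name.endswith(".acme.invalid"):
        return False
    return any(name == d or name.endswith("." + d)
               for d in customer_domains)
```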


If a popular hosting package such as DirectAdmin suffers from issue #2, 
then that would rule out the subdomain solution.



I can understand why it might seem intuitive - and, I agree, for providers
that create a lock between customer<->domain hierarchy, that might work -
but I would assert that they're not unique. And given that the concern is
precisely about those that *don't* do such bonding, it simply fails as a
solution.

In short, any solution that relies solely on the name will be technically
deficient in the real world, as this issue shows us. So any 'solution' that
proposes to shift the names around misunderstands that risk.



Disagree.

In the world of real hosting providers, users often don't get
to control the DNS of a domain purchased through that hosting provider,
while they might still have the ability to "purchase" (for free from
letsencrypt.org) their own certificates and the ability to configure
simple aspects of their website, such as available files.



If they can't control the DNS (for permission reasons), then they didn't
really purchase the domain. If they can't control the DNS for technical
reasons, then that's a deficiency of the hosting provider, and that doesn't
mean we should weaken the validation methods to accommodate those hosts who
can't invest in infrastructure.


Reality at many providers is like that.  Users typically need to go
through hoops to transfer their domains to a 3rd party DNS hoster that
allows them to change DNS entries, and then the original hosting
provider stops helping them with their "unsupported" configuration,
thereby forcing them to switch to more expensive hosting providers too.

On the other hand, such providers will often (included or at extra fee)
allow provisioning arbitrary subdomains that are then typically added to
the HTTP(S) vhost configuration and the hosted DNS configuration, which
is good enough for TLS-SNI-modified-to-use-subdomain and HTTP-01, but
won't allow users to respond to the DNS-01 and may or may not allow
users to respond to TLS-SNI-01 challenges (the feature allowing
responding to TLS-SNI-01 challenges is likely to suffer from security
issue #1).

The overall HTTPS-everywhere goal would fail if we restricted ACME to
only "the best" providers running "the most popular" server software in
"the latest version".




But wouldn't the backward compatibility features of TLS itself (and/or
some permissive TLS / https implementations) either ignore ALPN
extensions when they "know" they are only going to serve up HTTP/1.x
(not HTTP/SPDY) or complete the 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matt Palmer via dev-security-policy
On Wed, Jan 10, 2018 at 05:24:41PM +, Gervase Markham via 
dev-security-policy wrote:
> On 10/01/18 17:04, Matthew Hardeman wrote:
> > That seems remarkably deficient.  No other validation mechanism which is
> > accepted by the community relies upon specific preventative behavior by any
> > number of random hosting companies on the internet.
> 
> I don't think that's true. If your hosting provider allows other sites
> to respond to HTTP requests for your domain, there's a similar
> vulnerability in the HTTP-01 checker.

That's quite different, though, from your hosting provider allowing other
sites to respond to SNI requests for some completely other domain that
happens to then authorise certificate issuance for your domain.

> Or, if an email provider allows people to claim any of the special email
> addresses, there's a similar vulnerability in email-based methods.

Yeah, and that's a continuing gift of amusing blog posts ("check out who I
got a certificate for this time!").  I'd hope we'd all have learnt from
that, though, and not be looking to cheer on other validation methods that
suffer from the same problems.  Playing whack-a-mole with hosting providers
to get them to do something that is *only* needed to secure certificate
issuance, and provides zero operational benefit otherwise, seems like a
losing proposition.

- Matt



Re: Incident report: Failure to verify authenticity for some partner requests

2018-01-10 Thread Wayne Thayer via dev-security-policy
Thank you for the report Tim. I just created
https://bugzilla.mozilla.org/show_bug.cgi?id=1429639 to track this issue.
Please follow up in the bug and on this thread.

- Wayne

On Wed, Jan 10, 2018 at 2:24 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
>
> Hi everyone,
>
> There was a bug in our OEM integration that led to a lapse in the
> verification of authenticity of some OV certificate requests coming in
> through the reseller/partner system.
>
> As you know, BR 3.2.5 requires CAs to verify the authenticity of a request
> for an OV certificate through a Reliable Method of Communication (RMOC).
> Email can be a RMOC, but in these cases, the email address was a
> constructed
> email address as in BR 3.2.2.4.4.  Despite the fact that these addresses
> are
> standardized in RFC 2142 or elsewhere, we do not believe this meets the
> standard of "verified using a source other than the Applicant
> Representative."
>
> The issue was discovered by TBS Internet on Dec 30, 2017. Apologies for the
> delay in reporting this. Because of the holidays, it took longer than we
> wanted to collect the data we needed.  We patched the system to prevent
> continued use of constructed emails for authenticity verification early,
> but
> getting the number of impacted orgs took a bit more time. We are using the
> lessons learned to implement changes that will benefit overall user
> security
> as we migrate the legacy Symantec practices and systems to DigiCert.
>
> Here's the incident report:
>
> 1.How your CA first became aware of the problem (e.g. via a problem
> report submitted to your Problem Reporting Mechanism, via a discussion in
> mozilla.dev.security.policy, or via a Bugzilla bug), and the date.
>
> Email from JP at TBS about the issue on Dec 30, 2017.
>
> 2.A timeline of the actions your CA took in response.
>
> A. Dec 30, 2017 - Received report that indirect accounts did not require a
> third-party source for authenticity checks. Constructed emails bled from
> the
> domain verification approval list to the authenticity approval list.
> B. Dec 30, 2017 - Investigation began. Shut off email verification of
> authenticity.
> C. Jan 3, 2018 - Call with JP to investigate what he was seeing and
> confirmed that all indirect accounts were potentially impacted.
> D. Jan 3, 2018 - Fixed issue where constructed emails were showing as a
> permitted address for authenticity verification.
> E. Jan 5, 2018 - Invalidated all indirect orders' authenticity checks.
> Started calling on verified numbers to confirm authenticity for impacted
> accounts.
> F. Jan 6, 2018 - Narrowed scope to only identify customers impacted (where
> the validation staff used a constructed email rather than a verified
> number).
> G. Jan 10, 2018 - This disclosure.
>
> Ongoing:
> H. Reverification of all impacted accounts
> I. Training of verification staff on permitted authenticity verification
>
> 3.Confirmation that your CA has stopped issuing TLS/SSL certificates
> with the problem.
>
> Confirmed. Email verification of authenticity remains disabled until we can
> ensure additional safeguards.
>
> 4.A summary of the problematic certificates. For each problem: number
> of
> certs, and the date the first and last certs with that problem were issued.
>
> There are 3,437 orgs impacted, with a total of 5,067 certificates.  The
> certificates were issued between December 1st and December 30th.
>
> 5.The complete certificate data for the problematic certificates. The
> recommended way to provide this is to ensure each certificate is logged to
> CT and then list the fingerprints or crt.sh IDs, either in the report or as
> an attached spreadsheet, with one list per distinct problem.
>
> Will add to CT once we grab it all.  I will provide a list of affected
> certificates in a separate email (it's big, so it was getting this post
> moderated).
>
> 6.Explanation about how and why the mistakes were made or bugs
> introduced, and how they avoided detection until now.
>
> In truth, it comes down to a short timeframe to implement the
> Symantec-DigiCert system integration and properly train everyone we hired.
> We are implementing lessons learned to correct this and improve security
> overall as we migrate legacy Symantec practices and systems to DigiCert. In
> this case, there are mitigating controls.  For example, these are mostly
> existing Symantec certs that are migrating to the DigiCert backend. The
> verification by Symantec previously means that the number of potentially
> problematic certs is pretty low. There's also a mitigating factor that we
> did not use method 1 to confirm domain control. In each case, someone from
> the approved constructed emails had to sign off on the certificate before
> issuance.  This is limited to OV certificates, meaning EV certificates were
> not impacted. Despite the mitigating factors, we believe this is a
> compliance issue, even though we believe the 
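For reference, the "constructed" addresses at issue (BR 3.2.2.4.4, overlapping RFC 2142's well-known mailbox names) are five fixed local parts applied to the authorization domain; they are acceptable for demonstrating domain control, but, as the report concedes, they are not a source independent of the Applicant Representative for BR 3.2.5 authenticity checks. A sketch:

```python
# The five local parts BR 3.2.2.4.4 permits for "constructed" addresses.
CONSTRUCTED_LOCAL_PARTS = (
    "admin", "administrator", "webmaster", "hostmaster", "postmaster",
)

def constructed_addresses(authorization_domain: str) -> list:
    """Build the constructed email addresses for a domain."""
    return ["%s@%s" % (lp, authorization_domain)
            for lp in CONSTRUCTED_LOCAL_PARTS]
```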

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wednesday, January 10, 2018 at 6:17:34 PM UTC-6, Ryan Sleevi wrote:
> On Wed, Jan 10, 2018 at 5:53 PM, Matthew Hardeman 
> wrote:
> >
> > That, indeed, is a chilling picture.  I'd like to think the community's
> > response to any such stretch of the rules would be along the lines of "Of
> > course, you're entirely correct.  Technically this was permitted.  Oh, by
> > the way, we're pulling your roots, we've decided you're too clever to be
> > trusted."
> >
> 
> GlobalSign proposed this as a new method -
> https://cabforum.org/pipermail/validation/2017-May/000553.html
> Amazon pointed out that .10 already permitted this -
> https://cabforum.org/pipermail/validation/2017-May/000557.html
> 
> Your reaction means you must be one of the "worrywarts who treat
> certificate owners like criminals" though, in the words of Steve Medin of
> Symantec/Digicert -
> https://cabforum.org/pipermail/validation/2017-May/000554.html , who was
> also excited because of the 'brand stickiness' it would create (the term
> typically used to refer to the likelihood or difficulty for someone to
> switch to another, potentially more competent CA - in this case, due to the
> ease of the lower security)

Wow.  The economic incentives for behaving badly clearly were at work in those.

I think I am one of those worrywarts, in fact.

Also, I just reread and contemplated the .10 method's definition.  It's 
lacking.  A legitimate definition of "on the authorization domain name" would 
have clarified a normative reference for what accessing that  over TLS means 
and likely would have included that the SNI needed to be the authorization 
domain name.  As such, it's really just a tenuous land-grab that TLS-SNI-01 is 
compliant with .10.

One of these days I need to sign the IPR waiver and join the cabforum mailing 
list as an interested party.



Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 10, 2018 at 5:53 PM, Matthew Hardeman 
wrote:

> For comparison of "What could be worse", you could imagine a CA using the
>> .10 method to assert the Random Value (which, unlike .7, is not bounded in
>> its validity) is expressed via the serial number. In this case, a CA could
>> validate a request and issue a certificate. Then, every 3 years (or 2 years
>> starting later this year), connect to the host, see that it's serving their
>> previously issued certificate, assert that the "Serial Number" constitutes
>> the Random Value, and perform no other authorization checks beyond that. In
>> a sense, fully removing any reasonable assertion that the domain holder has
>> authorized (by proof of acceptance) the issuance.
>>
>
> That, indeed, is a chilling picture.  I'd like to think the community's
> response to any such stretch of the rules would be along the lines of "Of
> course, you're entirely correct.  Technically this was permitted.  Oh, by
> the way, we're pulling your roots, we've decided you're too clever to be
> trusted."
>

GlobalSign proposed this as a new method -
https://cabforum.org/pipermail/validation/2017-May/000553.html
Amazon pointed out that .10 already permitted this -
https://cabforum.org/pipermail/validation/2017-May/000557.html

Your reaction means you must be one of the "worrywarts who treat
certificate owners like criminals" though, in the words of Steve Medin of
Symantec/Digicert -
https://cabforum.org/pipermail/validation/2017-May/000554.html , who was
also excited because of the 'brand stickiness' it would create (the term
typically used to refer to the likelihood or difficulty for someone to
switch to another, potentially more competent CA - in this case, due to the
ease of the lower security)


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> Agree.
>
> Hence my suggestion that TLS-SNI-0next use a name under the customer's
> domain (such as the name used for DNS-01), not a name under .invalid.


I thought it had been pointed out, but that doesn't actually address the
vulnerability. It presumes a hierarchical nature of certificate/account
management that simply does not align with how the real world and real
providers work.

I can understand why it might seem intuitive - and, I agree, for providers
that create a lock between customer<->domain hierarchy, that might work -
but I would assert that they're not unique. And given that the concern is
precisely about those that *don't* do such bonding, it simply fails as a
solution.

In short, any solution that relies solely on the name will be technically
deficient in the real world, as this issue shows us. So any 'solution' that
proposes to shift the names around misunderstands that risk.


> Disagree.
>
> In the world of real hosting providers, users often don't get
> to control the DNS of a domain purchased through that hosting provider,
> while they might still have the ability to "purchase" (for free from
> letsencrypt.org) their own certificates and the ability to configure
> simple aspects of their website, such as available files.
>

If they can't control the DNS (for permission reasons), then they didn't
really purchase the domain. If they can't control the DNS for technical
reasons, then that's a deficiency of the hosting provider, and that doesn't
mean we should weaken the validation methods to accommodate those hosts who
can't invest in infrastructure.

But wouldn't the backward compatibility features of TLS itself (and/or
> some permissive TLS / https implementations) either ignore ALPN
> extensions when they "know" they are only going to serve up HTTP/1.x
> (not HTTP/SPDY) or complete the TLS handshake before deciding that they
> don't have an "acme" service to connect to?
>

No. You've misunderstood how ALPN works then.
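For context, RFC 7301 requires the opposite of the permissive handling speculated about above: the server selects a protocol by its own preference order, and if it supports none of the protocols the client advertises it SHALL abort the handshake with a fatal no_application_protocol alert. The selection logic, sketched (names are mine):

```python
class NoApplicationProtocol(Exception):
    """Stands in for RFC 7301's fatal no_application_protocol alert."""

def select_alpn_protocol(server_prefs, client_offered):
    """Server-side ALPN selection per RFC 7301: return the first
    protocol in the server's preference order that the client
    offered; abort the handshake if there is no overlap."""
    for proto in server_prefs:
        if proto in client_offered:
            return proto
    raise NoApplicationProtocol()
```

A conformant stack therefore never silently accepts an unknown protocol identifier; whether every deployed stack is conformant is, of course, the empirical question raised earlier in the thread.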


> And even if it wasn't so, most sites that do "control" the whole stack,
> and run on their own dedicated machine and IP probably lack the ability
> and/or patience to modify the https code in "their" web server.
>

Do you believe people are bespoke minting these ACME challenges on the fly?
Because that's not how it works in the real world - it's based on
tooling, generally directly integrated in the server to automatically
enroll, manage, and renew (and indeed, that is explicitly what is
recommended as the ACME integration). In such a model - i.e. how it works
today - that lack of ability/patience is a non-issue, because it's simply
handled by the ACME client without any additional work by the server
operator - the same as it is today.


> This may come back to the unfortunate use of BR language to redefine the
> plain word "misissuance".
>

You can blame the BRs, but this is really simply a notion of the language
of PKI, and this is not a new debate by any means, so probably not worth
haggling about here, inasmuch as the nuance doesn't alter the
conclusions, and the point stands that "The CA met their obligations, but
the undesirable result happened". The solution for that is to fix the
requirements to prevent undesirable results. If "The CA didn't meet their
obligations", well, that's a different conversation.


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Jakob Bohm via dev-security-policy

On 10/01/2018 18:39, Matthew Hardeman wrote:


Here again, I think we have a problem.  It's regarded as normal and
acceptable at many web host infrastructures to pre-stage sites for
domain-labels not yet in use to allow for development and test deployment.
Split-horizon DNS, in-browser overrides, /etc/hosts entries, etc. are used
to direct the "dev and test browser" to the right infrastructure for the
pending label.  It will be an uphill battle to get arbitrary web hosts to
implement any one of the mitigations you've set out.  Especially when it
reduces a functionality some of their clients like and doesn't seem to get
them any tangible benefit.



Another common use of setting up web hosting for a label before pointing
it there is to simply keep an existing site running on another host
until the new one is fully configured and validated, then switching over
the DNS to the new server (with the usual DNS-caching overlap in time).

A specific use of hosting ".invalid" domain labels is to temporarily
disable a site that is in some state of maintenance/construction/etc.
with an intent to switch to a valid label later, especially if the
intended valid label is currently pointing to another vhost on the same
host.

Thus setting up hosting for previously unknown domain labels that don't
yet point to the host is the normal situation whenever customers move to
that host.  And with all the new TLDs allowed by ICANN, it is no longer
practical or reliable for providers to keep whitelists and blacklists of
hostable TLDs.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Jakob Bohm via dev-security-policy

On 10/01/2018 23:53, Matthew Hardeman wrote:

On Wed, Jan 10, 2018 at 3:57 PM, Ryan Sleevi  wrote:




Note that the presumptive discussion re: .well-known has ignored that the
Host header requirements are underspecified, thus the fundamental issue
still exists for that too. That said, there absolutely has been both
tension regarding and concern over the use of file-based or
certificate-based proofs of control, rather than DNS-based proofs. This is
a complex tradeoff though - unquestionably, the ability to use the
certificate-based proof has greatly expanded the ease in which to get a
certificate, and for the vast majority of those certificates, this is not
at all a security issue.



As you note, http-01, as strictly specified, has some weaknesses.  That Host
header requirement should be shored up.  Redirect chasing, if any, should
be shored up.  Etc, etc.  I do believe that LE's implementation largely
hedges the major vulnerability.  What vulnerability remains in the mix
requires that a web host literally fail in their duty to protect the
resources served up under the very label being validated.  The difference,
from a web host's perspective, between that duty and the duty we would like
to impose upon them in TLS-SNI-01 is that it is commonly expected that the
web host will take responsibility for ensuring that only that customer of
theirs paying them for www.example.com will be able to publish content at
www.example.com.  Additionally, the community, the customer, and the web
host can all understand, without a great deal of complex thought, why
responsibility for a resource under the correct domain label must rest with
the customer.  What's less clear to all, I should think, is why the web
host has a duty not to serve some resource under a totally unrelated name
like rofl.blah.acme.invalid in defense of its customer www.example.com.



Agree.

Hence my suggestion that TLS-SNI-0next use a name under the customer's
domain (such as the name used for DNS-01), not a name under .invalid.


Ultimately, as you suggest, I wonder if the [hehehe] "shocking" conclusion
of all of this is that, perhaps, if we seek to demonstrate meaningful
control of a domain or DNS label, the proper way to do so is by requiring
specific manipulation of only the DNS infrastructure, as, for example, in
dns-01?  DNS infrastructure and its behavior are literally in scope of
demonstration of meaningful control of a domain label.  Any behavior on
part of any web host really technically isn't.  I do understand the reasons
it's presently allowed that non-DNS mechanisms be used.



Disagree.

In the world of real hosting providers, sometimes users often don't get
to control the DNS of a domain purchased through that hosting provider,
while they might still have the ability to "purchase" (for free from
letsencrypt.org) their own certificates and the ability to configure
simple aspects of their website, such as available files.






For comparison of "What could be worse", you could imagine a CA using the
.10 method to assert the Random Value (which, unlike .7, is not bounded in
its validity) is expressed via the serial number. In this case, a CA could
validate a request and issue a certificate. Then, every 3 years (or 2 years
starting later this year), connect to the host, see that it's serving their
previously issued certificate, assert that the "Serial Number" constitutes
the Random Value, and perform no other authorization checks beyond that. In
a sense, fully removing any reasonable assertion that the domain holder has
authorized (by proof of acceptance) the issuance.



That, indeed, is a chilling picture.  I'd like to think the community's
response to any such stretch of the rules would be along the lines of "Of
course, you're entirely correct.  Technically this was permitted.  Oh, by
the way, we're pulling your roots, we've decided you're too clever to be
trusted."






That being the case, I would recommend that the proper change to the
TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
of those mechanisms.



I'm not sure I agree that haste is advisable or desirable, but I'm still
evaluating. At the core, we're debating whether something should be opt-out
by default (which blacklisting .invalid is essentially doing) or opt-in. An
opt-in mechanism cannot be signaled in-band within the certificate, but may
be signalable in-band to the TLS termination, such as via a TLS extension
or via the use of an ALPN protocol identifier (such as "acme").



The TLS extension or ALPN protocol seem feasible to secure, though
obviously there's a lot of infrastructure change and deployment to get
there.



But wouldn't the backward compatibility features of TLS itself (and/or
some permissive TLS / https implementations) either ignore ALPN
extensions when they "know" they are only going to serve up HTTP/1.x
(not HTTP/SPDY) or complete the TLS 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
You've just triggered me with an early 2000s flashback.

Now I can't get that "So fresh and so clean, clean..." rap line out of my
head from OutKast's "So Fresh, So Clean".

On Wed, Jan 10, 2018 at 4:11 PM, Tim Hollebeek 
wrote:

>
>
> My "Freshness Value" ballot should fix this, by requiring that Freshness
> Values actually be fresh.
>
> -Tim
>
>


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 3:57 PM, Ryan Sleevi  wrote:

>
>
> Note that the presumptive discussion re: .well-known has ignored that the
> Host header requirements are underspecified, thus the fundamental issue
> still exists for that too. That said, there absolutely has been both
> tension regarding and concern over the use of file-based or
> certificate-based proofs of control, rather than DNS-based proofs. This is
> a complex tradeoff though - unquestionably, the ability to use the
> certificate-based proof has greatly expanded the ease in which to get a
> certificate, and for the vast majority of those certificates, this is not
> at all a security issue.
>
>
As you note, http-01, as strictly specified, has some weaknesses.  That Host
header requirement should be shored up.  Redirect chasing, if any, should
be shored up.  Etc, etc.  I do believe that LE's implementation largely
hedges the major vulnerability.  What vulnerability remains in the mix
requires that a web host literally fail in their duty to protect the
resources served up under the very label being validated.  The difference,
from a web host's perspective, between that duty and the duty we would like
to impose upon them in TLS-SNI-01 is that it is commonly expected that the
web host will take responsibility for ensuring that only that customer of
theirs paying them for www.example.com will be able to publish content at
www.example.com.  Additionally, the community, the customer, and the web
host can all understand, without a great deal of complex thought, why
responsibility for a resource under the correct domain label must rest with
the customer.  What's less clear to all, I should think, is why the web
host has a duty not to serve some resource under a totally unrelated name
like rofl.blah.acme.invalid in defense of its customer www.example.com.
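To make the "shoring up" concrete: below is a minimal sketch (host names
and tokens are hypothetical) of an http-01 responder that refuses to
answer for Host header values it was not explicitly configured for, which
is exactly the tightening of the Host requirement discussed above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical state: key authorizations this server has agreed to serve,
# keyed by (Host, token) so that requests for unknown Hosts fail closed.
CHALLENGES = {("www.example.com", "tok1"): "tok1.jwk-thumbprint"}
PREFIX = "/.well-known/acme-challenge/"

def lookup(host_header, path):
    """Return the key authorization to serve, or None for a 404."""
    if not path.startswith(PREFIX):
        return None
    # Strip any :port suffix and normalize case before matching.
    host = (host_header or "").split(":")[0].lower()
    return CHALLENGES.get((host, path[len(PREFIX):]))

class Acme01Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = lookup(self.headers.get("Host", ""), self.path)
        if body is None:
            # Refusing unknown Host values is the point of the sketch.
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode("ascii"))

if __name__ == "__main__":
    HTTPServer(("", 80), Acme01Handler).serve_forever()
```

A request with Host: evil.example for the same token gets a 404 rather
than the key authorization, even on a shared IP.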

Ultimately, as you suggest, I wonder if the [hehehe] "shocking" conclusion
of all of this is that, perhaps, if we seek to demonstrate meaningful
control of a domain or DNS label, the proper way to do so is by requiring
specific manipulation of only the DNS infrastructure, as, for example, in
dns-01?  DNS infrastructure and its behavior are literally in scope of
demonstration of meaningful control of a domain label.  Any behavior on
part of any web host really technically isn't.  I do understand the reasons
it's presently allowed that non-DNS mechanisms be used.
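For reference, the dns-01 proof favored above boils down to publishing a
hash of the key authorization in a TXT record at _acme-challenge.<domain>.
A sketch per my reading of the ACME drafts (the token and thumbprint
values in the usage note are hypothetical):

```python
import base64
import hashlib

def b64url(data):
    """Unpadded base64url, as used throughout ACME."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def dns01_txt_value(token, jwk_thumbprint):
    """Value of the TXT record at _acme-challenge.<domain>.

    key_authorization = token "." base64url(JWK thumbprint)
    TXT value         = base64url(SHA-256(key_authorization))
    """
    key_authorization = "%s.%s" % (token, jwk_thumbprint)
    return b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())
```

Usage: dns01_txt_value("<token-from-CA>", "<account-key-thumbprint>")
yields the 43-character value to publish; the CA then queries DNS
directly, so no web host behavior is in the trust path at all.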


>
> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
> its validity) is expressed via the serial number. In this case, a CA could
> validate a request and issue a certificate. Then, every 3 years (or 2 years
> starting later this year), connect to the host, see that it's serving their
> previously issued certificate, assert that the "Serial Number" constitutes
> the Random Value, and perform no other authorization checks beyond that. In
> a sense, fully removing any reasonable assertion that the domain holder has
> authorized (by proof of acceptance) the issuance.
>

That, indeed, is a chilling picture.  I'd like to think the community's
response to any such stretch of the rules would be along the lines of "Of
course, you're entirely correct.  Technically this was permitted.  Oh, by
the way, we're pulling your roots, we've decided you're too clever to be
trusted."


>
>
>> That being the case, I would recommend that the proper change to the
>> TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
>> of those mechanisms.
>>
>
> I'm not sure I agree that haste is advisable or desirable, but I'm still
> evaluating. At the core, we're debating whether something should be opt-out
> by default (which blacklisting .invalid is essentially doing) or opt-in. An
> opt-in mechanism cannot be signaled in-band within the certificate, but may
> be signalable in-band to the TLS termination, such as via a TLS extension
> or via the use of an ALPN protocol identifier (such as "acme").
>
>
The TLS extension or ALPN protocol seem feasible to secure, though
obviously there's a lot of infrastructure change and deployment to get
there.


>
> As long as the web hosting infrastructure does not automatically create
>> new contexts for heretofore never seen labels, it won't be possible to
>> fully validate in an automated fashion whether or not a given hosting
>> infrastructure would or would not allow any random customer to create some
>> blah.blah.acme.invalid label and bind it to a certificate that said random
>> customer controls.  Because of the various incentives and motivations, it
>> seems almost inevitable that it would eventually occur.  When a
>> mis-issuance arises resulting from that scenario, I wonder how the
>> community would view that?
>>
>
> I'm not sure I'd classify it as misissuance, no more than those who were
> able to get certificates by 

RE: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Tim Hollebeek via dev-security-policy

> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
its
> validity) is expressed via the serial number. In this case, a CA could
validate a
> request and issue a certificate. Then, every 3 years (or 2 years starting
later this
> year), connect to the host, see that it's serving their previously issued
> certificate, assert that the "Serial Number" constitutes the Random Value,
and
> perform no other authorization checks beyond that. In a sense, fully
removing
> any reasonable assertion that the domain holder has authorized (by proof
of
> acceptance) the issuance.

My "Freshness Value" ballot should fix this, by requiring that Freshness
Values actually be fresh.

-Tim





Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 10, 2018 at 4:37 PM, Matthew Hardeman 
wrote:
>
> In the exact text above, what I meant by "create the proper zone in
> .acme.invalid" was to create that web hosting context (or actually set of
> web hosting contexts) and bind to the Host names that are the
> z(i)[0...32].z(i)[33...64].acme.invalid labels that the attacker knows to
> be the set of those which may arrive in the TLS SNI name values for the
> validation calls from the CA to the TLS infrastructure.  I clarify again
> that I'm not speaking of any real DNS mapping at all.  I'm speaking of a
> mapping between a received TLS SNI label to a web hosting context on the
> hosting infrastructure.
>

Got it. Yes, a large number of web hosting providers allow for potentially
binding names not yet bound to DNS.

This becomes an issue iff they share the same IP (which is a far more
varied story) and they allow control over the SNI<->certificate mapping
(which is also far more variable). So the lack of a binding to a 'real'
name in and of itself is not an issue, merely the confluence of things.


> If this is the case, I can only conclude that all presently proposed
> enhancements to TLS-SNI-01 and TLS-SNI-02 validation, including my own
> rough sketch recommendations, are deficient for improvement of security and
> all of these TLS-SNI validation mechanisms are materially less secure and
> less useful than the other ACME methods that Let's Encrypt presently
> implements.
>

Note that the presumptive discussion re: .well-known has ignored that the
Host header requirements are underspecified, thus the fundamental issue
still exists for that too. That said, there absolutely has been both
tension regarding and concern over the use of file-based or
certificate-based proofs of control, rather than DNS-based proofs. This is
a complex tradeoff though - unquestionably, the ability to use the
certificate-based proof has greatly expanded the ease in which to get a
certificate, and for the vast majority of those certificates, this is not
at all a security issue.

I think the apt comparison is about introducing a 'new' reserved e-mail
address, in addition to the ones already in the Baseline Requirements. The
conversation being held now is a natural consequence of removing the 'any
other' method and performing more rigorous examination of the application
in practice.

For comparison of "What could be worse", you could imagine a CA using the
.10 method to assert the Random Value (which, unlike .7, is not bounded in
its validity) is expressed via the serial number. In this case, a CA could
validate a request and issue a certificate. Then, every 3 years (or 2 years
starting later this year), connect to the host, see that it's serving their
previously issued certificate, assert that the "Serial Number" constitutes
the Random Value, and perform no other authorization checks beyond that. In
a sense, fully removing any reasonable assertion that the domain holder has
authorized (by proof of acceptance) the issuance.


> All the recommendations and guidance in the world is unlikely to timely
> change the various (and there are so many) hosting providers' behavior with
> regards to allowing creation of web hosting contexts for labels like
> "*.*.acme.invalid".  The CAs are beholden to the CABforum and root
> programs.  The various web hosts are not.
>

Agreed; although, pragmatically, I hope that the visibility of the issue,
and the excellent documentation provided by Let's Encrypt, may allow us the
opportunity to provide a graceful transition into a more robust
implementation and a more restrictive version of .10 over the coming months.


> That being the case, I would recommend that the proper change to the
> TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
> of those mechanisms.
>

I'm not sure I agree that haste is advisable or desirable, but I'm still
evaluating. At the core, we're debating whether something should be opt-out
by default (which blacklisting .invalid is essentially doing) or opt-in. An
opt-in mechanism cannot be signaled in-band within the certificate, but may
be signalable in-band to the TLS termination, such as via a TLS extension
or via the use of an ALPN protocol identifier (such as "acme").

End-users (e.g. those who are not cloud) with full-stack control of their
TLS termination can 'simply' add the "acme" ALPN advertisement to signal
their configuration.
Cloud providers that provide a degree of segmentation and isolation can
similarly allow the "acme" ALPN protocol to be negotiated and complete the
enrollment (either themselves, as some providers do, or by allowing their
customers to do so).
Providers in that proverbial 'long tail' that don't update to explicitly
advertise the TLS extension or ALPN identifier (or equivalent TLS handshake
signal) would otherwise fail the ACME challenge, since it wouldn't be clear
that it was safe to do so.
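A sketch of what the CA-side probe could look like under this proposal,
using Python's ssl module. The "acme" ALPN identifier and the fail-closed
behavior are assumptions of the proposal being discussed, not an existing
standard:

```python
import socket
import ssl

def host_advertises_acme(host, port=443, sni=None):
    """Probe whether a TLS endpoint negotiates the (hypothetical) "acme"
    ALPN protocol.  If the server does not select "acme", it has not
    opted in, and the challenge should fail closed.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the challenge cert is self-signed
    ctx.set_alpn_protocols(["acme"])
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni or host) as tls:
            return tls.selected_alpn_protocol() == "acme"
```

The long-tail case falls out naturally: a legacy server that ignores ALPN
simply never selects "acme", so the probe returns False and validation
stops there.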

As long as the web hosting infrastructure does not 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
As another tangent question on the advisability of resuming the TLS-SNI-01
validation method, can/will Let's Encrypt share any data on prevalence of
the various validation mechanisms over time and how they stack up against
each other in terms of prevalence.  Also, it might be helpful to know
attempted versus completed successfully.

I wonder how big a problem it is if all of the TLS-SNI-01/02 mechanisms go
away?

On Wed, Jan 10, 2018 at 3:45 PM, Matthew Hardeman 
wrote:

> I agree with Nick's questions, and I can certainly see the relevance in
> matching what actually happens out there to the effectiveness and
> appropriateness of the various domain validation mechanisms.
>
> Having said that, I think it should effectively be a "read only" affair,
> shaping community and CA response to the conditions that exist rather than
> striving for better conditions.  I think it would be impractical to assume
> that the community can persuade the entire web hosting industry to effect
> meaningful universal change in a relevantly short time frame.
>
> On Wed, Jan 10, 2018 at 3:05 PM, Nick Lamb via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Wed, 10 Jan 2018 15:10:41 +0100
>> Patrick Figel via dev-security-policy
>>  wrote:
>>
>> > A user on Hacker News brought up the possibility that the fairly
>> > popular DirectAdmin control panel might also demonstrate the
>> > problematic behaviour mentioned in your report[1].
>>
>> Although arguably tangential to the purpose of m.d.s.policy, I think it
>> would be really valuable to understand what behaviours are actually out
>> there and in what sort of volumes.
>>
>> I know from personal experience that my own popular host lets me create
>> web hosting for a 2LD I don't actually control. I had management
>> agreement to take control, began setting up the web site and then
>> technical inertia meant control over the name was never actually
>> transferred, the site is still there but obviously in that case needs
>> an /etc/hosts override to visit from a normal web browser.
>>
>> Would that host:
>>
>> * Let me do this even if another of their customers was hosting that
>>   exact site ? If so, would mine sometimes "win" over theirs, perhaps if
>>   they temporarily disabled access or due to some third criteria like
>>   our usernames or seniority of account age ?
>>
>> * Let me do this for sub-domains or sub-sub-domains of other customers,
>>   including perhaps ones which have a wildcard DNS entry so that "my"
>>   site would actually get served to ordinary users ?
>>
>> * Let me do this for DNS names that can't exist (like *.acme.invalid,
>>   leading to the Let's Encrypt issue we started discussing) ?
>>
>>
>> I don't know the answer to any of those questions, but I think that
>> even if they're tangential to m.d.s.policy somebody needs to find out,
>> and not just for the company I happen to use.
>
>


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
I agree with Nick's questions, and I can certainly see the relevance in
matching what actually happens out there to the effectiveness and
appropriateness of the various domain validation mechanisms.

Having said that, I think it should effectively be a "read only" affair,
shaping community and CA response to the conditions that exist rather than
striving for better conditions.  I think it would be impractical to assume
that the community can persuade the entire web hosting industry to effect
meaningful universal change in a relevantly short time frame.

On Wed, Jan 10, 2018 at 3:05 PM, Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, 10 Jan 2018 15:10:41 +0100
> Patrick Figel via dev-security-policy
>  wrote:
>
> > A user on Hacker News brought up the possibility that the fairly
> > popular DirectAdmin control panel might also demonstrate the
> > problematic behaviour mentioned in your report[1].
>
> Although arguably tangential to the purpose of m.d.s.policy, I think it
> would be really valuable to understand what behaviours are actually out
> there and in what sort of volumes.
>
> I know from personal experience that my own popular host lets me create
> web hosting for a 2LD I don't actually control. I had management
> agreement to take control, began setting up the web site and then
> technical inertia meant control over the name was never actually
> transferred, the site is still there but obviously in that case needs
> an /etc/hosts override to visit from a normal web browser.
>
> Would that host:
>
> * Let me do this even if another of their customers was hosting that
>   exact site ? If so, would mine sometimes "win" over theirs, perhaps if
>   they temporarily disabled access or due to some third criteria like
>   our usernames or seniority of account age ?
>
> * Let me do this for sub-domains or sub-sub-domains of other customers,
>   including perhaps ones which have a wildcard DNS entry so that "my"
>   site would actually get served to ordinary users ?
>
> * Let me do this for DNS names that can't exist (like *.acme.invalid,
>   leading to the Let's Encrypt issue we started discussing) ?
>
>
> I don't know the answer to any of those questions, but I think that
> even if they're tangential to m.d.s.policy somebody needs to find out,
> and not just for the company I happen to use.


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 2:38 PM, Ryan Sleevi  wrote:


>
> I think it's important to point out that these levels of technical
> discussions are best directed to the IETF ACME WG, under the auspices of
> the IETF NoteWell - https://datatracker.ietf.org/wg/acme/about/
>

Noted.  If you think there's potentially merit in the modifications I've
rough sketched here, please indicate as much and I will consider attempting
to pursue as directed.


>
>
>> To the extent that this is true, I harbor significant concern that
>> TLS-SNI-01 could responsibly return to use.
>>
>> I also see a possibility that the mitigations in TLS-SNI-02 may be
>> ineffective in this case.  TLS-SNI-02 would prevent naive and automatic
>> accidental success of validations by some infrastructure, but an attacker
>> who can still create the proper zone in .acme.invalid and upload a custom
>> certificate to be served for this zone would still be able to succeed at
>> validation.
>>
>
> Can you explain what you mean by 'create a proper zone'? .invalid is an
> explicitly reserved TLD.
>

I apologize.  I realized almost immediately on posting that message that I
had erred significantly in overloading that word.  My prior message's
"zone" would have been better written as "web hosting context": roughly,
the configuration and resources that determine how a hosting architecture
responds to HTTP requests whose Host: header carries one of the domain
labels bound to that context, and, for TLS connections, to connections
reaching the hosting infrastructure with a TLS SNI name drawn from that
same set of bound labels.

You correctly point out that .invalid is a reserved TLD.  I imagine there
are a great many hosting infrastructures which allow creating such a new
web hosting context and then prospectively binding
not-yet-used-elsewhere-on-this-infrastructure DNS labels, without any kind
of actual DNS validation.  More importantly, I need not imagine it.  As
Patrick Figel pointed out, it is confirmed that DirectAdmin, a hosting
software stack with a not insignificant number of deployments, appears to
do just that.

I imagine that Mr. Thayer can add more to the conversation regarding the
reasons and level of market penetration and practice, but it does appear
that many shared hosting infrastructures will allow creating new
configurations without having yet pointed the matching real DNS entries to
the infrastructure.  There are pre-staging, development, testing, etc, etc,
reasons that some of the customers out there seem to want that.  Obviously,
there are better ways to handle that, and yet...  In so far as others have
already found examples, it is a market reality, unless there is compelling
evidence to the contrary.

In the exact text above, what I meant by "create the proper zone in
.acme.invalid" was to create that web hosting context (or actually set of
web hosting contexts) and bind to the Host names that are the
z(i)[0...32].z(i)[33...64].acme.invalid labels that the attacker knows to
be the set of those which may arrive in the TLS SNI name values for the
validation calls from the CA to the TLS infrastructure.  I clarify again
that I'm not speaking of any real DNS mapping at all.  I'm speaking of a
mapping between a received TLS SNI label to a web hosting context on the
hosting infrastructure.
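For readers following along, the z(i) labels above are derived by iterated
SHA-256 over the key authorization. A sketch from my reading of the ACME
TLS-SNI drafts (treat the details as approximate, not normative):

```python
import hashlib

def tls_sni_01_names(key_authorization, n=1):
    """SNI names a TLS-SNI-01 validator will probe, per my reading of the
    ACME drafts: Z(0) is the hex SHA-256 of the key authorization, Z(i)
    the hex SHA-256 of Z(i-1), and each probe name is
    Z(i)[0:32] "." Z(i)[32:64] ".acme.invalid".
    """
    names = []
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    for _ in range(n):
        names.append("%s.%s.acme.invalid" % (z[0:32], z[32:64]))
        z = hashlib.sha256(z.encode("ascii")).hexdigest()
    return names
```

The attack described above is an attacker who can predict or observe this
set of names and pre-bind matching web hosting contexts, with their own
self-signed certificates, on the shared infrastructure.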


>
>
>> However, even that plan only actually gains security if the hosting
>> infrastructure would generally apply protection for heretofore unknown
>> names which are children of existing boarded names on another customer's
>> account.  In other words, how likely is it that if I have a login at some
>> hosting company, and I have boarded on my account a hosting zone that
>> includes the labels www.example.com and example.com that a totally
>> separate
>> login would be allowed to prospectively create a zone called
>> notreallyexample.example.com?  If that's likely or even non-rare, there's
>> still a problem with the mechanism.
>>
>>
> It is likely and non-rare (infact, quite common as it turns out). There
> are very few that match domain authorizations in some way. Note that this
> is further 'difficult' because it would also require cloud providers be
> aware of the tree-walking notion of authorization domain name.
>
> So I don't think this buys any improvement over the status quo, and
> actually makes it considerably more complex and failure prone, due to the
> cross-sectional lookups, versus the fact that .invalid is a reserved TLD.
>

If this is the case, I can only conclude that all presently proposed
enhancements to TLS-SNI-01 and TLS-SNI-02 validation, including my own
rough sketch recommendations, are deficient for improvement of security and
all of these TLS-SNI validation mechanisms are materially less secure and
less useful than the other ACME methods that Let's Encrypt 

Incident report: Failure to verify authenticity for some partner requests

2018-01-10 Thread Tim Hollebeek via dev-security-policy
 

Hi everyone, 
 
There was a bug in our OEM integration that led to a lapse in the
verification of authenticity of some OV certificate requests coming in
through the reseller/partner system.
 
As you know, BR 3.2.5 requires CAs to verify the authenticity of a request
for an OV certificate through a Reliable Method of Communication (RMOC).
Email can be a RMOC, but in these cases, the email address was a constructed
email address as in BR 3.2.2.4.4.  Despite the fact that these addresses are
standardized in RFC 2142 or elsewhere, we do not believe this meets the
standard of "verified using a source other than the Applicant
Representative."
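For context, the constructed addresses in question are the five local
parts BR 3.2.2.4.4 enumerates (echoing RFC 2142); a sketch:

```python
# The five local parts BR 3.2.2.4.4 permits for *domain control* checks.
CONSTRUCTED_LOCAL_PARTS = ("admin", "administrator", "webmaster",
                           "hostmaster", "postmaster")

def constructed_emails(authorization_domain):
    """Constructed addresses for a domain.  The bug described here was
    using these same addresses for the BR 3.2.5 *authenticity* check,
    which requires a source other than the Applicant Representative.
    """
    return ["%s@%s" % (lp, authorization_domain)
            for lp in CONSTRUCTED_LOCAL_PARTS]
```

E.g. constructed_emails("example.com") yields admin@example.com through
postmaster@example.com; acceptable for proving domain control, but not an
independent source for verifying the authenticity of an OV request.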
 
The issue was discovered by TBS Internet on Dec 30, 2017. Apologies for the
delay in reporting this. Because of the holidays, it took longer than we
wanted to collect the data we needed.  We patched the system to prevent
continued use of constructed emails for authenticity verification early, but
getting the number of impacted orgs took a bit more time. We are using the
lessons learned to implement changes that will benefit overall user security
as we migrate the legacy Symantec practices and systems to DigiCert.   
 
Here's the incident report:
 
1.How your CA first became aware of the problem (e.g. via a problem
report submitted to your Problem Reporting Mechanism, via a discussion in
mozilla.dev.security.policy, or via a Bugzilla bug), and the date. 
 
Email from JP at TBS about the issue on Dec 30, 2017.  
 
2.A timeline of the actions your CA took in response. 
 
A. Dec 30, 2017 - Received report that indirect accounts did not require a
third-party source for authenticity checks. Constructed emails bled from the
domain verification approval list to the authenticity approval list. 
B. Dec 30, 2017 - Investigation began. Shut off email verification of
authenticity.
C. Jan 3, 2018 - Call with JP to investigate what he was seeing and
confirmed that all indirect accounts were potentially impacted.
D. Jan 3, 2018 - Fixed issue where constructed emails were showing as a
permitted address for authenticity verification.
E. Jan 5, 2018 - Invalidated all indirect orders' authenticity checks.
Started calling on verified numbers to confirm authenticity for impacted
accounts. 
F. Jan 6, 2018 - Narrowed scope to only identify customers impacted (where
the validation staff used a constructed email rather than a verified
number).
G. Jan 10, 2018 - This disclosure.
 
Ongoing: 
H. Reverification of all impacted accounts
I. Training of verification staff on permitted authenticity verification
 
3.Confirmation that your CA has stopped issuing TLS/SSL certificates
with the problem. 
 
Confirmed. Email verification of authenticity remains disabled until we can
ensure additional safeguards.
 
4.A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued. 
 
There are 3,437 orgs impacted, with a total of 5,067 certificates.  The
certificates were issued between December 1st and December 30th.
 
5.The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem. 
 
Will add to CT once we grab it all.  I will provide a list of affected
certificates in a separate email (it's big, so it was getting this post
moderated).
 
6.Explanation about how and why the mistakes were made or bugs
introduced, and how they avoided detection until now. 
 
In truth, it comes down to a short timeframe to implement the
Symantec-DigiCert system integration and properly train everyone we hired.
We are implementing lessons learned to correct this and improve security
overall as we migrate legacy Symantec practices and systems to DigiCert. In
this case, there are mitigating controls.  For example, these are mostly
existing Symantec certs that are migrating to the DigiCert backend. The
verification by Symantec previously means that the number of potentially
problematic certs is pretty low. There's also a mitigating factor that we
did not use method 1 to confirm domain control. In each case, someone from
the approved constructed emails had to sign off on the certificate before
issuance.  This is limited to OV certificates, meaning EV certificates were
not impacted. Despite the mitigating factors, we believe this is a
compliance issue, even though we believe the security risk is minimal.
 
7. List of steps your CA is taking to resolve the situation and ensure
such issuance will not be repeated in the future, accompanied with a
timeline of when your CA expects to accomplish these things. 
 
A. We clarified in the system what is required for an authenticity check. 
B. We removed email verification for authenticity checks until appropriate
new safeguards can be added.
C. We are re-validating 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Nick Lamb via dev-security-policy
On Wed, 10 Jan 2018 15:10:41 +0100
Patrick Figel via dev-security-policy
 wrote:

> A user on Hacker News brought up the possibility that the fairly
> popular DirectAdmin control panel might also demonstrate the
> problematic behaviour mentioned in your report[1].

Although arguably tangential to the purpose of m.d.s.policy, I think it
would be really valuable to understand what behaviours are actually out
there and in what sort of volumes.

I know from personal experience that my own popular host lets me create
web hosting for a 2LD I don't actually control. I had management
agreement to take control and began setting up the web site, but then
technical inertia meant control over the name was never actually
transferred. The site is still there, but in that case it obviously needs
an /etc/hosts override to visit from a normal web browser.

Would that host:

* Let me do this even if another of their customers was hosting that
  exact site ? If so, would mine sometimes "win" over theirs, perhaps if
  they temporarily disabled access or due to some third criteria like
  our usernames or seniority of account age ?

* Let me do this for sub-domains or sub-sub-domains of other customers,
  including perhaps ones which have a wildcard DNS entry so that "my"
  site would actually get served to ordinary users ?

* Let me do this for DNS names that can't exist (like *.acme.invalid,
  leading to the Let's Encrypt issue we started discussing) ?


I don't know the answer to any of those questions, but I think that
even if they're tangential to m.d.s.policy somebody needs to find out,
and not just for the company I happen to use.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Santhan Raj via dev-security-policy
On Wednesday, January 10, 2018 at 1:33:31 AM UTC-8, jo...@letsencrypt.org wrote:
> At approximately 5 p.m. Pacific time on January 9, 2018, we received a report 
> from Frans Rosén of Detectify outlining a method of exploiting some shared 
> hosting infrastructures to obtain certificates for domains he did not 
> control, by making use of the ACME TLS-SNI-01 challenge type. We quickly 
> confirmed the issue and mitigated it by entirely disabling TLS-SNI-01 
> validation in Let’s Encrypt. We’re grateful to Frans for finding this issue 
> and reporting it to us.
> 
> We’d like to describe the issue and our plans for possibly re-enabling 
> TLS-SNI-01 support.
> 
> Problem Summary
> 
> In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA) 
> validates a domain name by generating a random token and communicating it to 
> the ACME client. The ACME client uses that token to create a self-signed 
> certificate with a specific, invalid hostname (for example, 
> 773c7d.13445a.acme.invalid), and configures the web server on the domain name 
> being validated to serve that certificate. The ACME server then looks up the 
> domain name’s IP address, initiates a TLS connection, and sends the specific 
> .acme.invalid hostname in the SNI extension. If the response is a self-signed 
> certificate containing that hostname, the ACME client is considered to be in 
> control of the domain name, and will be allowed to issue certificates for it.
> 
> However, Frans noticed that at least two large hosting providers combine two 
> properties that together violate the assumptions behind TLS-SNI:
> 
> * Many users are hosted on the same IP address, and
> * Users have the ability to upload certificates for arbitrary names without 
> proving domain control.
> 
> When both are true of a hosting provider, an attack is possible. Suppose 
> example.com’s DNS is pointed at the same shared hosting IP address as a site 
> controlled by the attacker. The attacker can run an ACME client to get a 
> TLS-SNI-01 challenge, then install their .acme.invalid certificate on the 
> hosting provider. When the ACME server looks up example.com, it will connect 
> to the hosting provider’s IP address and use SNI to request the .acme.invalid 
> hostname. The hosting provider will serve the certificate uploaded by the 
> attacker. The ACME server will then consider the attacker’s ACME client 
> authorized to issue certificates for example.com, and be willing to issue a 
> certificate for example.com even though the attacker doesn’t actually control 
> it.
> 
> This issue only affects domain names that use hosting providers with the 
> above combination of properties. It is independent of whether the hosting 
> provider itself acts as an ACME client.
> 
> Our Plans
> 
> Shortly after the issue was reported, we disabled TLS-SNI-01 in Let’s 
> Encrypt. However, a large number of people and organizations use the 
> TLS-SNI-01 challenge type to get certificates. It’s important that we restore 
> service if possible, though we will only do so if we’re confident that the 
> TLS-SNI-01 challenge type is sufficiently secure.
> 
> At this time, we believe that the issue can be addressed by having certain 
> services providers implement stronger controls for domains hosted on their 
> infrastructure. We have been in touch with the providers we know to be 
> affected, and mitigations will start being deployed for their systems shortly.
> 
> Over the next 48 hours we will be building a list of vulnerable providers and 
> their associated IP addresses. Our tentative plan, once the list is 
> completed, is to re-enable the TLS-SNI-01 challenge type with vulnerable 
> providers blocked from using it.
> 
> We’re also going to be soliciting feedback on our plans from our community, 
> partners and other PKI stakeholders prior to re-enabling the TLS-SNI-01 
> challenge. There is a lot to consider here and we’re looking forward to 
> feedback.
> 
> We will post more information and details as our plans progress.

As others have mentioned, the transparency of the disclosure and the quick response 
are commendable. However, it doesn't mention anything about whether anyone has 
exploited this already. Have you started analyzing your existing certs to see 
if any may have been mis-issued? If so, how?

Thanks,
Santhan
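The challenge-hostname construction described in the quoted report can be sketched in a few lines. This is an illustrative sketch based on my reading of the ACME drafts of that era, which derive the name from the lowercase hex SHA-256 digest of the key authorization, split into two 32-character labels:

```python
import hashlib

def tls_sni_01_name(key_authorization: str) -> str:
    # Z is the lowercase hex SHA-256 digest of the key authorization
    # (token + "." + account key thumbprint).
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    # The self-signed certificate's SAN is Z split into two 32-character
    # labels under the reserved .acme.invalid TLD.
    return f"{z[:32]}.{z[32:64]}.acme.invalid"
```

The resulting name (e.g. 773c7d....13445a....acme.invalid) is what the ACME server sends in the SNI extension, which is the crux of the issue discussed in this thread.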


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Ryan Sleevi via dev-security-policy
On Wed, Jan 10, 2018 at 1:51 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I acknowledge that the TLS-SNI-02 improvements do eliminate certain risks
> of the TLS-SNI-01 validation method -- and they do at least restore a
> promise that the answering TLS infrastructure to which the validation
> request is being made has been modified/configured/affected by the party
> who requested the certificate / validation, there does remain a significant
> gap.  I'll discuss that below in my response to your commentary on the
> state of web hosting practices.
>

I think it's important to point out that these levels of technical
discussions are best directed to the IETF ACME WG, under the auspices of
the IETF NoteWell - https://datatracker.ietf.org/wg/acme/about/


> To the extent that this is true, I harbor significant concern that
> TLS-SNI-01 could responsibly return to use.
>
> I also see a possibility that the mitigations in TLS-SNI-02 may be
> ineffective in this case.  TLS-SNI-02 would prevent naive and automatic
> accidental success of validations by some infrastructure, but an attacker
> who can still create the proper zone in .acme.invalid and upload a custom
> certificate to be served for this zone would still be able to succeed at
> validation.
>

Can you explain what you mean by 'create a proper zone'? .invalid is an
explicitly reserved TLD.


> However, even that plan only actually gains security if the hosting
> infrastructure would generally apply protection for heretofore unknown
> names which are children of existing boarded named on another customer's
> account.  In other words, how likely is it that if I have a login at some
> hosting company, and I have boarded on my account a hosting zone that
> includes the labels www.example.com and example.com that a totally
> separate
> login would be allowed to prospectively create a zone called
> notreallyexample.example.com?  If that's likely or even non-rare, there's
> still a problem with the mechanism.
>
>
It is likely and non-rare (in fact, quite common as it turns out). There are
very few providers that match certificate uploads against domain
authorizations in any way. Note that this is further 'difficult' because it
would also require cloud providers to be aware of the tree-walking notion of
authorization domain name.

So I don't think this buys any improvement over the status quo, and
actually makes it considerably more complex and failure prone, due to the
cross-sectional lookups, versus the fact that .invalid is a reserved TLD.
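The "tree-walking notion of authorization domain name" Ryan mentions can be sketched roughly as follows. This is a deliberate simplification: a real implementation must stop at the registrable-domain/public-suffix boundary, which this toy version ignores:

```python
def authorization_domain_candidates(fqdn: str):
    # Walk up the DNS tree from the FQDN, stripping one leftmost label at a
    # time. A real implementation must stop at the registrable domain (the
    # public suffix boundary); stopping one label short of the TLD here is a
    # simplification that is wrong for multi-label suffixes like co.uk.
    labels = fqdn.split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1)]
```

A hosting provider wanting to match uploads against authorizations would have to check an uploaded name against every candidate a customer has boarded, which illustrates the cross-sectional lookups Ryan describes.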


Re: Changes to CA Program - Q1 2018

2018-01-10 Thread Kathleen Wilson via dev-security-policy

On 1/10/18 10:52 AM, Doug Beattie wrote:

Thanks Kathleen.  I only asked because you are trying to reduce the manpower for 
processing applications, and if a CA was already in the program there might not be a need 
to do as much.  But on the other hand, this forces us all to comply with that pesky set 
of questions in "CA/Forbidden or Problematic Practices" that we've ignored and 
forces a formal review of the CPS.


Correct, the root inclusion/update process is this way on purpose, so 
that CAs have to evaluate their practices and documentation, and fix 
their problems with compliance to Mozilla's policy and the BRs.


Thanks,
Kathleen



RE: Changes to CA Program - Q1 2018

2018-01-10 Thread Doug Beattie via dev-security-policy
Thanks Kathleen.  I only asked because you are trying to reduce the manpower 
for processing applications, and if a CA was already in the program there might 
not be a need to do as much.  But on the other hand, this forces us all to 
comply with that pesky set of questions in "CA/Forbidden or Problematic 
Practices" that we've ignored and forces a formal review of the CPS.

Doug

> -----Original Message-----
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Kathleen Wilson via dev-security-policy
> Sent: Wednesday, January 10, 2018 1:45 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: Changes to CA Program - Q1 2018
> 
> > Is the same process used for existing CAs that need to add a new root and
> new CAs applying for the first time?
> 
> Yes.
> 
>  From
> https://wiki.mozilla.org/CA/Application_Process#Process_Overview
> ""
> The same process is used to request:
> - Root certificate inclusion for all CAs, even if the CA already has root
> certificates included in Mozilla's root store
> - Turning on additional trust bits for an already-included root certificate
> - Enabling EV treatment for an already-included root certificate
> - Including a renewed version of an already-included root certificate ""
> 
> Kathleen


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 12:00 PM, Wayne Thayer  wrote:

>> There's a significant difference here.  At a minimum the original request
>> arrives on port 80 and with a proper Host: header identifying the target
>> website to be validated.  Yes, it's possible that your host redirects
>> that,
>> but presumably you the website at that address have some say or control
>> over that.  Furthermore, at a minimum the target being forwarded to still
>> has to have knowledge of a calculated challenge value to return to the
>> validator which the validator does not reveal in the process of raising
>> the
>> question.  A fact which arises from this is that the target was being
>> manipulated by the requestor of the validation -- a fact which some modes
>> of failure of the TLS-SNI-01 mechanism would not be able to assert.  The
>> TLS-SNI-01 validation process never even surfaces to the hosting
>> infrastructure just exactly what domain label is being validated.
>>
> Although the BRs allow method 6 to be performed over TLS, my
> understanding is that Let's Encrypt only supports the HTTP-01 mechanism on
> port 80 in order to prevent the exploit that Gerv described. Similarly, my
> understanding is that the updated TLS-SNI-02 mechanism prevents the attack
> that Matthew described.
>

I acknowledge that the TLS-SNI-02 improvements do eliminate certain risks
of the TLS-SNI-01 validation method -- and they do at least restore a
promise that the answering TLS infrastructure to which the validation
request is being made has been modified/configured/affected by the party
who requested the certificate / validation, there does remain a significant
gap.  I'll discuss that below in my response to your commentary on the
state of web hosting practices.


>>
>> Here again, I think we have a problem.  It's regarded as normal and
>> acceptable at many web host infrastructures to pre-stage sites for
>> domain-labels not yet in use to allow for development and test deployment.
>> Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
>> direct the "dev and test browser" to the right infrastructure for the
>> pending label.  It will be an uphill battle to get arbitrary web hosts to
>> implement any one of the mitigations you've set out.  Especially when it
>> reduces a functionality some of their clients like and doesn't seem to get
>> them any tangible benefit.
>>
> I agree with this point. It's common and by design for shared hosting
> environments to allow sites to exist without any sort of domain name
> validation.
>
>

To the extent that this is true, I harbor significant concern that
TLS-SNI-01 could responsibly return to use.

I also see a possibility that the mitigations in TLS-SNI-02 may be
ineffective in this case.  TLS-SNI-02 would prevent naive and automatic
accidental success of validations by some infrastructure, but an attacker
who can still create the proper zone in .acme.invalid and upload a custom
certificate to be served for this zone would still be able to succeed at
validation.

My belief is that THAT risk could be further hedged by modifying the
mechanism, say a TLS-SNI-03, to incorporate changes such that the only SAN
dnsName in the certificate is a well-known child of the domain label to be
validated [ex: well-known-acme-pki.example.com for example.com validation]
and that another certificate property (description, org, org unit, ???) be
stuffed with a signed challenge response calculated over some derivation of
the challenge token generated by the CA and transmitted to the requestor,
together with the requestor's account key.
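A rough sketch of the binding proposed above. The fixed name, the function, and the use of HMAC as a stand-in for a real signature are all illustrative assumptions, not part of any specification:

```python
import hashlib
import hmac

def sketch_tls_sni_03(domain: str, ca_token: str, account_key: bytes):
    # Hypothetical sketch: the certificate's only SAN is a fixed, well-known
    # child of the domain under validation, and a second certificate field
    # carries a response binding the CA's token to the requestor's account
    # key. HMAC stands in here for a real signature over the token.
    san = f"well-known-acme-pki.{domain}"
    response = hmac.new(account_key, ca_token.encode("ascii"),
                        hashlib.sha256).hexdigest()
    return san, response
```

Because the SAN is a child of the domain being validated, a hosting provider could at least recognize which customer's zone a challenge certificate claims to belong to, unlike an opaque .acme.invalid name.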

However, even that plan only actually gains security if the hosting
infrastructure would generally apply protection for heretofore unknown
names which are children of existing boarded names on another customer's
account.  In other words, how likely is it that if I have a login at some
hosting company, and I have boarded on my account a hosting zone that
includes the labels www.example.com and example.com that a totally separate
login would be allowed to prospectively create a zone called
notreallyexample.example.com?  If that's likely or even non-rare, there's
still a problem with the mechanism.


Re: Changes to CA Program - Q1 2018

2018-01-10 Thread Kathleen Wilson via dev-security-policy

Is the same process used for existing CAs that need to add a new root and new 
CAs applying for the first time?


Yes.

From
https://wiki.mozilla.org/CA/Application_Process#Process_Overview
""
The same process is used to request:
- Root certificate inclusion for all CAs, even if the CA already has 
root certificates included in Mozilla's root store

- Turning on additional trust bits for an already-included root certificate
- Enabling EV treatment for an already-included root certificate
- Including a renewed version of an already-included root certificate
""

Kathleen


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Wayne Thayer via dev-security-policy
On Wed, Jan 10, 2018 at 10:39 AM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, Jan 10, 2018 at 11:24 AM, Gervase Markham via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> >
> > I don't think that's true. If your hosting provider allows other sites
> > to respond to HTTP requests for your domain, there's a similar
> > vulnerability in the HTTP-01 checker. One configuration where this can
> > happen is when multiple sites share an IP but only one gets port 443
> > (i.e. the pre-SNI support situation), and it's not you.
> >
> >
> There's a significant difference here.  At a minimum the original request
> arrives on port 80 and with a proper Host: header identifying the target
> website to be validated.  Yes, it's possible that your host redirects that,
> but presumably you the website at that address have some say or control
> over that.  Furthermore, at a minimum the target being forwarded to still
> has to have knowledge of a calculated challenge value to return to the
> validator which the validator does not reveal in the process of raising the
> question.  A fact which arises from this is that the target was being
> manipulated by the requestor of the validation -- a fact which some modes
> of failure of the TLS-SNI-01 mechanism would not be able to assert.  The
> TLS-SNI-01 validation process never even surfaces to the hosting
> infrastructure just exactly what domain label is being validated.
>
Although the BRs allow method 6 to be performed over TLS, my understanding
is that Let's Encrypt only supports the HTTP-01 mechanism on port 80 in
order to prevent the exploit that Gerv described. Similarly, my
understanding is that the updated TLS-SNI-02 mechanism prevents the attack
that Matthew described.

>
> > Or, if an email provider allows people to claim any of the special email
> > addresses, there's a similar vulnerability in email-based methods.
> >
>
> Clearly those mechanisms have that well known risk for a very long time
> now.  Certainly, I have no doubt that one can still today bootstrap their
> way to a bad certificate via these mechanisms.  I note that LetsEncrypt
> and ACME chose to eschew those methods. I admit to merely presuming that
> they chose not to implement them, at least in part, due to those risks.
>
>
> > The "don't allow acme.invalid" mitigation is the easiest one to
> > implement, but another perfectly good one would be "don't allow people
> > to deploy certs for sites they don't own or control", or even "don't
> > allow people to deploy certs for sites your other customers own or
> > control". Put that way, that doesn't seem like an unreasonable
> > requirement, does it?
> >
>
> Here again, I think we have a problem.  It's regarded as normal and
> acceptable at many web host infrastructures to pre-stage sites for
> domain-labels not yet in use to allow for development and test deployment.
> Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
> direct the "dev and test browser" to the right infrastructure for the
> pending label.  It will be an uphill battle to get arbitrary web hosts to
> implement any one of the mitigations you've set out.  Especially when it
> reduces a functionality some of their clients like and doesn't seem to get
> them any tangible benefit.
>
I agree with this point. It's common and by design for shared hosting
environments to allow sites to exist without any sort of domain name
validation.


> In the course of adopting the 10 blessed methods, did any of the methods
> move forward with the expectation that active effort on the part of non-CA
> participants versus the status quo would be required in order to ensure the
> continuing reliability of the method?
>

In my opinion, adoption of the 10 blessed methods was only an effort to
document what CAs were already doing in practice so that the catch-all "any
other method" could be removed. There is more work to be done, as can be
seen in the current discussion of method #1 on the CAB Forum Public list.



Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 11:24 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> I don't think that's true. If your hosting provider allows other sites
> to respond to HTTP requests for your domain, there's a similar
> vulnerability in the HTTP-01 checker. One configuration where this can
> happen is when multiple sites share an IP but only one gets port 443
> (i.e. the pre-SNI support situation), and it's not you.
>
>
There's a significant difference here.  At a minimum the original request
arrives on port 80 and with a proper Host: header identifying the target
website to be validated.  Yes, it's possible that your host redirects that,
but presumably you the website at that address have some say or control
over that.  Furthermore, at a minimum the target being forwarded to still
has to have knowledge of a calculated challenge value to return to the
validator which the validator does not reveal in the process of raising the
question.  A fact which arises from this is that the target was being
manipulated by the requestor of the validation -- a fact which some modes
of failure of the TLS-SNI-01 mechanism would not be able to assert.  The
TLS-SNI-01 validation process never even surfaces to the hosting
infrastructure just exactly what domain label is being validated.
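For contrast, the HTTP-01 comparison described above can be sketched as follows. The thumbprint computation loosely follows RFC 7638; this is an illustrative sketch, not Let's Encrypt's actual implementation:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    # Loosely per RFC 7638: SHA-256 over the JSON of the key's required
    # members, keys sorted, no whitespace, base64url without padding.
    # Assumes jwk already contains only the required members.
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def http_01_matches(body: str, token: str, jwk: dict) -> bool:
    # The validator fetches http://<domain>/.well-known/acme-challenge/<token>
    # over port 80 and compares the body to token "." thumbprint. The target
    # must already know the key authorization to answer -- the point made
    # above: the question does not contain its own answer.
    return body.strip() == f"{token}.{jwk_thumbprint(jwk)}"
```

Note that the validator never reveals the thumbprint in the request, so a host that merely echoes what it is asked for cannot pass HTTP-01 the way it can pass TLS-SNI-01.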


> Or, if an email provider allows people to claim any of the special email
> addresses, there's a similar vulnerability in email-based methods.
>

Clearly those mechanisms have that well known risk for a very long time
now.  Certainly, I have no doubt that one can still today bootstrap their
way to a bad certificate via these mechanisms.  I note  that LetsEncrypt
and ACME chose to eschew those methods. I admit to merely presuming that
those chose not to implement, at least in part, due to those risks.


> The "don't allow acme.invalid" mitigation is the easiest one to
> implement, but another perfectly good one would be "don't allow people
> to deploy certs for sites they don't own or control", or even "don't
> allow people to deploy certs for sites your other customers own or
> control". Put that way, that doesn't seem like an unreasonable
> requirement, does it?
>

Here again, I think we have a problem.  It's regarded as normal and
acceptable at many web host infrastructures to pre-stage sites for
domain-labels not yet in use to allow for development and test deployment.
Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
direct the "dev and test browser" to the right infrastructure for the
pending label.  It will be an uphill battle to get arbitrary web hosts to
implement any one of the mitigations you've set out.  Especially when it
reduces a functionality some of their clients like and doesn't seem to get
them any tangible benefit.

In the course of adopting the 10 blessed methods, did any of the methods
move forward with the expectation that active effort on the part of non-CA
participants versus the status quo would be required in order to ensure the
continuing reliability of the method?


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Gervase Markham via dev-security-policy
On 10/01/18 17:04, Matthew Hardeman wrote:
> That seems remarkably deficient.  No other validation mechanism which is
> accepted by the community relies upon specific preventative behavior by any
> number of random hosting companies on the internet.

I don't think that's true. If your hosting provider allows other sites
to respond to HTTP requests for your domain, there's a similar
vulnerability in the HTTP-01 checker. One configuration where this can
happen is when multiple sites share an IP but only one gets port 443
(i.e. the pre-SNI support situation), and it's not you.

Or, if an email provider allows people to claim any of the special email
addresses, there's a similar vulnerability in email-based methods.

The "don't allow acme.invalid" mitigation is the easiest one to
implement, but another perfectly good one would be "don't allow people
to deploy certs for sites they don't own or control", or even "don't
allow people to deploy certs for sites your other customers own or
control". Put that way, that doesn't seem like an unreasonable
requirement, does it?

Gerv


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
I applaud LetsEncrypt for disclosing rapidly and thoroughly.

During the period after the initial announcement and before the full
report, I quickly read the ACME spec portion pertaining to TLS-SNI-01.

I had not previously read the details of that validation method, as that
method was not one I intended to utilize.

Upon reading, I was surprised that the mechanism had survived scrutiny to
make it through to industry adoption and production use.

There exists an unambiguous comparative deficiency between the TLS-SNI-01
validation mechanism and every other validation mechanism presently
utilized by LetsEncrypt:

Specifically, the portion of the protocol which validates connection to the
infrastructure that responds for a given domain label presents the entire
value of the correct "answer" to the challenge within the question itself
(the TLS SNI name indicated to the server at which the DNS says the
domain label being tested resides).

The result of this is that we can definitively assert that the TLS-SNI-01
protocol provides no evidence that the party who requested the validation
(and would receive the certificate) is the party responsible for the answer
which arises from the infrastructure that the DNS says is the right
infrastructure for a given domain label.

Furthermore, it would not be shocking if a plausible design for a load
balancer or hosting infrastructure were to generate, on demand, a self-signed
or corporate-CA-signed certificate for a domain label heretofore unknown to
the infrastructure, as surfaced in the TLS SNI name value.  That would "just
work" in terms of validating any TLS-SNI-01 challenge on behalf of any
outside party who happens to know that a given domain label is directed in
the DNS to infrastructure of that behavioral mode.
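That failure mode can be modeled in a few lines. This is a toy model of the hypothesized behavior, not any provider's actual code:

```python
def echoing_provider_cert_san(sni_name: str) -> str:
    # Toy model of the hypothesized infrastructure: for an unknown SNI name
    # it mints a fresh self-signed certificate whose SAN is the SNI name
    # itself.
    return sni_name

def tls_sni_01_check(expected_name: str, served_san: str) -> bool:
    # The CA's check: does the returned self-signed certificate contain the
    # .acme.invalid name it asked for? Because the "question" (the SNI value)
    # already contains the full "answer", the echoing provider always passes.
    return served_san == expected_name

challenge = "773c7d.13445a.acme.invalid"  # example name from the disclosure
assert tls_sni_01_check(challenge, echoing_provider_cert_san(challenge))
```

No action by the party requesting validation is ever observed by the CA in this scenario, which is the comparative deficiency identified above.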

LetsEncrypt has been such a shining beacon of good practice in this space
that I feel that many -- certainly it is my own opinion -- view LetsEncrypt
as a "best practices" model CA for domain validation.  The continuance of
the TLS-SNI-01 validation method, to my mind, would be a marked departure
from that position.

I believe LetsEncrypt should give careful consideration to the reputational
risks involved.  Now that the mode of the problem with this method is in
the public mind, there will be detractors looking to achieve a publishable
mis-issuance.  LetsEncrypt's proposed plan to work with hosting service
providers on the Internet seems naive in that light.  Participants in that
market come and go all the time.  If the plan for returning TLS-SNI-01 to
sufficient integrity for reliance by the WebPKI requires affirmative effort
on the part of an uncountable number of current and future participants in
the hosting space...  I do not mean to be rude, but are you saying this
with a straight face?

Just my thoughts...

Matt Hardeman



On Wed, Jan 10, 2018 at 3:33 AM, josh--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> At approximately 5 p.m. Pacific time on January 9, 2018, we received a
> report from Frans Rosén of Detectify outlining a method of exploiting some
> shared hosting infrastructures to obtain certificates for domains he did
> not control, by making use of the ACME TLS-SNI-01 challenge type. We
> quickly confirmed the issue and mitigated it by entirely disabling
> TLS-SNI-01 validation in Let’s Encrypt. We’re grateful to Frans for finding
> this issue and reporting it to us.
>
> We’d like to describe the issue and our plans for possibly re-enabling
> TLS-SNI-01 support.
>
> Problem Summary
>
> In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA)
> validates a domain name by generating a random token and communicating it
> to the ACME client. The ACME client uses that token to create a self-signed
> certificate with a specific, invalid hostname (for example,
> 773c7d.13445a.acme.invalid), and configures the web server on the domain
> name being validated to serve that certificate. The ACME server then looks
> up the domain name’s IP address, initiates a TLS connection, and sends the
> specific .acme.invalid hostname in the SNI extension. If the response is a
> self-signed certificate containing that hostname, the ACME client is
> considered to be in control of the domain name, and will be allowed to
> issue certificates for it.
>
> However, Frans noticed that at least two large hosting providers combine
> two properties that together violate the assumptions behind TLS-SNI:
>
> * Many users are hosted on the same IP address, and
> * Users have the ability to upload certificates for arbitrary names
> without proving domain control.
>
> When both are true of a hosting provider, an attack is possible. Suppose
> example.com’s DNS is pointed at the same shared hosting IP address as a
> site controlled by the attacker. The attacker can run an ACME client to get
> a TLS-SNI-01 challenge, then install their .acme.invalid certificate on the
> hosting provider. When the ACME server looks up example.com, it will
> 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jan 10, 2018 at 10:35 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Hosting providers can simply refuse to accept uploads of any certificate
> which contains names ending in "acme.invalid".
>
> AIUI, this is Let's Encrypt's recommended mitigation method.
>
> Gerv
>
>
That seems remarkably deficient.  No other validation mechanism which is
accepted by the community relies upon specific preventative behavior by any
number of random hosting companies on the internet.

Why would that suffice?


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Jakob Bohm via dev-security-policy

On 10/01/2018 16:38, ssimon.g...@gmail.com wrote:

On Wednesday, January 10, 2018 at 3:34:51 PM UTC+1, Jakob Bohm wrote:

Depending on exactly how the shared web server is misconfigured


I don't think the web server is misconfigured: serving a self-signed cert for 
any domain - even one that I don't own - is something that is absolutely valid 
and done for test purposes.


Enforcement on shared hosting systems would be easier if the TLS-SNI-01
ACME mechanism used names such as
1234556-24356476._acme.requested.domain.example.com
since that would allow hosting providers to restrict certificate uploads
that claim to be for other customers' domains.  Maybe the name form used
by TLS-SNI-02 could be the same as for the DNS-01 challenge.


I think that the assumptions TLS-SNI-01/2 make are not valid:
- it assumes that you control the IP address the domain resolves to, AND
- it assumes that the TLS certificate returned by the web server responding on 
that IP is your own.

Those two assumptions are not valid, as SNI is designed exactly for the use 
case of multiple domains on the same IP, and shared hosts are just providers 
for that use case.

IMHO, returning a self-signed cert from the IP address that domain resolves to 
should not be proof of ownership for that domain.



It is (with this special exception) as much proof as serving a
magic file from the webserver at this IP address.

The two possible shared hosting configurations causing problems are:

a) The ability to upload a certificate for *another user's* domain.

b) The ability to upload a certificate for a non-hosted domain.

(b) is actually a valid thing to do, especially if the certificate
 contains SAN values for both the uploader's domain and a
 non-conflicting domain (that the uploader might be hosting
 elsewhere).  Which is why the TLS-SNI-01 test, using a non-existent
 (and thus never hosted) domain, fails badly on shared hosting.

Enforcing restrictions against (a) also prevents existing attacks, such as
uploading a less-trusted certificate for another user as a local DoS
attack.

Adding a special ban just to please Let's Encrypt (and the newly launched
ACME providers) is, on the other hand, a classic example of an arbitrary
annoyance for hosting environments that do not use those providers at the
hoster level.  I fear that many hosting environments will be belligerent
and insist that they have no obligation to honor the request.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Gervase Markham via dev-security-policy
On 10/01/18 14:34, Jakob Bohm wrote:
> Enforcement on shared hosting systems would be easier if the TLS-SNI-01
> ACME mechanism used names such as
>   1234556-24356476._acme.requested.domain.example.com
> since that would allow hosting providers to restrict certificate uploads
> that claim to be for other customers' domains.

Hosting providers can simply refuse to accept uploads of any certificate
which contains names ending in "acme.invalid".

AIUI, this is Let's Encrypt's recommended mitigation method.

Gerv
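A provider-side filter along these lines can be sketched as follows (an
illustrative sketch only, not an official recommendation; a real control
panel would apply it to every name in an uploaded certificate's CN and SAN
list, and the function names here are hypothetical):

```python
def is_acme_invalid(name: str) -> bool:
    """True if a requested vhost/certificate name falls under the
    acme.invalid suffix used by TLS-SNI-01 challenge certificates."""
    labels = name.lower().rstrip(".").split(".")
    return labels[-2:] == ["acme", "invalid"]

def reject_upload(names: list[str]) -> bool:
    # Refuse the whole upload if any requested name is a TLS-SNI-01
    # challenge name, since the uploader has not proven control of it.
    return any(is_acme_invalid(n) for n in names)
```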



Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread ssimon.gdev--- via dev-security-policy
On Wednesday, January 10, 2018 at 3:34:51 PM UTC+1, Jakob Bohm wrote:
> Depending on exactly how the shared web server is misconfigured

I don't think the web server is misconfigured: serving a self-signed cert for 
any domain - even one that I don't own - is something that is absolutely valid 
and done for test purposes.

> Enforcement on shared hosting systems would be easier if the TLS-SNI-01
> ACME mechanism used names such as
>1234556-24356476._acme.requested.domain.example.com
> since that would allow hosting providers to restrict certificate uploads
> that claim to be for other customers' domains.  Maybe the name form used
> by TLS-SNI-02 could be the same as for the DNS-01 challenge.

I think that the assumptions TLS-SNI-01/2 make are not valid:
- it assumes that you control the IP address the domain resolves to, AND
- it assumes that the TLS certificate returned by the web server responding on 
that IP is your own.

Those two assumptions are not valid, as SNI is designed exactly for the use 
case of multiple domains on the same IP, and shared hosts are just providers 
for that use case.

IMHO, returning a self-signed cert from the IP address that domain resolves to 
should not be proof of ownership for that domain.


Re: Changes to CA Program - Q1 2018

2018-01-10 Thread Gervase Markham via dev-security-policy
On 10/01/18 00:23, Kathleen Wilson wrote:
> I would like to thank Aaron Wu for all of his help on our CA Program,
> and am sorry to say that his last day at Mozilla will be January 12. I
> have appreciated all of Aaron’s work, and it has been a pleasure to work
> with him.

Seconded.

> I think this is a good time for us to make some changes to Mozilla’s
> Root Inclusion Process to improve the effectiveness of the public
> discussion phase by performing the detailed CP/CPS review prior to the
> public discussion. The goal of this change is to focus the discussion
> period on gathering community input and to allow the process to continue
> when no objections are raised.

This seems fine to me.

Gerv


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Jakob Bohm via dev-security-policy

On 10/01/2018 14:15, Kurt Roeckx wrote:

On Wed, Jan 10, 2018 at 01:33:20AM -0800, josh--- via dev-security-policy wrote:

* Users have the ability to upload certificates for arbitrary names without 
proving domain control.


So a user can always take over the domain of another user on
those providers just by installing a (self-signed) certificate?
I guess it works easiest if the other user just doesn't have SSL.




Depending on exactly how the shared web server is misconfigured, it
might still direct the traffic for the actual (real) hostnames of other
users to the correct user account, even while matching the SNI to the
rogue certificate.  This boils down to the fact that many web servers
use neither the client-supplied SNI value nor the list of certificate
SAN DNS values as an alternative / override / filter for the HTTP/1.x
Host: header and/or the full URL in the HTTP request line.

It is also quite possible that a number of affected hosting systems will
only allow this for domains not already hosted by another user (such as
acme.invalid).

Enforcement on shared hosting systems would be easier if the TLS-SNI-01
ACME mechanism used names such as
  1234556-24356476._acme.requested.domain.example.com
since that would allow hosting providers to restrict certificate uploads
that claim to be for other customers' domains.  Maybe the name form used
by TLS-SNI-02 could be the same as for the DNS-01 challenge.
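With a name form like the one suggested above (hypothetical, since
TLS-SNI-01 did not actually work this way), the provider-side check
becomes a simple suffix match: accept the upload only when the challenge
name sits under the uploading customer's own domain. A rough sketch:

```python
def challenge_name_under_customer(challenge_name: str,
                                  customer_domain: str) -> bool:
    """Hypothetical check for a name form such as
    1234556-24356476._acme.<customer domain>: the challenge label must
    sit directly under the domain the uploading customer controls."""
    name = challenge_name.lower().rstrip(".")
    suffix = "._acme." + customer_domain.lower().rstrip(".")
    return name.endswith(suffix)
```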


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Patrick Figel via dev-security-policy
First of all: Thanks for the transparency, the detailed report and quick
response to this incident.

A user on Hacker News brought up the possibility that the fairly popular
DirectAdmin control panel might also demonstrate the problematic
behaviour mentioned in your report[1].

I successfully reproduced this on a shared web hosting provider that
uses DirectAdmin. The control panel allowed me to set the vhost domain
to a value like "12345.54321.acme.invalid" and to deploy a self-signed
certificate that included this domain. The web server responded with
said certificate given the following request:

openssl s_client -servername 12345.54321.acme.invalid -connect 192.0.2.0:443 -showcerts
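The same probe can be sketched in Python (an illustrative equivalent of the
openssl invocation above, not production code; the DER-substring check is a
heuristic that works because SAN dNSName entries are ASCII IA5Strings
embedded verbatim in the DER):

```python
import socket
import ssl

def cert_mentions_name(der: bytes, name: str) -> bool:
    # Heuristic: SAN dNSName values are ASCII IA5Strings, so the
    # name's bytes can be searched for verbatim in the DER encoding.
    return name.encode("ascii") in der

def probe_sni(ip: str, sni_name: str, port: int = 443) -> bool:
    """Connect to `ip`, send `sni_name` in the SNI extension, and report
    whether the returned certificate appears to contain that name."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False  # the challenge cert is self-signed on purpose
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni_name) as tls:
            der = tls.getpeercert(binary_form=True)
    return der is not None and cert_mentions_name(der, sni_name)
```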

I did not perform an end-to-end test against a real ACME server, but my
understanding is that this would be enough to issue a certificate for
any other domain on the same IP address.

I couldn't find any public data on DirectAdmin's market share, but I
would expect a fairly large number of domains to be affected.

It might also be worth investigating whether other control panels are
similarly affected.

Patrick

[1]: https://news.ycombinator.com/item?id=16114181

On 10.01.18 10:33, josh--- via dev-security-policy wrote:
> At approximately 5 p.m. Pacific time on January 9, 2018, we received a report 
> from Frans Rosén of Detectify outlining a method of exploiting some shared 
> hosting infrastructures to obtain certificates for domains he did not 
> control, by making use of the ACME TLS-SNI-01 challenge type. We quickly 
> confirmed the issue and mitigated it by entirely disabling TLS-SNI-01 
> validation in Let’s Encrypt. We’re grateful to Frans for finding this issue 
> and reporting it to us.
> 
> We’d like to describe the issue and our plans for possibly re-enabling 
> TLS-SNI-01 support.
> 
> Problem Summary
> 
> In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA) 
> validates a domain name by generating a random token and communicating it to 
> the ACME client. The ACME client uses that token to create a self-signed 
> certificate with a specific, invalid hostname (for example, 
> 773c7d.13445a.acme.invalid), and configures the web server on the domain name 
> being validated to serve that certificate. The ACME server then looks up the 
> domain name’s IP address, initiates a TLS connection, and sends the specific 
> .acme.invalid hostname in the SNI extension. If the response is a self-signed 
> certificate containing that hostname, the ACME client is considered to be in 
> control of the domain name, and will be allowed to issue certificates for it.
> 
> However, Frans noticed that at least two large hosting providers combine two 
> properties that together violate the assumptions behind TLS-SNI:
> 
> * Many users are hosted on the same IP address, and
> * Users have the ability to upload certificates for arbitrary names without 
> proving domain control.
> 
> When both are true of a hosting provider, an attack is possible. Suppose 
> example.com’s DNS is pointed at the same shared hosting IP address as a site 
> controlled by the attacker. The attacker can run an ACME client to get a 
> TLS-SNI-01 challenge, then install their .acme.invalid certificate on the 
> hosting provider. When the ACME server looks up example.com, it will connect 
> to the hosting provider’s IP address and use SNI to request the .acme.invalid 
> hostname. The hosting provider will serve the certificate uploaded by the 
> attacker. The ACME server will then consider the attacker’s ACME client 
> authorized to issue certificates for example.com, and be willing to issue a 
> certificate for example.com even though the attacker doesn’t actually control 
> it.
> 
> This issue only affects domain names that use hosting providers with the 
> above combination of properties. It is independent of whether the hosting 
> provider itself acts as an ACME client.
> 
> Our Plans
> 
> Shortly after the issue was reported, we disabled TLS-SNI-01 in Let’s 
> Encrypt. However, a large number of people and organizations use the 
> TLS-SNI-01 challenge type to get certificates. It’s important that we restore 
> service if possible, though we will only do so if we’re confident that the 
> TLS-SNI-01 challenge type is sufficiently secure.
> 
> At this time, we believe that the issue can be addressed by having certain 
> services providers implement stronger controls for domains hosted on their 
> infrastructure. We have been in touch with the providers we know to be 
> affected, and mitigations will start being deployed for their systems shortly.
> 
> Over the next 48 hours we will be building a list of vulnerable providers and 
> their associated IP addresses. Our tentative plan, once the list is 
> completed, is to re-enable the TLS-SNI-01 challenge type with vulnerable 
> providers blocked from using it.
> 
> We’re also going to be soliciting feedback on our plans from our community, 
> partners and other PKI stakeholders prior to 

Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Dmitry Belyavsky via dev-security-policy
Hello,

On Wed, Jan 10, 2018 at 4:15 PM, Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, Jan 10, 2018 at 01:33:20AM -0800, josh--- via dev-security-policy
> wrote:
> > * Users have the ability to upload certificates for arbitrary names
> without proving domain control.
>
> So a user can always take over the domain of another user on
> those providers just by installing a (self-signed) certificate?
> I guess it works easiest if the other user just doesn't have SSL.
>

If SSL is off, the hosting provider may not include any SSL-related
directives in the web server's config on that machine at all.


-- 
SY, Dmitry Belyavsky


Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Kurt Roeckx via dev-security-policy
On Wed, Jan 10, 2018 at 01:33:20AM -0800, josh--- via dev-security-policy wrote:
> * Users have the ability to upload certificates for arbitrary names without 
> proving domain control.

So a user can always take over the domain of another user on 
those providers just by installing a (self-signed) certificate?
I guess it works easiest if the other user just doesn't have SSL.


Kurt



RE: Changes to CA Program - Q1 2018

2018-01-10 Thread Doug Beattie via dev-security-policy
Hi Kathleen,

Is the same process used for existing CAs that need to add a new root and new 
CAs applying for the first time?  

Doug

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Kathleen
> Wilson via dev-security-policy
> Sent: Tuesday, January 9, 2018 7:24 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Changes to CA Program - Q1 2018
> 
> All,
> 
> I would like to thank Aaron Wu for all of his help on our CA Program, and am
> sorry to say that his last day at Mozilla will be January 12. I have 
> appreciated all
> of Aaron’s work, and it has been a pleasure to work with him.
> 
> I will be re-assigning all of the root inclusion/update Bugzilla Bugs back to 
> me,
> and I will take back responsibility for the high-level verification of the CA-
> provided data for root inclusion/update requests.
> I will also take back responsibility for verifying CA annual updates, and we 
> will
> continue to work to improve that process and automation via the CCADB.
> 
> Wayne Thayer, Gerv Markham, and Ryan Sleevi have already taken
> responsibility for the CA Incident bugs
> (https://wiki.mozilla.org/CA/Incident_Dashboard). Thankfully, many of you
> members of the CA Community are helping with this effort.
> 
> Wayne and Devon O’Brien will take responsibility for ensuring that thorough
> reviews of CA root inclusion/update requests happen (see below), and Wayne
> will be responsible for the discussion phase of CA root inclusion/update
> requests. We greatly appreciate all of the input that you all provide during 
> the
> discussions of these requests, and are especially grateful for the thorough
> reviews that have been performed and documented, with special thanks to
> Ryan Sleevi, Andrew Whalley, and Devon O’Brien.
> 
> I think this is a good time for us to make some changes to Mozilla’s Root
> Inclusion Process to improve the effectiveness of the public discussion phase 
> by
> performing the detailed CP/CPS review prior to the public discussion. The 
> goal of
> this change is to focus the discussion period on gathering community input and
> to allow the process to continue when no objections are raised.
> 
> As such, I propose that we make the following changes to
> https://wiki.mozilla.org/CA/Application_Process#Process_Overview
> 
> ~~ PROPOSED CHANGES ~~
> 
> Step 1: A representative of the CA submits the request via Bugzilla and 
> provides
> the information a listed in https://wiki.mozilla.org/CA/Information_Checklist.
> 
> * Immediate change: None
> 
> * Future change: CAs will directly input their Information Checklist data 
> into the
> CCADB.
> All root inclusion/update requests will begin with a Bugzilla Bug, as we do 
> today.
> However, we will create a process by which CAs will be responsible for 
> entering
> and updating their own data in the CCADB for their request.
> 
> Step 2: A representative of Mozilla verifies the information provided by the 
> CA.
> 
> * Immediate change: None
> This will continue to be a high-level review to make sure that all of the 
> required
> data has been provided, per the Information Checklist, and that the required
> tests have been performed.
> 
> * Future change: Improvements/automation in CCADB for verifying this data.
> 
> Step 3: A representative of Mozilla adds the request to the queue for public
> discussion.
> 
> * Immediate change: Replace this step as follows.
> NEW Step 3: A representative of Mozilla or of the CA Community (as agreed by a
> Mozilla representative) thoroughly reviews the CA’s documents, and adds a
> Comment in the Bugzilla Bug about their findings.
> If the CA has everything in order, then the Comment will be that the request
> may proceed, and the request will be added to the queue for public discussion.
> Otherwise the Comment will list actions that the CA must complete. This may
> include, but is not limited to fixing certificate content, updating process,
> updating the CP/CPS, and obtaining new audit statements. The list of actions 
> will
> be categorized into one of the following 3 groups:
>--- 1: Must be completed before this request may proceed.
>--- 2: Must be completed before this request may be approved, but the 
> request
> may continue through the public discussion step in parallel with the CA
> completing their action items.
>--- 3: Must be completed before the CA’s next annual audit, but the request
> may continue through the rest of the approval/inclusion process.
> 
> Step 4: Anyone interested in the CA's application participates in discussions 
> of CA
> requests currently in discussion in the mozilla.dev.security.policy forum.
> 
> * Immediate Change: Delete this step from the wiki page, because it is a 
> general
> statement that does not belong here.
> 
> Step 5: When the application reaches the head of the queue, a representative 
> of
> Mozilla starts the public discussion 

Re: Potential problem with ACME TLS-SNI-01 validation

2018-01-10 Thread Gervase Markham via dev-security-policy
On 10/01/18 02:26, j...@letsencrypt.org wrote:
> We've received a credible report of a problem with ACME TLS-SNI-01 validation 
> which could allow people to get certificates they should not be able to get. 
> While we investigate further we have disabled tls-sni-01 validation.
> 
> We'll post more information soon.

https://community.letsencrypt.org/t/2018-01-09-issue-with-tls-sni-01-and-shared-hosting-infrastructure/49996

Gerv


2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread josh--- via dev-security-policy
At approximately 5 p.m. Pacific time on January 9, 2018, we received a report 
from Frans Rosén of Detectify outlining a method of exploiting some shared 
hosting infrastructures to obtain certificates for domains he did not control, 
by making use of the ACME TLS-SNI-01 challenge type. We quickly confirmed the 
issue and mitigated it by entirely disabling TLS-SNI-01 validation in Let’s 
Encrypt. We’re grateful to Frans for finding this issue and reporting it to us.

We’d like to describe the issue and our plans for possibly re-enabling 
TLS-SNI-01 support.

Problem Summary

In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA) validates 
a domain name by generating a random token and communicating it to the ACME 
client. The ACME client uses that token to create a self-signed certificate 
with a specific, invalid hostname (for example, 773c7d.13445a.acme.invalid), 
and configures the web server on the domain name being validated to serve that 
certificate. The ACME server then looks up the domain name’s IP address, 
initiates a TLS connection, and sends the specific .acme.invalid hostname in 
the SNI extension. If the response is a self-signed certificate containing that 
hostname, the ACME client is considered to be in control of the domain name, 
and will be allowed to issue certificates for it.
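For reference, the .acme.invalid name is not chosen freely by the client:
per the ACME draft's tls-sni-01 description (as we understand it), it is
derived from the hex SHA-256 digest of the challenge's key authorization,
split into two 32-character labels. A minimal sketch of that derivation:

```python
import hashlib

def tls_sni_01_name(key_authorization: str) -> str:
    """Derive the .acme.invalid SNI name for a TLS-SNI-01 challenge:
    Z = lowercase hex SHA-256 of the key authorization, and the SAN is
    Z[0:32] + "." + Z[32:64] + ".acme.invalid" (per the ACME draft)."""
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return f"{z[:32]}.{z[32:]}.acme.invalid"
```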

However, Frans noticed that at least two large hosting providers combine two 
properties that together violate the assumptions behind TLS-SNI:

* Many users are hosted on the same IP address, and
* Users have the ability to upload certificates for arbitrary names without 
proving domain control.

When both are true of a hosting provider, an attack is possible. Suppose 
example.com’s DNS is pointed at the same shared hosting IP address as a site 
controlled by the attacker. The attacker can run an ACME client to get a 
TLS-SNI-01 challenge, then install their .acme.invalid certificate on the 
hosting provider. When the ACME server looks up example.com, it will connect to 
the hosting provider’s IP address and use SNI to request the .acme.invalid 
hostname. The hosting provider will serve the certificate uploaded by the 
attacker. The ACME server will then consider the attacker’s ACME client 
authorized to issue certificates for example.com, and be willing to issue a 
certificate for example.com even though the attacker doesn’t actually control 
it.

This issue only affects domain names that use hosting providers with the above 
combination of properties. It is independent of whether the hosting provider 
itself acts as an ACME client.

Our Plans

Shortly after the issue was reported, we disabled TLS-SNI-01 in Let’s Encrypt. 
However, a large number of people and organizations use the TLS-SNI-01 
challenge type to get certificates. It’s important that we restore service if 
possible, though we will only do so if we’re confident that the TLS-SNI-01 
challenge type is sufficiently secure.

At this time, we believe that the issue can be addressed by having certain 
services providers implement stronger controls for domains hosted on their 
infrastructure. We have been in touch with the providers we know to be 
affected, and mitigations will start being deployed for their systems shortly.

Over the next 48 hours we will be building a list of vulnerable providers and 
their associated IP addresses. Our tentative plan, once the list is completed, 
is to re-enable the TLS-SNI-01 challenge type with vulnerable providers blocked 
from using it.

We’re also going to be soliciting feedback on our plans from our community, 
partners and other PKI stakeholders prior to re-enabling the TLS-SNI-01 
challenge. There is a lot to consider here and we’re looking forward to 
feedback.

We will post more information and details as our plans progress.