Re: 2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

2018-01-10 Thread Patrick Figel via dev-security-policy
First of all: Thanks for the transparency, the detailed report and quick
response to this incident.

A user on Hacker News brought up the possibility that the fairly popular
DirectAdmin control panel might also demonstrate the problematic
behaviour mentioned in your report[1].

I successfully reproduced this on a shared web hosting provider that
uses DirectAdmin. The control panel allowed me to set the vhost domain
to a value like "12345.54321.acme.invalid" and to deploy a self-signed
certificate that included this domain. The web server responded with
said certificate given the following request:

openssl s_client -servername 12345.54321.acme.invalid \
  -connect 192.0.2.0:443 -showcerts

I did not perform an end-to-end test against a real ACME server, but my
understanding is that this would be enough to issue a certificate for
any other domain on the same IP address.
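
For completeness, a rough Python 3 equivalent of the openssl check
above (using the same placeholder IP and challenge name from my test,
plus the pyca/cryptography library for parsing; a real TLS-SNI-01
validation would also derive the expected name from the challenge
token rather than hard-coding it):

import socket, ssl
from cryptography import x509

host, sni = "192.0.2.0", "12345.54321.acme.invalid"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # challenge certs are self-signed

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=sni) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
# raises ExtensionNotFound if the returned cert has no SAN at all
sans = cert.extensions.get_extension_for_class(
    x509.SubjectAlternativeName).value.get_values_for_type(x509.DNSName)
print("challenge name served:", sni in sans)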

I couldn't find any public data on DirectAdmin's market share, but I
would expect a fairly large number of domains to be affected.

It might also be worth investigating whether other control panels are
similarly affected.

Patrick

[1]: https://news.ycombinator.com/item?id=16114181

On 10.01.18 10:33, josh--- via dev-security-policy wrote:
> At approximately 5 p.m. Pacific time on January 9, 2018, we received a report 
> from Frans Rosén of Detectify outlining a method of exploiting some shared 
> hosting infrastructures to obtain certificates for domains he did not 
> control, by making use of the ACME TLS-SNI-01 challenge type. We quickly 
> confirmed the issue and mitigated it by entirely disabling TLS-SNI-01 
> validation in Let’s Encrypt. We’re grateful to Frans for finding this issue 
> and reporting it to us.
> 
> We’d like to describe the issue and our plans for possibly re-enabling 
> TLS-SNI-01 support.
> 
> Problem Summary
> 
> In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA) 
> validates a domain name by generating a random token and communicating it to 
> the ACME client. The ACME client uses that token to create a self-signed 
> certificate with a specific, invalid hostname (for example, 
> 773c7d.13445a.acme.invalid), and configures the web server on the domain name 
> being validated to serve that certificate. The ACME server then looks up the 
> domain name’s IP address, initiates a TLS connection, and sends the specific 
> .acme.invalid hostname in the SNI extension. If the response is a self-signed 
> certificate containing that hostname, the ACME client is considered to be in 
> control of the domain name, and will be allowed to issue certificates for it.
> 
> However, Frans noticed that at least two large hosting providers combine two 
> properties that together violate the assumptions behind TLS-SNI:
> 
> * Many users are hosted on the same IP address, and
> * Users have the ability to upload certificates for arbitrary names without 
> proving domain control.
> 
> When both are true of a hosting provider, an attack is possible. Suppose 
> example.com’s DNS is pointed at the same shared hosting IP address as a site 
> controlled by the attacker. The attacker can run an ACME client to get a 
> TLS-SNI-01 challenge, then install their .acme.invalid certificate on the 
> hosting provider. When the ACME server looks up example.com, it will connect 
> to the hosting provider’s IP address and use SNI to request the .acme.invalid 
> hostname. The hosting provider will serve the certificate uploaded by the 
> attacker. The ACME server will then consider the attacker’s ACME client 
> authorized to issue certificates for example.com, and be willing to issue a 
> certificate for example.com even though the attacker doesn’t actually control 
> it.
> 
> This issue only affects domain names that use hosting providers with the 
> above combination of properties. It is independent of whether the hosting 
> provider itself acts as an ACME client.
> 
> Our Plans
> 
> Shortly after the issue was reported, we disabled TLS-SNI-01 in Let’s 
> Encrypt. However, a large number of people and organizations use the 
> TLS-SNI-01 challenge type to get certificates. It’s important that we restore 
> service if possible, though we will only do so if we’re confident that the 
> TLS-SNI-01 challenge type is sufficiently secure.
> 
> At this time, we believe that the issue can be addressed by having certain 
> service providers implement stronger controls for domains hosted on their
> infrastructure. We have been in touch with the providers we know to be 
> affected, and mitigations will start being deployed for their systems shortly.
> 
> Over the next 48 hours we will be building a list of vulnerable providers and 
> their associated IP addresses. Our tentative plan, once the list is 
> completed, is to re-enable the TLS-SNI-01 challenge type with vulnerable 
> providers blocked from using it.
> 
> We’re also going to be soliciting feedback on our plans from our community, 
> partners and other PKI stakeholders prior to 

Re: Verisign signed speedport.ip ?

2017-12-09 Thread Patrick Figel via dev-security-policy
Until November 1, 2015, publicly-trusted CAs were allowed to issue
certificates for internal names and reserved IP addresses. All
certificates of this nature had to be revoked by October 1, 2016.

More details here: https://cabforum.org/internal-names/

Patrick

On 09.12.17 20:42, Lewis Resmond via dev-security-policy wrote:
> Hello,
> 
> I was researching about some older routers by Telekom, and I found out that 
> some of them had SSL certificates for their (LAN) configuration interface, 
> issued by Verisign for the fake-domain "speedport.ip".
> 
> They (all?) are logged here: https://crt.sh/?q=speedport.ip
> 
> I wonder, since this domain and even the TLD is non-existing, how could 
> Verisign sign these? Isn't this violating the rules, if they sign anything 
> just because a router factory tells them to do so?
> 
> Although they are all expired since several years, I am interested how this 
> could happen, and if such incidents of signing non-existing domains could 
> still happen today.


Re: DigiCert-Symantec Announcement

2017-09-28 Thread Patrick Figel via dev-security-policy
On 28.09.17 19:06, Gervase Markham via dev-security-policy wrote:
> On 26/09/17 03:17, Ryan Sleevi wrote:
>> update in a year, are arguably outside of the scope of ‘reasonable’ use
>> cases - the ecosystem itself has shown itself to change on at least that
>> frequency.
> 
> Is "1 year" not a relatively common (for some value of "common") setting
> for HPKP timeouts for sites which think they have now mastered HPKP?

IIRC both Chrome and Firefox cap the max-age value of HPKP at 60 days.


Re: PROCERT issues

2017-09-21 Thread Patrick Figel via dev-security-policy
On 21/09/2017 23:08, alejandrovolcan--- via dev-security-policy wrote:
> Dear Gerv, I have attached a document that gives us a greater
> response to each of the points, as well as Mr. Oscar Lovera sent you
> an email with the same information
> 
> https://www.dropbox.com/s/qowngzzvg5q5pjj/Mozilla%20issues.docx?dl=0


To save everyone else some time trying to find out what has changed,
these are the additions for every issue, diffed against the original
response[1]:



Issue D:

PROCERT initially claimed that this was entirely RFC-compliant, a claim
comprehensively rebutted by Ryan Sleevi.

This certificate was issued with the modifications and here it is
possible to appreciate the evidence https://crt.sh/?id=209727657



Issue E:

This certificate is not in use and is appended to the internal
revocation schedule, the current certificate is 2048 in length and can
be seen in the attached OCSP response (ocsp.txt)



Issue G:

Please find evidence of new certificate issued without problems
https://crt.sh/?id=202869851



Issue I:

These certificates were revoked and generated again, please consult
through the following link
https://crt.sh/?iCAID=750=2000-01-01=1000=25



Issue J:

Although certificates are not certified, the certificates have been
revoked and corrected, please check the link
https://crt.sh/?iCAID=750=2000-01-01=782



Issue K:

Corrective action taken, please consult the link
https://crt.sh/?id=197158298&opt=cablint



Issue L:

That certificate was revoked and generated again, please consult through
the following link https://crt.sh/?id=167929373



Issue M:

This is not an infringement or violation of the RFC. Regarding to the
language, the national language in Venezuela is the Spanish. We will
work to updated the English version of this document and posted in the
web page.
https://www.procert.net.ve/documentos/AC-D-0011.pdf
https://www.procert.net.ve/documentos/AC-D-0003.pdf

Standard
https://tools.ietf.org/html/rfc3647
https://tools.ietf.org/html/rfc2527



Issue N:

Taking into account what the Baseline says, it states the following:
i. Other Subject Attributes
All other optional attributes, when present within the subject field,
MUST contain information that has been verified by the CA.

Indicates that another field can be included as long as it is
information verified by the CA in this case the CA verifies the number
of RIF of each company and also the field is with an OID for that
purpose, which is 2.16.862.2.2



Issue O:

Our OCSP works without any problem in the validation with automatic
tools such as certutil, even with the same command openssl, in
consultation under standard.

The standard openssl query is done in the following way: openssl ocsp
-issuer issuer.cer -cert cert.cer -url http://ura.procert.net.ve/ocsp
-noverify -text.

We detected that OCSP does work correctly with the Microsoft tool.

Open SSL creates a problem when doing a non-standard query (hacking). We
are already working the point with Microsoft and we even have an
assigned ticket.



Issue P:

No RFC 2560 nor RFC 5280 is being violated.



Issue R:

The certificate was revoked and reissued, please see the link
https://crt.sh/?id=194225991



Issue S:

Already it was given previous answer to this point and was solved
modifying certain parameters in our CA, which counts on a CSPRNG that
acts like generator of serial numbers and was modified in its register
so that it increased the capacity and strength of the serial numbers
that generates automatically and without human intervention.



Issue T:

These certificates were revoked and generated again, please consult
through the following link
https://crt.sh/?iCAID=750=2000-01-01=1000=734
Additionally, we have already modified the certificate template to
prevent it from containing this key usage.



Issue V:

No additions.



Issue W:

This 

Re: Let's Encrypt and Wildcard Domains

2017-08-28 Thread Patrick Figel via dev-security-policy
In what way would this be a policy violation? Most CAs trusted by
Mozilla issue wildcard certificates.

Perhaps you were thinking of EV certificates? For EV, wildcard is indeed
not permitted, but Let's Encrypt does not issue EV at all.

On 29/08/2017 04:31, David E. Ross via dev-security-policy wrote:
> I just read mention that Let's Encrypt will be enabling wildcard
> domains, possibly by the end of this year.  Is this not a violation of
> Mozilla policy?
> 
> I saw this in the eternal-september.support newsgroup, which is
> available only via the news.eternal-september.org NNTP server.  The
> thread subject was "Expired Server Certificate".
> 


Re: StartCom cross-signs disclosed by Certinomis

2017-08-03 Thread Patrick Figel via dev-security-policy
On 03/08/2017 10:47, Inigo Barreira via dev-security-policy wrote:
> 1. The un-revoked test certificates are those pre-sign ones with
> uncompleted ctlog. So they are not completed certificates.
> https://crt.sh/?opt=cablint&id=134843670
> https://crt.sh/?opt=cablint&id=134843674
> https://crt.sh/?opt=cablint&id=134843685
> https://crt.sh/?opt=cablint&id=139640371

My understanding of Mozilla's policy is that misissued precerts are
considered misissuance nonetheless[1].

[1]:
https://groups.google.com/d/msg/mozilla.dev.security.policy/6pBLHJBFNts/kM3kEJKMAgAJ


Re: GoDaddy Misissuance Action Items

2017-02-13 Thread Patrick Figel via dev-security-policy
On 13/02/2017 16:15, Jürgen Brauckmann via dev-security-policy wrote:
> Gervase Markham via dev-security-policy schrieb:
>> 1) As with all CAs, update all their domain validation code to use one
>> of the 10 approved methods;
> 
> I'm probably confused regarding BRs pre/post Ballot 181: Aren't there
> only 4 methods per Ballot 181?

My understanding is that while Ballot 181 included only those methods
from Ballot 169 that were not affected by any patent exclusion notices,
plus "any other method", Mozilla will disallow "any other method" and
instead ask CAs to limit their validation methods to the ones from
Ballot 169.

This approach would still be compliant with the BRs because the 6
methods affected by the patent exclusions can be counted as
implementations of "any other method".


Re: Appropriate role for lists of algorithms and key sizes

2017-02-07 Thread Patrick Figel
On 07/02/2017 08:11, Tom wrote:
> On 07/02/2017 05:01, Patrick Figel wrote:
>> On 27/01/2017 19:53, Ryan Sleevi wrote:
>>> On Fri, Jan 27, 2017 at 3:47 AM, Gervase Markham <g...@mozilla.org>
>>> wrote:
>>>> 
>>>> * RSA keys with a minimum modulus size of 2048 bits
>>>> 
>>> 
>>> Nits and niggles: Perhaps 2048, 3072, 4096?
>>> 
>>> - 8K RSA keys cause Web PKI interop problems
>>> - RSA keys that aren't modulo 8 create interop problems
>> 
>> It looks like a number of CAs currently accept RSA keys with 
>> modulus sizes != (2048, 3072, 4096). Censys currently finds 21,150
>> EE certs[1]. Does it make more sense to explicitly add the mod 8 
>> requirement to the policy in this case, while allowing anything >=
>> 2048 <= 4096?
>> 
>> [1]:
>> 
> https://censys.io/certificates?q=current_valid_nss%3A+true+and+parsed.subject_key_info.key_algorithm.name%3A+RSA+not+parsed.subject_key_info.rsa_public_key.length%3A+%282048+or+3092+or+4096%29=1
>> 
> 
> Why the 4096 limit?
> 
> https://rsa8192.badssl.com/ work well, and if somebody wish a 
> certificate with that size of key (or even bigger), why refusing it?
> 
> ECC certificates are allowed but cause a lot of PKI interop problems 
> too. That's not a reason for refusing it.

It's not quite the same thing - web servers can detect ECC support
through the signature_algorithms TLS client hello extension and fall
back to an RSA cert if needed. Clients don't announce the maximum key
size they support that way, so that wouldn't be possible (some fancy
client hello fingerprinting to detect the client/UA might work, but that
would be rather complex).

There are about 2k RSA certs with a key size > 4096 out there (and < 100
with > 8192). I don't personally see the need for > 4096, but don't feel
very strongly about that, so if most of the ecosystem does indeed
support 8192, I'd be fine with that limit too.


Re: Appropriate role for lists of algorithms and key sizes

2017-02-06 Thread Patrick Figel
On 27/01/2017 19:53, Ryan Sleevi wrote:
> On Fri, Jan 27, 2017 at 3:47 AM, Gervase Markham  wrote:
>>
>> * RSA keys with a minimum modulus size of 2048 bits
>>
> 
> Nits and niggles: Perhaps 2048, 3072, 4096?
> 
> - 8K RSA keys cause Web PKI interop problems
> - RSA keys that aren't modulo 8 create interop problems

It looks like a number of CAs currently accept RSA keys with modulus
sizes != (2048, 3072, 4096). Censys currently finds 21,150 EE certs[1].
Does it make more sense to explicitly add the mod 8 requirement to the
policy in this case, while allowing anything >= 2048 <= 4096?

[1]:
https://censys.io/certificates?q=current_valid_nss%3A+true+and+parsed.subject_key_info.key_algorithm.name%3A+RSA+not+parsed.subject_key_info.rsa_public_key.length%3A+%282048+or+3092+or+4096%29=1
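
To make that concrete, a minimal sketch of the proposed policy check
(assuming the 2048-4096 range and the mod 8 requirement; Python):

def rsa_modulus_allowed(bits):
    # allow any RSA modulus between 2048 and 4096 bits that is a
    # multiple of 8, rather than a fixed whitelist of three sizes
    return 2048 <= bits <= 4096 and bits % 8 == 0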



Re: Incident Report – Certificates issued without proper domain validation

2017-01-11 Thread Patrick Figel
On 11/01/2017 04:08, Ryan Sleevi wrote:
> Could you speak further to how GoDaddy has resolved this problem? My
> hope is that it doesn't involve "Only look for 200 responses" =)

In case anyone is wondering why this is problematic, during the Ballot
169 review process, Peter Bowen ran a check against the top 10,000 Alexa
domains and noted that more than 400 sites returned an HTTP 200 response
for a request to
http://www.$DOMAIN/.well-known/pki-validation/4c079484040e32529577b6a5aade31c5af6fe0c7
[1]. A number of those included the URL in the response body, which
would presumably be good enough for GoDaddy's domain validation process
if they indeed only check for an HTTP 200 response.

[1]: https://cabforum.org/pipermail/public/2016-April/007506.html
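
For contrast, ACME's http-01 validation sidesteps the reflection
problem by requiring response content that never appears in the
request URL: the key authorization, i.e. the token plus a hash of the
account key. A minimal sketch of that style of check (hypothetical
helper, Python):

import urllib.request

def http01_check(domain, token, account_thumbprint):
    # expected body = token + "." + account key thumbprint; a site
    # that merely echoes the request URL back (or answers 200 to
    # everything) cannot produce this value
    expected = token + "." + account_thumbprint
    url = "http://%s/.well-known/acme-challenge/%s" % (domain, token)
    try:
        body = urllib.request.urlopen(url, timeout=10).read()
    except OSError:
        return False
    return body.decode("utf-8", "replace").strip() == expected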


Re: WoSign has new roots?

2016-11-22 Thread Patrick Figel
On Tue, Nov 22, 2016 at 10:56 PM, Tobias Sachs  wrote:
> On Tuesday, November 22, 2016 at 21:37:08 UTC+1, Lewis Resmond wrote:
>> Hello,
>>
>> I just noticed following announcement by WoSign:
>>
>> https://www.wosign.com/english/News/certificate_pre.htm
>>
>> If I understand correctly, they now have new root certificates which chain 
>> up to Certum, which is in the root storage.
>>
>> What does that mean in particular? Are the previously taken sanctions now 
>> useless?
>
> According to this comment [1] I think yes. But this means also that the new 
> ca is now the target. You can find the cert mentioned there here [2] and the 
> intermediate here [3] which is not in the CT logs...

The intermediate certificates were disclosed in Mozilla's CA database[1] and are
currently filed under "CP/CPS Same As Parent" and "Audits Same As Parent".

I assume that this means Certum holds the keys for these intermediates and
WoSign is essentially acting as a reseller. I don't think that's something
Mozilla can or should object to.

I'm a bit unclear on whether WoSign could be acting as a Registration Authority
for certificates issued under that intermediate and what the auditing and
disclosure requirements for that would be - maybe someone more familiar with
the BRs can comment. WoSign acting as an RA prior to finishing the re-application
process would be troubling given their previous failures in that area.

[1]: https://mozillacaprogram.secure.force.com/CA/PublicAllIntermediateCerts


Re: Cerificate Concern about Cloudflare's DNS

2016-11-03 Thread Patrick Figel
On 03/11/16 10:59, Gervase Markham wrote:
> However, I still don't get why you want to use Cloudflare's SSL
> termination services but are unwilling to allow them to get a
> certificate for your domain name.
> 
> AIUI their free tier uses certs they obtain, but if you pay, you can
> provide your own cert. So if you want to use Cloudflare but don't want
> them obtaining certs for you, join the paying tier.

It is possible to use Cloudflare as a DNS-only provider, without any
CDN/reverse proxying functionality. That's what seems to be the issue
here - certificates are requested as soon as a domain is added to
Cloudflare, even if the CDN functionality is never enabled.

I don't think these certificates are mis-issued or that this practice is
shady, but I can see how it might surprise a domain owner who is only
looking for a DNS provider.

This is probably not something that can or should be resolved by the
CA/B Forum or Mozilla. Realistically speaking, asking CAs to confirm
that the actual domain registrant has authorized the issuance (rather
than whoever is operating the DNS for that domain) is not possible in
practice for DV. Going overboard with such a requirement carries the risk

The only other thing the BRs could ask for is that a subscriber (which
would be Cloudflare in this case) has to include language regarding
certificate issuance in their ToS if they act on behalf of other domain
registrants. However, given that the goal is to avoid surprising the
domain registrant, adding yet another section to a typical ToS document
is hardly going to change anything.

I don't think it's worth optimizing for the "I trust someone to host my
entire DNS zone and hold my DNSSEC keys (if you're into that kind of
thing) but TLS certificates? Boo!"-use-case.


Re: Distrusting New WoSign and StartCom Certificates -- Mozilla Security Blog

2016-10-25 Thread Patrick Figel
On 26/10/16 01:27, Percy wrote:
> WoSign will roll out a globally trusted intermediate cert to sign new
> certs with the existing WoSign system that had so many control
> failures.
> 
> Does Mozilla and this community accept such a work-around for WoSign?
> If we do, then what's the point of distrust those WoSign root certs?
> If not, then what's an appropriate response for WoSign's
> announcement?

Has WoSign publicly stated that this will be an intermediate certificate
for which they hold the private key, or could this simply mean they'll
act as a (kind of) white-label reseller for some other CA until they've
completed the (re-)application process?

I don't think Mozilla should allow WoSign to use a new cross-signed
intermediate under their control until they've completed the application
process, but I don't see the problem if they plan to act as a reseller
for now to keep their business operational. If this is indeed Mozilla's
policy on this issue (and not just my opinion), it might be worth
thinking about communicating this to CAs to avoid trouble down the line.

Hopefully WoSign will be able to comment on this and clarify their plans.


Re: WoSign: updated report and discussion

2016-10-07 Thread Patrick Figel
On 07/10/16 13:23, Jakob Bohm wrote:
> On 07/10/2016 13:12, Gervase Markham wrote:
>> ... * WoSign agrees it should have been more forthcoming about its
>> purchase of StartCom, and announced it earlier.
>> 
>> * WoSign and StartCom are to be legally separated, with the
>> corporate structure changed such that Qihoo 360 owns them both
>> individually, rather than WoSign owning StartCom.
>> 
>> * There will be personnel changes:
>> 
>> - StartCom’s chairman will be Xiaosheng Tan (Chief Security
>> Officer of Qihoo 360). - StartCom’s CEO will be Inigo Barreira
>> (formerly GM of StartCom Europe). ... * StartCom will soon provide
>> a plan on how they will separate their operations and technology
>> from that of WoSign.
>> 
>> * In the light of these changes, Qihoo 360 request that WoSign and 
>> StartCom be considered separately.
>> 
>> 
>> Mozilla is minded to agree that it is reasonable to at least
>> consider the two companies separately, although that does not
>> preclude the possibility that we might decide to take the same
>> action for both of them. Accordingly, Mozilla continues to await
>> the full remediation plan from StartCom so as to have a full
>> picture. However, I think we can work towards a conclusion for
>> WoSign now.
>> 
> 
> As an outsider, here is one question: If StartCom has not yet
> decided on a technical separation plan, could one acceptable option
> for such a plan be to reactivate the old (pre-acquisition)
> infrastructure and software and take it from there?
> 
> An answer to that might help StartCom choose an acceptable plan.

I think a good approach for StartCom's remediation plan would be to
follow the conditions for readmission suggested by Mozilla:

> * A Point-In-Time Readiness Audit (PITRA) from a Mozilla-agreed
>   WebTrust auditor;
> * A full code security audit of their issuing infrastructure from a
>   Mozilla-chosen security auditor;
> * 100% embedded CT for all issued certificates, logged to at least
>   one Google and one non-Google log not controlled by WoSign/StartCom;


Re: Comodo issued a certificate for an extension

2016-10-02 Thread Patrick Figel
On 02/10/16 12:01, Jason Milionis wrote:
> Still no response from COMODO CA, that's interesting, but why?

They published an incident report a couple of days ago. For some reason,
it's not visible in the Google Groups archive of m.d.s.p (at least for
me). Here's an alternative link:

https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg04274.html


Re: Cerificate Concern about Cloudflare's DNS

2016-09-17 Thread Patrick Figel
On 17/09/16 16:38, Florian Weimer wrote:
> * Peter Bowen:
> 
>> On Sat, Sep 10, 2016 at 10:40 PM, Han Yuwei 
>> wrote:
>>> So when I delegated the DNS service to Cloudflare, Cloudflare 
>>> have the privilege to issue the certificate by default? Can I 
>>> understand like that?
>> 
>> I would guess that they have a clause in their terms of service or 
>> customer agreement that says they can update records in the DNS 
>> zone and/or calls out that the subscriber consents to them getting
>> a certificate for any domain name hosted on CloudFlare DNS.
> 
> I find it difficult to believe that the policies permit Cloudflare's 
> behavior, but are expected to prevent the issue of interception 
> certificates.  Aren't they rather similar, structurally?

I don't see how they're similar. Interception certificates are issued
without the knowledge and permission of the domain owner. Someone
signing up for CloudFlare willingly chooses to trust a CDN provider with
all their web traffic and DNS (in order to enable CloudFlare for a
domain, the NS record for that domain needs to point to CloudFlare.)

I could understand this argument if they'd somehow pretend to be a
DNS-only provider and then abuse that to issue certificates. However,
nothing about their site (or their marketing approach in general) gives
me that impression - it's made quite clear that they're primarily a CDN
with SSL support.


Re: WoSign Issue L and port 8080

2016-09-11 Thread Patrick Figel
On 11/09/16 22:05, Lee wrote:
>> In order to spoof a CA's domain validation request, an attacker
>> would need to be in a position to MitM the connection between the
>> CA and the targeted domain.
> 
> does dns hijacking or dns cache poisoning count as mitm?

I was mentioning this in order to demonstrate that opportunistic
encryption (try HTTPS, if that fails, fall back to HTTP) does not help
with this threat model. The specifics of how a MitM attack against the
CA is being pulled off is not all that important.

>> Option 2 is problematic because not all CAs log to CT at the
>> moment.
> 
> Why is that allowed?

Because CT is relatively new. In fact, I don't think Mozilla is shipping
a working CT implementation yet. Some might even question whether it's
fair to ask CAs to implement CT logging when the majority of browser
vendors haven't bothered yet (or have just recently begun to bother.)

>> Both options do nothing to solve the problem of a domain owner
>> losing the private key of their certificate (for example due to a
>> hack, data loss, or just a domain transfer).
>> 
>> You might be thinking of an option 3 - just connect to port 443,
>> see if the domain has a valid certificate, and use HTTPS if
>> available. This sounds great in theory, but since the attacker
>> would need to be able to MitM the connection in the first place in
>> order to spoof the validation request, they could simply intercept
>> this request and force validation on port 80.
>> 
>> All in all I think this would do more harm than good. Adding
>> complexity to the DV process means slower HTTPS adoption in
>> general. I'd rather see a "good enough" DV process ...
> 
> if it isn't obvious by now, I'd say that any process that doesn't 
> include continuous monitoring isn't "good enough"

If you're arguing in favor of mandatory CT logging for CAs, I'm with you
- I just don't think it's going to happen immediately. I think that's a
conversation that should be separate from the question of whether
encryption should be part of the domain validation process.

>> ...  and HTTPS everywhere when the alternative is a
>> perfect-in-theory DV process where HTTPS is available only for
>> sites that can deploy all these things competently.
> 
> If the site admins aren't competent they're going to get pwned, so
> why do I care if they're doing https instead of http?  Or look at it
> from a different angle - if it's that hard for sites to do it
> correctly then [Mozilla?  CAs? somebody] can come up with a checklist
> of what to look for in a hosting provider that does do it right.  It
> seems like most everybody is moving to "the cloud" anyway, so
> requiring site admins to be competent doesn't seem all that onerous a
> requirement.

I'm not worried about incompetent admins who get owned; I'm worried
about admins taking a look at the domain validation process you're
suggesting, realizing that they now need to deploy DNSSEC or that they
might brick their domain if they lose their private key because they
suddenly can't get another certificate without having a valid
certificate, and then just figuring that sticking with HTTP actually
doesn't sound that bad.

(Not to be snarky, but this argument sounds a bit like "So what? Mozilla
can just solve web security for everyone, and then we can have safe CAs!")

> Where is the opposition to DNSSEC?  I was going to say that I'm also 
> lurking on the dns ops mailing list, but I don't think I can call
> what I'm doing on m.d.s.p now lurking :)
> 
> Yes, DNSSEC is complicated & difficult to do right, but opposition
> to DNSSEC in general?  I'm not seeing it & any CA that can't or won't
> do DNSSEC shouldn't be in the Mozilla root store.

I've found [1] to be a good summary of arguments against DNSSEC.

[1]: http://sockpuppet.org/blog/2015/01/15/against-dnssec/


Re: WoSign Issue L and port 8080

2016-09-11 Thread Patrick Figel
On 10/09/16 22:37, Lee wrote:
> Right - I figured that out about 30 seconds after reading an email 
> about allowing verification on ports 80 and 443.  But you only need 
> to get the initial certificate one time - after that you should be 
> able to renew using port 443 and I didn't see anything in the 
> requirements about checking via an encrypted connection first.  Did I
> miss something or is getting a renewal cert over port 80 allowed?

In order to spoof a CA's domain validation request, an attacker would
need to be in a position to MitM the connection between the CA and the
targeted domain. This is where (the authentication part of) TLS would
come in handy. That leaves us with the problem of determining whether
the domain name in question should be considered to support TLS:

 1. The CA could look at prior records for that domain - if a
certificate has been issued before, treat it as a renewal.
 2. The CA could similarly search Certificate Transparency logs and
treat the issuance as a renewal if a certificate is found.

Option 1 has one big problem: the attacker only has to choose a CA that's
different from the CA the domain has used before.

Option 2 is problematic because not all CAs log to CT at the moment.
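
(For illustration, a sketch of what option 2 could look like, assuming
a CT search backend along the lines of crt.sh's JSON interface; purely
hypothetical, since its coverage depends entirely on CAs actually
logging:)

import json
import urllib.request

def has_prior_certificate(domain):
    # search CT logs for earlier certificates for this name; if any
    # exist, treat the request as a renewal and require TLS
    url = "https://crt.sh/?q=%s&output=json" % domain
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = resp.read().decode()
    return bool(data.strip()) and len(json.loads(data)) > 0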

Both options do nothing to solve the problem of a domain owner losing
the private key of their certificate (for example due to a hack, data
loss, or just a domain transfer).

You might be thinking of an option 3 - just connect to port 443, see if
the domain has a valid certificate, and use HTTPS if available. This
sounds great in theory, but since the attacker would need to be able to
MitM the connection in the first place in order to spoof the validation
request, they could simply intercept this request and force validation
on port 80.

All in all I think this would do more harm than good. Adding complexity
to the DV process means slower HTTPS adoption in general. I'd rather see
a "good enough" DV process and HTTPS everywhere when the alternative is
a perfect-in-theory DV process where HTTPS is available only for sites
that can deploy all these things competently. Even if we push for
encryption for this validation method, we still have DNS validation
without any encryption, and given the rate at which DNSSEC is deployed,
that's not going to change any time soon. (Not to mention that there's a
lot of opposition to DNSSEC in general.)

Patrick


Re: Sanctions short of distrust

2016-09-02 Thread Patrick Figel
On 03/09/16 01:15, Matt Palmer wrote:
> On Fri, Sep 02, 2016 at 03:48:13PM -0700, John Nagle wrote:
>> On 09/02/2016 01:04 PM, Patrick Figel wrote:
>>> On 02/09/16 21:14, John Nagle wrote:
>>>> 2. For certs under this root cert, always check CA's
>>>> certificate transparency server.   Fail if not found.
>>> 
>>> To my knowledge, CT does not have any kind of online check 
>>> mechanism. SCTs can be embedded in the certificate (at the time
>>> of issuance), delivered as part of the TLS handshake or via OCSP 
>>> stapling.
>> 
>> You're supposed to be able to check if a cert is known by querying
>> an OCSP responder.   OCSP stapling is just a faster way to do
>> that.
> 
> OCSP stapling is also a *privacy preserving* way to do that (also
> more reliable, in addition to faster).  I'm not sure that essentially
> snooping (or at least having the ability to snoop) on the browsing
> habits of users who happen to connect to a website that uses the
> certificate of a poorly-trusted CA better serves the user community
> than just pulling the root.  I guess at least we're not training
> users to ignore security warnings this way, and since if Mozilla is
> running the OCSP responder (or similar) you're already trusting
> Mozilla not to snoop on your browsing...

In addition to these concerns, (and assuming Mozilla would even be
willing to go down that route), I'm not sure how reliable a
Mozilla-operated OCSP responder would be given that the majority of
users who visit sites that use WoSign are probably behind the GFW.

If the answer is somewhere between "unreliable" and "extremely slow",
you might just as well pull the root (just for the sake of this
argument), which would mostly inconvenience site operators (as opposed
to every single Firefox user).


Re: Sanctions short of distrust

2016-09-02 Thread Patrick Figel
On 02/09/16 21:14, John Nagle wrote:
> 2. For certs under this root cert, always check CA's certificate
>    transparency server. Fail if not found.

To my knowledge, CT does not have any kind of online check mechanism.
SCTs can be embedded in the certificate (at the time of issuance),
delivered as part of the TLS handshake or via OCSP stapling.

In practice that means certificates will either have to be re-issued, or
website operators need to modify their server software and configuration
(not many sites currently deliver SCTs). In terms of real-world impact,
you probably could just as well pull the root completely.

I believe there are two possible solutions if CT enforcement is what the
community decides on:

 1. Enforce CT only after a certain date, after which WoSign will need
to embed qualified SCTs. This check can be bypassed if the CA
backdates certificates (which is problematic, given the history of
backdating certificates in this particular case.)

 2. Verify that the certificates either have a qualified SCT *or* are
explicitly white-listed as certificates that have been issued prior
to WoSign implementing CT. There are a number of possible
implementations for this (Google's Safe Browsing, etc.), but they'd
all require a non-trivial amount of development work.
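
(To illustrate option 1, a sketch of the client-side check, assuming
the pyca/cryptography library and a hypothetical flag day. Note that
this only checks for the presence of the SCT list extension; verifying
that the SCTs are actually qualified would mean checking log
signatures and acceptance criteria. The backdating weakness is the
first branch:)

from datetime import datetime
from cryptography import x509

SCT_LIST_OID = x509.ObjectIdentifier("1.3.6.1.4.1.11129.2.4.2")
CT_REQUIRED_FROM = datetime(2016, 10, 1)  # hypothetical flag day

def option_one_allows(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    if cert.not_valid_before < CT_REQUIRED_FROM:
        # grandfathered - a CA willing to backdate notBefore
        # sails right through this branch
        return True
    try:
        cert.extensions.get_extension_for_oid(SCT_LIST_OID)
    except x509.ExtensionNotFound:
        return False
    return True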


Re: Incidents involving the CA WoSign

2016-08-29 Thread Patrick Figel
Richard,

the problem with this approach is that the *subscriber* might not be
authorized to make this decision for the parent domain. To go back to
the GitHub case, the "owner" of a github.io subdomain telling you that
they are authorized to own a certificate that covers github.io is
irrelevant, as they have never demonstrated ownership of that domain.

The right approach would be to revoke the affected certificates
immediately and inform your subscribers that they will need to re-issue
their certificates (while also verifying ownership of the root domain).

Here's another similar case - cloudapp.net, which belongs to Microsoft
Azure. I'm fairly certain this certificate was not authorized by Microsoft:

https://crt.sh/?id=2980

Thanks,

Patrick

On 29/08/16 11:30, Richard Wang wrote:
> Yes, we plan to revoke all after getting confirmation from
> subscriber. We are doing this.
> 
> Regards,
> 
> Richard
> 
>> On 29 Aug 2016, at 16:38, Gervase Markham 
>> wrote:
>> 
>> On 29/08/16 05:46, Richard Wang wrote:
>>> For incident 1 - mis-issued certificate with un-validated
>>> subdomain, total 33 certificates. We have posted to CT log server
>>> and listed in crt.sh, here is the URL. Some certificates are
>>> revoked after getting report from subscriber, but some still
>>> valid, if any subscriber think it must be revoked and replaced
>>> new one, please contact us in the system, thanks.
>> 
>> Er, no. If these certificates were issued with unvalidated parent 
>> domains (e.g. with github.com when the person validation
>> foo.github.com) then they need to all be revoked. You should
>> actively contact your customers and issue them new certificates
>> containing only validated information, and then revoke these ones.
>> 
>> Gerv


Re: About the ACME Protocol

2016-07-20 Thread Patrick Figel
On 20/07/16 04:59, Peter Kurrasch wrote:
> Regarding the on-going development of the spec: I was thinking more 
> about the individual commits on github and less about the IETF 
> process. I presume that most commits will not get much scrutiny but
> a periodic (holistic?) review of the doc is expected to find and 
> resolve conflicts, etc. Is that a fair statement?

Yep, the GitHub repository is not what I would call the canonical source
of the "approved" draft produced by the working group. Implementers
should look at the published drafts (-01, -02, -03) and the final RFC
once it's released.

> The report on the security audit was interesting to read. It's good 
> to see someone even attempted it. In addition to the protocol itself 
> it would be interesting to see an analysis of an ACME server 
> (Boulder, I suppose). Maybe someone will do some pentesting at 
> least?

I'm having difficulties finding a source for this, but I seem to recall
that in addition to the WebTrust audit, ISRG hired an independent
infosec company to perform a pentest/review of boulder. FWIW, I don't
think this is an ACME/Let's Encrypt-specific concern, and I'd personally
be much more worried about the large number of other CAs whose CA
software is closed-source (and thus impossible for anyone to review).

> The 200 LOC is an interesting idea. I assume such an implementation 
> would rely heavily on external libraries for certain functions (e.g. 
> key generation, https handling, validating the TLS certificate chain
> provided by the server, etc.)? If so, does anyone anticipate that
> someone will develop a standalone, all-in-one (or mostly-in-one) 
> client? Is a client expected to do full cert chain validation 
> including revocation checks?

acme-tiny[1] would be an example of a client that comes in at just shy
of 200 LOC. Yes, it definitely makes use of other libraries such as
OpenSSL. I'm not exactly sure what you're referring to with chain
validation/revocation checks? Communication with the CA server uses
HTTPS and validates the server's certificate, if that's what you mean.

> In terms of an overly broad, overly general statement, the protocol 
> strikes me as being too new, too immature. There are gaps to be 
> filled, complexities to be distilled, and (unknown) problems to be 
> fixed. I doubt this comes as new information to anyone but I think 
> there's value in recognizing that the protocol has not had the 
> benefit of time for it to reach its full potential.

The IETF process might be far from perfect (and certainly not what
anyone would call fast), but it's currently most likely the best and
most secure way for the internet to come up with new protocols. In the
context of publicly-trusted CAs, I personally doubt that any CA has put
in the same amount of effort for any of their internal or external APIs
for certificate issuance, and past examples show this to be true (see
the recent StartCom fiasco). In that context, I don't see why we should
allow CAs to continue using their own proprietary systems for issuance
while at the same time calling ACME too new and immature to be trusted
with the security of the Web PKI.

> The big, unaddressed (or insufficiently addressed) issue as I see
> it is compatibility. This is likely to become a bigger problem
> should other CA's deploy ACME and as interdependencies grow over
> time. Plus, when vulnerabilities are found and resolved,
> compatibility problems become inevitable (the security audit results
> hint at this).
> 
> The versioning strategy of having CA's provide different URL's for 
> different versions of different clients might not scale well. One 
> should not expect all cert applicants to have and use only the
> latest client software. This approach might work for now but it could
> easily become unmanageable. Picture, if you will, a CA that must
> support 20 different client versions and the headaches that can
> bring.

I think you're overestimating the number of incompatible API endpoints
ACME CAs will launch in the first place. There's a good chance this
won't happen at all for Let's Encrypt until the final RFC is released,
at which point we're looking at two endpoints to maintain. In the
meantime, backwards-compatible changes from newer drafts can continue to
be pulled into the current endpoint. Let's Encrypt has recently added
some documentation on this matter[2].

> [...] a separate document to discuss deployment details. A deployment
> doc could also be used to cover the pro's and con's of using one
> server to do both ACME and other Web sites and services. The chief
> concern is if a vulnerability in the web site can lead to remote code
> execution which can then impact handling on the ACME side of the
> fence. Just a thought.

There are a number of other documents that specify operational details
for publicly-trusted CAs, such as the Baseline or Network Security
Requirements. I certainly hope there's something in there that would
prevent CAs from hosting 

Re: About the ACME Protocol

2016-07-08 Thread Patrick Figel
Before getting into specifics, I should say that you're likely to get a
better answer to most of these questions on the IETF ACME WG mailing list[1].

On 08/07/16 16:36, Peter Kurrasch wrote:
> I see on the gitub site for the draft that updates are frequently
> and continuously being made to the protocol spec (at least one a
> week, it appears). Is there any formalized process to review the
> updates? Is there any expectation for when a "stable" version might
> be achieved (by which I mean that further updates are unlikely)?

The IETF has a working group for ACME that's developing this protocol.
The IETF process is hard to describe in a couple of words (you can read
up on it on ietf.org if you're interested). Other related protocols such
as TLS are developed in a similar fashion.

> How are compatibility issues being addressed?

Boulder (the only ACME server implementation right now, AFAIK) plans to
tackle this by providing new endpoints (i.e. server URLs) whenever
backwards-incompatible changes are introduced in a new ACME draft, while
keeping the old endpoints available and backwards-compatible until the
client ecosystem catches up. I imagine once ACME becomes an internet
standard, future changes will be kept backwards-compatible (i.e.
"extensions" of some sort), but that's just me guessing.

> Has any consideration been given to possible saboteurs who might like
> to introduce backdoors?

The IETF process is public, which makes this harder (though not
impossible) to pull off. A number of people have reviewed and audited
the protocol (including a formal model[2]).

> I personally don't see the wisdom in having the server
> implementation details in what is ostensibly a protocol
> specification.

Which part of the specification mentions implementation details?

> Will there be any sort of audit to establish compliance between a 
> particular sever implementation and this Internet-Draft?

Someone could definitely build tools to check compliance, but who would
enforce this, and what happens to a server/client that's not compliant?

> Will the client software be able to determine the version of the 
> specification under which the server is operating? (I apologize if it
> is in the spec; I didn't do a detailed reading of it.) On the client
> side, is there a document describing the details of an ideal 
> implementation? Does the client inform the server to which version of
> the protocol it is adhering--for example, in a user-agent string 
> (again, I didn't notice one). Is there any test to validate the 
> compliance of a client with a particular version of the 
> Internet-Draft?

See the previous paragraph on compatibility: Server URLs can be
considered backwards-compatible; there's currently no protocol version
negotiation or something like that.

> One thought for consideration is the idea of a saboteur who seeks to 
> compromise the client software. This is of particular concern if the
> client software can also generate the key pair since there are the
> obvious benefits to bad actors if certain sites are using a weaker
> key. Just as Firefox is a target for malware, the developers of
> client-side software should be cognizant of bad actors who might seek
> to compromise their software.

That's certainly something to keep in mind, but not something that can
be solved by the protocol. It's also not specific to ACME clients, the
same concern applies to any software that touches keys in the course of
normal operation. FWIW, functional ACME client implementations can be
written in < 200 LOC, which would be relatively easy to review, and a
client would not necessarily need access to the private key of your
certificate - a CSR would be sufficient.

[1]: https://www.ietf.org/mailman/listinfo/acme
[2]: https://mailarchive.ietf.org/arch/msg/acme/9HX2i0oGyIPuE-nZYAkTTYXhqnk
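
(To illustrate that last point, a sketch using the pyca/cryptography
library: the key pair is generated and kept outside the ACME client,
which only ever sees the resulting CSR:)

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")]))
    .add_extension(x509.SubjectAlternativeName(
        [x509.DNSName(u"example.com")]), critical=False)
    .sign(key, hashes.SHA256())
)
# hand only the CSR to the ACME client; the key never leaves
print(csr.public_bytes(serialization.Encoding.PEM).decode())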


Re: StartEncrypt considered harmful today

2016-07-08 Thread Patrick Figel
On 08/07/16 08:04, Peter Gutmann wrote:
> Or is it that ACME is just a desperate attempt to derail StartCom's
> StartEncrypt at any cost?

That doesn't make any sense - ACME has been in production for close to a
year, while StartAPI was launched this April (and StartEncrypt just a
couple of weeks ago).


Re: StartEncrypt considered harmful today

2016-07-01 Thread Patrick Figel
On Friday, July 1, 2016 at 9:35:20 AM UTC+2, Eddy Nigg wrote:
> So far less than three hundred certificates have been issued using
> this method, none should have been effectively issue wrongfully due
> to our backend checks.

Can you comment on how your backend checks would have prevented any
misissuance? My understanding of the report is that this was not so much
an issue with the client software, but rather an oversight in the
protocol that allows Domain Validation checks that are not sufficient to
assure domain ownership - so the issue was very much a backend issue.
I assume there are reasonable controls in place to prevent misissuance
for high-risk domains, but what about other domains? Would they have
been affected by this?

I would also be curious about why the certificate has not been logged to
CT, given StartCom's prior statements with regards to CT adoption.