Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-29 Thread Ryan Sleevi via dev-security-policy
On Wed, Mar 29, 2017 at 7:30 AM, Hector Martin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> We actually have *five* levels of trust here:
>
> 1. HTTP
> 2. HTTPS with no validation (self-signed or anonymous ciphersuite)
> 3. HTTPS with DV
> 4. HTTPS with OV
> 5. HTTPS with EV
>

No, we actually only have three levels.

1. HTTP
2. "I explicitly asked for security and didn't get it" (HTTPS with no
validation)
3. HTTPS

> Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5)
> full EV banner. (a) still correlates reasonably well with (4) and (5).
> HTTPS is no longer optional. All those phishing sites get a neutral URL
> bar. We've already educated users that their bank needs a green lock in the
> URL.


And that was a mistake - one the academic community has warned about since
the very introduction of EV, but which, like Cassandra's warnings, was sadly
not heeded.

http://www.adambarth.com/papers/2008/jackson-barth-b.pdf should be required
reading for anyone who believes OV or EV objectively improves security,
because it explains how, since the very beginning of browser support for
SSL/TLS (~1995), there has been a security policy in place that determines
equivalence - the Same Origin Policy.

While the proponents of SSL/TLS then - and now - want certificates to be
Something More, the reality has been that, from the get-go, the only
boundary has been the Origin.
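
To make that concrete, here's a minimal sketch (plain Python, purely
illustrative) of what the Same Origin Policy actually compares - a
(scheme, host, port) tuple. The type of certificate never enters into it:

    from urllib.parse import urlsplit

    # Default ports per scheme; the SOP compares (scheme, host, port) only.
    DEFAULT_PORTS = {"http": 80, "https": 443}

    def origin(url):
        """Reduce a URL to its web origin: a (scheme, host, port) tuple."""
        parts = urlsplit(url)
        scheme = parts.scheme.lower()
        host = (parts.hostname or "").lower()
        port = parts.port or DEFAULT_PORTS.get(scheme)
        return (scheme, host, port)

    def same_origin(a, b):
        return origin(a) == origin(b)

    # Whether the certificate behind example.com is DV, OV, or EV makes no
    # difference to either result:
    assert same_origin("https://example.com/login", "https://example.com:443/pay")
    assert not same_origin("https://example.com/", "http://example.com/")

A distinct scheme such as the proposed httpsev:// would produce a distinct
origin, which is exactly the boundary Barth/Jackson argued for.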

I think the general community here would agree that making HTTPS simple and
ubiquitous is the goal, and that efforts by CAs - commercial and
non-commercial - towards that goal, whether through making certificates more
affordable to obtain, simpler to install, or easier to support, are
well-deserving of praise.

But if folks want OV/EV, then they also have to accept that there needs to
be an origin boundary, like Barth/Jackson originally called for in 2008
(httpsev://), and that any downgrade across that boundary needs to be blocked
(similar to mixed content blocking of https -> http, since such downgrades
erode the effective assurance). Further, it seems that to achieve the goals
of (4), (5), or (a), the boundary would have to be not just httpsev://, but
somehow bound to the organization itself - an origin-per-organization, if
you will.

And that, at its core, is fundamentally opposed to how the Web was supposed
to work and does work. That is why (4), (5), and (a) are unreasonable and
unrealistic goals: they have been around for over 20 years, and no new
solutions have been put forward since Barth/Jackson called out the obvious
one nearly a decade ago - a proposal no one was interested in.


Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-29 Thread Ryan Sleevi via dev-security-policy
https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.4.2.pdf

Section 6.1.2

On Wed, Mar 29, 2017 at 3:22 AM, okaphone.elektronika--- via
dev-security-policy  wrote:

> Weird.
>
> I expect there are no requirements for a CA to keep other people's private
> keys safe. After all handling those is definitely not part of being a CA.
> ;-)
>
> CU Hans


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-29 Thread mono.riot--- via dev-security-policy
> Not for those sorts of differences. There are in an IDN context:
> http://unicode.org/reports/tr39/

wasn't aware of that TS, thanks!
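
For anyone else who wasn't: the mixed-script idea at the heart of TR39 can
be sketched in a few lines of Python (greatly simplified - the real TS uses
the Unicode Script property and confusables.txt rather than character-name
prefixes):

    import unicodedata

    def scripts(label):
        """Crude per-character script guess: the first word of the Unicode
        character name (LATIN, CYRILLIC, GREEK, ...)."""
        found = set()
        for ch in label:
            if ch.isascii():
                found.add("LATIN" if ch.isalpha() else "COMMON")
            else:
                found.add(unicodedata.name(ch).split()[0])
        return found

    def looks_mixed_script(hostname):
        """Flag labels mixing scripts, e.g. Latin plus a Cyrillic homoglyph."""
        for label in hostname.split("."):
            if len(scripts(label) - {"COMMON"}) > 1:
                return True
        return False

    print(looks_mixed_script("paypal.com"))       # False
    print(looks_mixed_script("p\u0430ypal.com"))  # True: U+0430 is CYRILLIC SMALL LETTER A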


Re: DRAFT - BR Self Assessments

2017-03-29 Thread Kathleen Wilson via dev-security-policy
On Wednesday, March 29, 2017 at 2:00:05 PM UTC-7, Jeremy Rowley wrote:
> ...
> An extension on this could be to have CAs annually file an updated mapping
> with their WebTrust audit. That way it's a reminder that the CA needs to
> notify Mozilla of changes in their process and keeps the CAs thinking about
> updating practices to stay in-line with  the baseline requirements. Plus, a
> practice like that would provide better notice to the public on CA policy
> changes and how CAs are responding to new threats.
> 

Oh! I like that idea!

The timing is good, as we are just now switching over to the new annual process:
https://wiki.mozilla.org/CA:CommonCADatabase#How_To_Provide_Annual_Updates

I could also say something about it in the CA Communication we are getting 
ready to send.

Does anyone see a reason why we should *not* require a new BR-self-assessment 
annually from every CA with the Websites trust bit enabled?

I think CAs could just attach it to their original root inclusion bug each year.

Kathleen


Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Jakob Bohm via dev-security-policy

On 29/03/2017 20:52, Alex Gaynor wrote:

I don't think it's a good idea to design our system around the idea of
"What would a user be looking for if they read the cert chain manually".
For example, in the US, if such a government agency chose to use a
Government CA (as a user might reasonably expect!), their users would all
get cert warnings with the Mozilla trust DB!

Even as someone pretty well versed in the WebPKI, I don't think my
expectations about who the CA for a given site should be really are
actionable.

It seems to me that certs are for computers to consume, and only
incidentally for humans to read (*hat tip* to SICP).

Alex

PS: To expand a bit on this thought experiment, if I were to enumerate all
CAs over a bunch of websites, the only cases I can think of where human
intuition has a defensible conclusion, is that certain CAs _shouldn't_ sign
things, notably CAs intended only for limited usage (e.g. a Government CA
designed for signing government website certs). These cases are, I think,
much better handled by Name Constraints (or some other technical
constraint), and that's an entirely different subject altogether.



Which was precisely my point, except that to date, the few implemented
forms of name constraints seem unable to capture the real world
considerations that should exclude most CAs from a considerable part of
the Web name space:

- Country-specific CAs want to sign certificates for gTLD sites (.com,
 .net, .org, .name, etc.) that are actually under that country's
 jurisdiction.
- Cloud-hosting provider CAs (Microsoft, Google, Amazon) want to sign
 certificates for anything they host, regardless of TLD or country.

- Neither are appropriate CAs for any sites not under their
 administration/jurisdiction.

The special case of the old US Gov CA getting thrown out of Mozilla and
some other browsers is something of an outlier, but even then, it would
be odd if a US Gov site had a certificate from the Taiwan GRCA or the
Spanish guild of public notaries.

So until relevant technical constraints are actually ubiquitous in the
WebPKI, manual checking remains relevant.
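
For reference, the check that an RFC 5280 dNSName constraint implies is not
complicated - a rough sketch in Python (greatly simplified: excluded
subtrees, directoryName and IP-address constraints are ignored, and the
names below are made up):

    def dns_name_in_subtree(name, subtree):
        """Simplified RFC 5280 dNSName matching: the name equals the subtree
        or ends with it on a label boundary."""
        name = name.lower().rstrip(".")
        subtree = subtree.lower().strip(".")
        return name == subtree or name.endswith("." + subtree)

    def permitted(name, permitted_subtrees):
        """A name is acceptable only if it falls under some permitted subtree."""
        return any(dns_name_in_subtree(name, s) for s in permitted_subtrees)

    # Hypothetical government CA constrained to its own namespace:
    gov_ca = ["gov.example"]
    print(permitted("land-registry.gov.example", gov_ca))  # True
    print(permitted("paypal.com", gov_ca))                  # False

The hard part is not the matching, it is getting such constraints expressed
in the CA certificates in the first place, which is the point above.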


On Wed, Mar 29, 2017 at 2:42 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 29/03/2017 16:47, Gervase Markham wrote:


On 29/03/17 15:35, Peter Kurrasch wrote:


In other words, what used to be a trust anchor is now no better at
establishing trust than the end-entity cert one is trying to validate or
investigate (for example, in a forensic context) in the first place. I
hardly think this redefinition of trust anchor improves the state of the
global PKI and I sincerely hope it does not become a trend.



The trouble is, you want to optimise the system for people who make
individual personal trust decisions about individual roots. We would
like to optimise it for ubiquitous minimum-DV encryption, which requires
mechanisms permitting new market entrants on a timescale less than 5+
years.



That goal would be equally (in fact better) served by new market
entrants getting cross-signed by incumbents, as Let's Encrypt did.

The problem is that when viewing a full cert chain in a typical browser
(or similar client) that carries reference "trusted" copies of all the
incumbents, disassociating the names in root CA and SubCA certificates from
reality creates misinformation in a context the browser presents as "fully
validated" against known, traceable roots.

For example, when doing ordinary browsing with https on-by-default,
users rarely bother checking the certificate beyond "the browser says
it is not a MitM attack, good".  Except when visiting a high value
site, such as a government site to file a change in ownership of an
entire house (such sites DO exist).  Then it makes sense to click on
the certificate user interface and check that the supposed "Government
Land Ownership Registry of the Kingdom of X" site is verified by
someone that could reasonably be trusted to do so (i.e. not a national
CA of the republic of Y or the semi-internal CA of some private
megacorp).

With this recent transaction, the browser could show "GlobalSign" when
it should show "Google", two companies with very different security and
privacy reputations.  One would expect a blogger/blogblog domain to
have a Google-issued certificate while one would expect the opposite of
anything hosted outside the Alphabet group.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


RE: DRAFT - BR Self Assessments

2017-03-29 Thread Jeremy Rowley via dev-security-policy
Hi Kathleen, 

This is a good idea, and I like the phased-in approach. The mapping exercise
is similar to how other communities evaluate inclusion requests and makes it
more apparent how the CA is complying with the various Mozilla requirements.
An extension on this could be to have CAs annually file an updated mapping
with their WebTrust audit. That way it's a reminder that the CA needs to
notify Mozilla of changes in their process and keeps the CAs thinking about
updating practices to stay in line with the baseline requirements. Plus, a
practice like that would provide better notice to the public on CA policy
changes and how CAs are responding to new threats.

Jeremy

-Original Message-
From: dev-security-policy
[mailto:dev-security-policy-bounces+jeremy.rowley=digicert.com@lists.mozilla
.org] On Behalf Of Kathleen Wilson via dev-security-policy
Sent: Wednesday, March 29, 2017 11:55 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: DRAFT - BR Self Assessments

All,

As mentioned in the GDCA discussion[1], I would like to add a step to
Mozilla's CA Inclusion/Update Request Process[2] in which the CA performs a
self-assessment about their compliance with the CA/Browser Forum's Baseline
Requirements.

A draft of this new step is here:
https://wiki.mozilla.org/CA:BRs-Self-Assessment

It includes a link to a template for CA's BR Self Assessment, which is a
Google Doc:
https://docs.google.com/spreadsheets/d/1ni41Czial_mggcax8GuCBlInCt1mNOsqbEPz
ftuAuNQ/edit?usp=sharing

Here's how I am considering introducing this new step. Of course, this only
applies to CAs who are requesting the Websites trust bit.

+ For the CAs currently in the queue for discussion, I would ask them to
perform this BR Self Assessment before I would start their discussion.

+ For CAs currently in the Information Verification phase, I would ask them
to perform this BR Self Assessment before we would continue with Information
Verification.

+ For new requests, we would have the BR Self Assessment be the very first
step.


I would greatly appreciate your feedback on adding this step to the root
inclusion/update process, the wiki page draft, and the template.


Thanks,
Kathleen

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/kB2JrygK7Vk/Kk7L
e2F7CQAJ
[2] https://wiki.mozilla.org/CA



Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Alex Gaynor via dev-security-policy
I don't think it's a good idea to design our system around the idea of
"What would a user be looking for if they read the cert chain manually".
For example, in the US, if such a government agency chose to use a
Government CA (as a user might reasonably expect!), their users would all
get cert warnings with the Mozilla trust DB!

Even as someone pretty well versed in the WebPKI, I don't think my
expectations about who the CA for a given site should be really are
actionable.

It seems to me that certs are for computers to consume, and only
incidentally for humans to read (*hat tip* to SICP).

Alex

PS: To expand a bit on this thought experiment, if I were to enumerate all
CAs over a bunch of websites, the only cases I can think of where human
intuition has a defensible conclusion, is that certain CAs _shouldn't_ sign
things, notably CAs intended only for limited usage (e.g. a Government CA
designed for signing government website certs). These cases are, I think,
much better handled by Name Constraints (or some other technical
constraint), and that's an entirely different subject altogether.

On Wed, Mar 29, 2017 at 2:42 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 29/03/2017 16:47, Gervase Markham wrote:
>
>> On 29/03/17 15:35, Peter Kurrasch wrote:
>>
>>> In other words, what used to be a trust anchor is now no better at
>>> establishing trust than the end-entity cert one is trying to validate or
>>> investigate (for example, in a forensic context) in the first place. I
>>> hardly think this redefinition of trust anchor improves the state of the
>>> global PKI and I sincerely hope it does not become a trend.
>>>
>>
>> The trouble is, you want to optimise the system for people who make
>> individual personal trust decisions about individual roots. We would
>> like to optimise it for ubiquitous minimum-DV encryption, which requires
>> mechanisms permitting new market entrants on a timescale less than 5+
>> years.
>>
>>
> That goal would be equally (in fact better) served by new market
> entrants getting cross-signed by incumbents, like Let's encrypt did.
>
> Problem is that whenever viewing a full cert chain in a typical browser
> etc. that has reference "trusted" copies of all the incumbents,
> disassociating the names in root CA and SubCA certificates from reality
> creates misinformation in a context displayed as being "fully
> validated" to known traceable roots by the Browser/etc.
>
> For example, when doing ordinary browsing with https on-by-default,
> users rarely bother checking the certificate beyond "the browser says
> it is not a MitM attack, good".  Except when visiting a high value
> site, such as a government site to file a change in ownership of an
> entire house (such sites DO exist).  Then it makes sense to click on
> the certificate user interface and check that the supposed "Government
> Land Ownership Registry of the Kingdom of X" site is verified by
> someone that could reasonably be trusted to do so (i.e. not a national
> CA of the republic of Y or the semi-internal CA of some private
> megacorp).
>
> With this recent transaction, the browser could show "GlobalSign" when
> it should show "Google", two companies with very different security and
> privacy reputations.  One would expect a blogger/blogblog domain to
> have a Google-issued certificate while one would expect the opposite of
> anything hosted outside the Alphabet group.
>
>
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>


Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Peter Kurrasch via dev-security-policy
I'm not so sure I want to optimize the system in that way, but I am concerned 
about the (un)intended consequences of rapidly changing root ownership on the 
global PKI.

It's not inconsequential for Google to say: "From now on, nobody can trust what 
you see in the root certificate, even if some of it appears in the browser UI. 
The only way you can actually establish trust is to do frequent, possibly 
complicated research." It doesn't seem right that Google be allowed to 
unilaterally impose that change on the global PKI without any discussion from 
the security community.

But you bring up a good point: there seems to be much interest of late in 
speeding up the cycle times for various activities within the global PKI, but 
it's not entirely clear to me what's driving it. My impression is that Google 
was keen to become a CA in their own right as quickly as possible, so is this 
interest based on what Google wants? Or is there a Mozilla mandate that I 
haven't seen (or someone else's mandate)?


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Wednesday, March 29, 2017 9:48 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 29/03/17 15:35, Peter Kurrasch wrote:
> In other words, what used to be a trust anchor is now no better at
> establishing trust than the end-entity cert one is trying to validate or
> investigate (for example, in a forensic context) in the first place. I
> hardly think this redefinition of trust anchor improves the state of the
> global PKI and I sincerely hope it does not become a trend.

The trouble is, you want to optimise the system for people who make
individual personal trust decisions about individual roots. We would
like to optimise it for ubiquitous minimum-DV encryption, which requires
mechanisms permitting new market entrants on a timescale less than 5+ years.

Gerv


Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Jakob Bohm via dev-security-policy

On 29/03/2017 16:47, Gervase Markham wrote:

On 29/03/17 15:35, Peter Kurrasch wrote:

In other words, what used to be a trust anchor is now no better at
establishing trust than the end-entity cert one is trying to validate or
investigate (for example, in a forensic context) in the first place. I
hardly think this redefinition of trust anchor improves the state of the
global PKI and I sincerely hope it does not become a trend.


The trouble is, you want to optimise the system for people who make
individual personal trust decisions about individual roots. We would
like to optimise it for ubiquitous minimum-DV encryption, which requires
mechanisms permitting new market entrants on a timescale less than 5+ years.



That goal would be equally (in fact better) served by new market
entrants getting cross-signed by incumbents, as Let's Encrypt did.

The problem is that when viewing a full cert chain in a typical browser
(or similar client) that carries reference "trusted" copies of all the
incumbents, disassociating the names in root CA and SubCA certificates from
reality creates misinformation in a context the browser presents as "fully
validated" against known, traceable roots.

For example, when doing ordinary browsing with https on-by-default,
users rarely bother checking the certificate beyond "the browser says
it is not a MitM attack, good".  Except when visiting a high value
site, such as a government site to file a change in ownership of an
entire house (such sites DO exist).  Then it makes sense to click on
the certificate user interface and check that the supposed "Government
Land Ownership Registry of the Kingdom of X" site is verified by
someone that could reasonably be trusted to do so (i.e. not a national
CA of the republic of Y or the semi-internal CA of some private
megacorp).

With this recent transaction, the browser could show "GlobalSign" when
it should show "Google", two companies with very different security and
privacy reputations.  One would expect a blogger/blogblog domain to
have a Google-issued certificate while one would expect the opposite of
anything hosted outside the Alphabet group.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


DRAFT - BR Self Assessments

2017-03-29 Thread Kathleen Wilson via dev-security-policy
All,

As mentioned in the GDCA discussion[1], I would like to add a step to Mozilla's 
CA Inclusion/Update Request Process[2] in which the CA performs a 
self-assessment about their compliance with the CA/Browser Forum's Baseline 
Requirements.

A draft of this new step is here:
https://wiki.mozilla.org/CA:BRs-Self-Assessment

It includes a link to a template for CA's BR Self Assessment, which is a Google 
Doc:
https://docs.google.com/spreadsheets/d/1ni41Czial_mggcax8GuCBlInCt1mNOsqbEPzftuAuNQ/edit?usp=sharing

Here's how I am considering introducing this new step. Of course, this only 
applies to CAs who are requesting the Websites trust bit.

+ For the CAs currently in the queue for discussion, I would ask them to 
perform this BR Self Assessment before I would start their discussion.

+ For CAs currently in the Information Verification phase, I would ask them to 
perform this BR Self Assessment before we would continue with Information 
Verification.

+ For new requests, we would have the BR Self Assessment be the very first step.


I would greatly appreciate your feedback on adding this step to the root 
inclusion/update process, the wiki page draft, and the template.


Thanks,
Kathleen

[1] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/kB2JrygK7Vk/Kk7Le2F7CQAJ
[2] https://wiki.mozilla.org/CA



Re: Guang Dong Certificate Authority (GDCA) root inclusion request

2017-03-29 Thread Kathleen Wilson via dev-security-policy
All,

This request is to include the "GDCA TrustAUTH R5 ROOT" certificate, turn on 
the Websites trust bit, and enable EV treatment.

In order to help get this discussion moving again, I asked GDCA to provide a 
side-by-side comparison of the latest version of the BRs with their CP/CPS 
documents.

They provided this BR-self-assessment here:
https://bugzilla.mozilla.org/attachment.cgi?id=8851230

The documents that were evaluated in this self-assessment are available on the 
CA's website. 

All of these documents contain both the original text in Chinese, and the 
translation into English.

Document Repository: 
https://www.gdca.com.cn/customer_service/knowledge_universe/cp_cps/

GDCA CP v1.5:
https://www.gdca.com.cn/customer_service/knowledge_universe/cp_cps/CPCPS-GDCA1.5GDCA-CP-V1.5/

GDCA CPS v4.4:
https://www.gdca.com.cn/customer_service/knowledge_universe/cp_cps/CPCPS-GDCA4.4GDCA-CPS-V4.4/

GDCA EV CP v1.3:
https://www.gdca.com.cn/customer_service/knowledge_universe/cp_cps/CPCPS-GDCA-EV1.3GDCA-EV-CP-V1.3/

GDCA EV CPS v1.4:
https://www.gdca.com.cn/customer_service/knowledge_universe/cp_cps/CPCPS-GDCA-EV1.4GDCA-EV-CPS-V1.4/

I will greatly appreciate it if you all would take another look at this CA's 
request, review their self-assessment, and respond in this discussion to let me 
know if you believe that this CA has addressed all of your questions or 
concerns. 

Also, I would like to make this BR-self-assessment a standard part of Mozilla's 
root inclusion/change process. I will draft what that will look like, and start 
a separate discussion about it. 

Thanks,
Kathleen


Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Gervase Markham via dev-security-policy
On 29/03/17 15:35, Peter Kurrasch wrote:
> In other words, what used to be a trust anchor is now no better at
> establishing trust than the end-entity cert one is trying to validate or
> investigate (for example, in a forensic context) in the first place. I
> hardly think this redefinition of trust anchor improves the state of the
> global PKI and I sincerely hope it does not become a trend.

The trouble is, you want to optimise the system for people who make
individual personal trust decisions about individual roots. We would
like to optimise it for ubiquitous minimum-DV encryption, which requires
mechanisms permitting new market entrants on a timescale less than 5+ years.

Gerv


Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-29 Thread Florian Weimer via dev-security-policy
* Nick Lamb via dev-security-policy:

> In order for Symantec to reveal anybody's private keys they'd first
> need to have those keys, which is already, IIRC forbidden in the
> BRs.

I think this requirement was dropped because it makes it unnecessarily
difficult to report key compromises.  There used to be a time when CAs
demanded zero-knowledge proofs of key compromise (which can be
surprisingly hard to do with existing tools).  Fortunately, these
times are over, and CAs no longer categorically reject the submission
of compromised subscriber keys (although my sample is really small due
to my limited factorization capabilities).
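
The low-tech alternative to a zero-knowledge proof is simply to sign a
CA-chosen challenge with the allegedly compromised key - a rough sketch
using the Python 'cryptography' package (RSA assumed; the file name and
challenge text are hypothetical):

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Load the allegedly compromised subscriber key (hypothetical file name).
    with open("compromised_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Sign a challenge chosen by the CA so the evidence cannot be replayed.
    challenge = b"key compromise report, example.com, nonce 20170329"
    signature = key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

    # Anyone holding the certificate can verify with the public key; a valid
    # signature proves possession without the key ever being handed over.
    key.public_key().verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies - possession of the private key demonstrated")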


Re: Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites

2017-03-29 Thread Hector Martin via dev-security-policy

On 28/03/17 08:23, Peter Gutmann via dev-security-policy wrote:

Martin Heaps via dev-security-policy  
writes:


This topic is frustrating in that there seems to be a wide attempt by people
to use one form of authentication (DV TLS) to verify another form of
authentication (EV TLS).


The overall problem is that browser vendors have decreed that you can't have
encryption unless you have a certificate, i.e. a CA-supplied magic token to
turn the crypto on.  Let's Encrypt was an attempt to kludge around this by
giving everyone one of these magic tokens.  Like a lot of other kludges, it
had negative consequences...


It's not a kludge, though. Let's Encrypt is not (merely) a workaround 
for the fact that self-signed certificates are basically considered 
worthless. If it were, it wouldn't meet BR rules. Let's Encrypt actively 
performs validation of domains, and in that respect is as legitimate as 
any other DV CA.


We actually have *five* levels of trust here:

1. HTTP
2. HTTPS with no validation (self-signed or anonymous ciphersuite)
3. HTTPS with DV
4. HTTPS with OV
5. HTTPS with EV

These are technically objective levels of trust (mostly). There is also 
a technically subjective tangential attribute:


a. Is not a phishing or malicious site.

Let's Encrypt aims to obsolete levels 1 and 2 by making 3 ubiquitously 
accessible.


The problem is that browser vendors have historically treated trust as 
binary, confounding (3), (4), and (a), mostly because the ecosystem at 
the time made it hard to get (3) without meeting (a). They also 
inexplicably treated (2) as worse than (1), which is of course nonsense, 
but I guess was driven by some sort of backwards thinking that "if you 
have security at all, you'd better have good security" (or, 
equivalently: "normal people don't need security, and a mediocre attempt 
at security implies Bad Evil Things Are Happening").


With time, certificates have become more accessible, everyone has come 
to agree that we all need security, and with that, that thinking has 
become obsolete. Getting a DV cert for a phishing site was by no means 
hard before Let's Encrypt. Now that Let's Encrypt is here, it's trivial.



So it's now being actively exploited... how could anyone *not* see this
coming?  How can anyone actually be surprised that this is now happening?  As
the late Bob Jueneman once said on the PKIX list (over a different PKI-related
topic), "it's like watching a train wreck in slow motion, one freeze-frame at
a time".  It's pre-ordained what's going to happen, the most you can do is
artificially delay its arrival.


And this question should be directed at browser vendors. After years of 
mistakenly educating users that "green lock = good, safe, secure, 
awesome, please type in all your passwords", how could they *not* see 
this coming?



The end necessity is that the general public need to be educated [...]


Quoting Vesselin Bontchev, "if user education was going to work, it would have
worked by now".  And that was a decade ago.


This is strictly a presentation layer problem. We *know* what the 
various trust levels mean. We need to present them in a way that is 
*useful* to users.


Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5) 
full EV banner. (a) still correlates reasonably well with (4) and (5). 
HTTPS is no longer optional. All those phishing sites get a neutral URL 
bar. We've already educated users that their bank needs a green lock in 
the URL.




Re: Researcher Says API Flaw Exposed Symantec Certificates, Including Private Keys

2017-03-29 Thread okaphone.elektronika--- via dev-security-policy
Weird.

I expect there are no requirements for a CA to keep other people's private keys 
safe. After all handling those is definitely not part of being a CA. ;-)

CU Hans