Re: On the value of EV

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 11:09:44 PM UTC-6, Matt Palmer wrote:

> 
> Before that, though, a quick word from our sponsor, Elephant-Be-Gone Amulets
> of America, Inc.  No elephants in America, you say?  See, they're 100%
> effective!  Get yours today!

Of relevance on this point, I'm quite sure there are several elephants a few 
miles away at the Birmingham Zoo.  I'll grant you, it was a close thing, but 
after yesterday's election results, I believe the rest of the country has 
decided to let Alabama remain in America.  For now.  It would appear that the 
Amulets' influence is limited.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: On the value of EV

2017-12-13 Thread Matt Palmer via dev-security-policy
On Thu, Dec 14, 2017 at 12:21:12AM +, Tim Hollebeek via dev-security-policy 
wrote:
> If you look at the phishing data feeds and correlate them with EV
> certificates, you'll find out that Tim's "speculation" is right.

Ladies and gentlemen, this evening, for your viewing pleasure, the musical
number will be performed by The "Correlation is not Causation" Choir.

Before that, though, a quick word from our sponsor, Elephant-Be-Gone Amulets
of America, Inc.  No elephants in America, you say?  See, they're 100%
effective!  Get yours today!

- Matt



Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
Of course not - facetious or not, it’s similarly logically and empirically
flawed.

On Wed, Dec 13, 2017 at 7:29 PM Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I don't want to spend too much time digressing into a discussion of the
> same
> origin policy as a basis for a reasonable security model for the web, but I
> hope we could all agree on one thing that was abundantly obvious twenty
> years ago, and has only become more obvious:
>
> Anything originally introduced by Netscape is horribly broken and needs to
> be replaced.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy [mailto:dev-security-policy-
> > bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> > Matthew Hardeman via dev-security-policy
> > Sent: Wednesday, December 13, 2017 2:41 PM
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: On the value of EV
> >
> > On Tuesday, December 12, 2017 at 3:52:40 PM UTC-6, Ryan Sleevi wrote:
> >
> > > Yes. This is the foundation and limit of Web Security.
> > >
> > >
> > > https://en.wikipedia.org/wiki/Same-origin_policy
> > >
> > This is what is programmatically enforced. Anything else either
> > > requires new technology to technically enforce it (such as a new
> > > scheme), or is offloading the liability to the user.
> > >
> >
> > The notion that a sub-resource load of a non-EV sort should downgrade the
> > EV display status of the page is very questionable.
> >
> > I'm not sure we need namespace separation for EV versus non-EV
> > subresources.
> >
> > The cause for this is simple:
> >
> > It is the main page resource at the root of the document which causes
> each
> > sub-resource to be loaded.
> >
> > There is a "curatorship", if you will, engaged by the site author.  If
> > there are sub-resources loaded in, whether they are EV or not, it is the
> > root page author's place to "take responsibility" for the contents of the
> > DV or EV validated sub-resources that they cause to be loaded.
> >
> > Frankly, I reduce third party origin resources to zero on web applications
> > on systems I design where those systems have strong security implications.
> >
> > Of course, that strategy is probably not likely to be popular at Google,
> > which is, in a quite high percentage of instances, the target origin of
> > all kinds of sub-resources loaded in pages across the web.
> >
> > If anyone takes the following comment seriously, this probably spawns an
> > entirely separate conversation: I regard an EV certificate as more of a
> > code-signing of a given webpage / website and of the sub-resources, whether
> > or not same origin, as they descend from the root page load.


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
I don't want to spend too much time digressing into a discussion of the same
origin policy as a basis for a reasonable security model for the web, but I
hope we could all agree on one thing that was abundantly obvious twenty
years ago, and has only become more obvious:

Anything originally introduced by Netscape is horribly broken and needs to
be replaced.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of
> Matthew Hardeman via dev-security-policy
> Sent: Wednesday, December 13, 2017 2:41 PM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> On Tuesday, December 12, 2017 at 3:52:40 PM UTC-6, Ryan Sleevi wrote:
> 
> > Yes. This is the foundation and limit of Web Security.
> >
> >
> > https://en.wikipedia.org/wiki/Same-origin_policy
> >
> > This is what is programmatically enforced. Anything else either
> > requires new technology to technically enforce it (such as a new
> > scheme), or is offloading the liability to the user.
> >
> 
> The notion that a sub-resource load of a non-EV sort should downgrade the
> EV display status of the page is very questionable.
> 
> I'm not sure we need namespace separation for EV versus non-EV
> subresources.
> 
> The cause for this is simple:
> 
> It is the main page resource at the root of the document which causes each
> sub-resource to be loaded.
> 
> There is a "curatorship", if you will, engaged by the site author.  If
> there are sub-resources loaded in, whether they are EV or not, it is the
> root page author's place to "take responsibility" for the contents of the
> DV or EV validated sub-resources that they cause to be loaded.
> 
> Frankly, I reduce third party origin resources to zero on web applications
> on systems I design where those systems have strong security implications.
> 
> Of course, that strategy is probably not likely to be popular at Google,
> which is, in a quite high percentage of instances, the target origin of all
> kinds of sub-resources loaded in pages across the web.
> 
> If anyone takes the following comment seriously, this probably spawns an
> entirely separate conversation: I regard an EV certificate as more of a
> code-signing of a given webpage / website and of the sub-resources, whether
> or not same origin, as they descend from the root page load.


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
If you look at where the HTTPS phishing certificates come from, they come
almost entirely from Let's Encrypt and Comodo.

This is perhaps the best argument in favor of distinguishing between CAs
that care about phishing and those that don't.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Peter
> Gutmann via dev-security-policy
> Sent: Wednesday, December 13, 2017 4:23 PM
> To: Gervase Markham ; mozilla-dev-security-pol...@lists.mozilla.org;
> Tim Shirley 
> Subject: Re: On the value of EV
> 
> Tim Shirley via dev-security-policy writes:
> 
> >But regardless of which (or neither) is true, the very fact that EV
> >certs are rarely (never?) used on phishing sites
> 
> There's no need:
> 
> https://info.phishlabs.com/blog/quarter-phishing-attacks-hosted-https-
> domains
> 
> In particular, "the rate at which phishing sites are hosted on HTTPS pages
> is rising significantly faster than overall HTTPS adoption".
> 
> It's like SPF and site security seals, adoption by spammers and crooks was
> ahead of adoption by legit users because the bad guys have more need of a
> signalling mechanism like that than anyone else.
> 
> Peter.


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
If you look at the phishing data feeds and correlate them with EV certificates,
you'll find out that Tim's "speculation" is right.

In my experience, it's generally a bad idea to disagree with Tim Shirley.

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Tim
> Shirley via dev-security-policy
> Sent: Wednesday, December 13, 2017 3:35 PM
> To: r...@sleevi.com
> Cc: mozilla-dev-security-pol...@lists.mozilla.org; Gervase Markham
> 
> Subject: Re: On the value of EV
> 
> No, I’m not presuming that; that’s why I put the ? after never.  I’ve never 
> heard
> of any, so it’s possible it really is never.  But I’m pretty confident in at 
> least the
> “rare” part because I’m sure if you knew of any you’d be sharing examples.  ;)
> 
> 
> From: Ryan Sleevi 
> Reply-To: "r...@sleevi.com" 
> Date: Wednesday, December 13, 2017 at 5:03 PM
> To: Tim Shirley 
> Cc: Gervase Markham , "mozilla-dev-security-pol...@lists.mozilla.org" 
> Subject: Re: On the value of EV
> 
> "The very fact that EV certs are rarely (never?) used" is, of course,
> unsubstantiated with data. It's a logically flawed argument - you're presuming
> that absence of evidence is proof of non-existence.


Re: [FORGED] Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:

> >Sitting on my desk are not less than 3 reference designs.  At least two of
> >them have decent hardware RNG capabilities.  
> 
> My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
> which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
> something Unix-like.  However, in all cases the RNG system is pretty secure,
> you preload a fixed seed at manufacture and then get just enough changing data
> to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
> the very useful taskRegsGet() for which the docs tell you "self-examination is
> not advisable as results are unpredictable", which is perfect).

I agree - and this same technique (the use of a stateful deterministic
pseudo-random number generator seeded with adequate entropy) is what I was
proposing be utilized to generate the random data needed for EC signatures,
ECDHE exchanges, etc.

This mechanism is only safe if that seeding process actually happens under
secure circumstances, but for many devices and device manufacturers that can
be assured.

> 
> In all of these cases, the device is going to be a safer place to generate
> keys than the CA, in particular because (a) the CA is another embedded
> controller somewhere so probably no better than the target device and (b)
> there's no easy way to get the key securely from the CA to the device.

Agreed, as I mentioned the secure transport aspect is essential for remote key 
generation to be a secure option at any level.

> 
> However, there's also an awful lot of IoS out there that uses shared private
> keys (thus the term "the lesser-known public key" that was used at one
> software house some years ago).  OTOH those devices are also going to be
> running decade-old unpatched kernels with every service turned on (also years-
> old binaries), XSS, hardcoded admin passwords, and all the other stuff that
> makes the IoS such a joy for attackers.  So in that case I think a
> less-than-good private key would be the least of your worries.

So, the platforms I'm talking about are the kind of stuff that sit somewhere in 
the middle of this.  They're intended for professional consumption into the 
device development cycle, intended to be tweaked to the specifics of the use 
case.  Often, the "manufacturer" makes very few changes to the hardware 
reference design, fewer still to the software reference design -- sometimes as 
shallow as branding -- and ships.

A lot of platforms with great potential at the hardware level and shockingly 
under-engineered, minimally designed software stacks are coming to prominence.  
They're cheap and in the right hands can be very effective.  Unfortunately, 
some of these reference software stacks encourage good enough practice that 
they won't be quickly caught out -- no pre-built single shared private key, yet 
a first-boot random initialized with a script that seeds a PRNG with uptime 
microseconds, clock ticks since reset, or something like that, which across 
that line will be a very narrow band of values for a given first boot of a 
given reference design and set of boot scripts.

Nevertheless, many of these stacks do at least minimize extraneous services and 
the target customers (pseudo-manufacturers to manufacturers) have gotten savvy 
to ancient kernels and known major remotely exploitable holes.  We could call 
it the Internet of DeceptiveInThatImSomewhatShittyButHideItAtFirstGlance.

> 
> So the bits we need to worry about are what falls between "full of security
> holes anyway" and "things done right".  What is that, and does it matter if
> the private keys aren't perfect?

Agreed, and I attempted to address the first half of that just above -- my 
"Internet Of ." description.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 6:23 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > I realize I'm doing a poor job at articulating the profound risks,
> > perhaps because they're best not for e-mail discussions, but these
> > problems are not unique to EV, and the solutions are unquestionably worse
> > (for freedom and privacy). It is in this holistic understanding -
> > including regulatory risks of mandatory EV and the like - that it's clear
> > that EV isn't "just" something a site opts into - it has a non-trivial,
> > detrimental effect on users' day to day browsing, on the way in which the
> > Internet is maintained, in efforts to secure it, and to the underlying
> > privacy and security. This isn't hyperbole - this is something I think
> > most browsers are profoundly aware of.
>
> I think you've done an excellent job of specifically pointing to other
> risks and concerns.  In this last paragraph, however, it seems you've
> alluded to non-specific others of even graver consequence and followed up
> neatly with what feels like a "just trust me on this one."  Well, maybe we
> should, but...  it's just inconsistent.
>

That's certainly not my intent. Rather, I'm trying to highlight that your
'simple' solution is really not quite simple, and the nuances and political
implications of that are non-trivial. Further, trying to capture all of
that nuanced conversation in an email thread, particularly when some
parties actively profit from exploiting misinterpretations, is not an ideal
medium. I tried to provide related links so that you can read and
understand how, in other contexts of governance and security, such
arguments have been captured, and some of the non-obvious, significant
implications of them. I think most browsers are aware of that nuance
because it's an existential awareness about the role they play in the
security of users, and trying to capture all of the carefully considered
philosophy and security in an email thread is, well, not exactly productive
either.


> EV certificates are exclusively a business or organization matter.


No, they aren't. That's the thing - for a number of reasons, they aren't.
Or more aptly, "business or organization" themselves are not neat and
compact things, nor is it safe to presume that what is true today is true
tomorrow.


> The validation process for EV already identifies an authorized party for
> directing issuance.  Update the standards to require that said individual
> be willing to be identified in the certificate as well as identified as to
> the jurisdiction in which his identity validation occurred.  Require that
> the CA enter into a contractual relationship with that individual which
> specifies unpalatable consequences for the individual if misrepresentations
> are made.  This creates a liability that has a cash value that can interest
> "ambulance chasers" as well as law enforcement -- who generally need to
> have a dollar figure value of harm in order to pursue a case which hinges
> on the financial.
>

I'm trying to be respectful, but I think this view of law enforcement
does not hold up with how the Internet works, nor how the real world works,
nor how financial risk works. I realize this again sounds like "trust me",
because it's hard to point out the experiences day to day of people dealing
with this, or to highlight the many flaws and abuses. And even then, I
don't think it's necessarily a reasonable goal to expect you to be
convinced - those who understand recognize it as such.


> No one's privacy is being improperly compromised.  We all give up some of
> our individual privacy in day to day corporate activities attendant to our
> jobs.  If part of our role involves interfacing with external entities like
> CAs, and making representations on behalf of the business, we are already
> exposed.


> No one is twisting the business or individual to comply.  There's this
> benefit of getting an EV certificate available to them if and only if they
> comply with the requirements.
>

This is not true. CAs tried to get PCI-DSS to require this for accepting
credit cards online. And naively, we might say, "That sounds good, I'm sure
that would prevent fraud" - but it also promotes and enables abuse. Or we
can look at several governments attempting to mandate EV - whether at a
government level, at a 'doing business with government' level, or at a
'doing any business with our citizens' level - which precludes a number of
participants from engaging in online secure transactions or communications.
There have even been attempts to require "If you're within our jurisdiction,
you MUST use an EV cert" - which is balderdash!

I encourage you to revisit the resources I linked you to, spend an
afternoon reading them. The discussions around ICANN, RDAP, and WHOIS
privacy are very insightful in understanding the tradeoffs to privacy.
Legislation such as eIDAS, particularly if you can find counsel to walk you

Re: [FORGED] Re: CA generated keys

2017-12-13 Thread Peter Gutmann via dev-security-policy
Matthew Hardeman via dev-security-policy writes:

>In principle, I support Mr. Sleevi's position, practically I lean toward Mr.
>Thayer's and Mr. Hollebeek's position.

I probably support at least one of those, if I can figure out who's been
quoted as saying what.

>Sitting on my desk are not less than 3 reference designs.  At least two of
>them have decent hardware RNG capabilities.  

My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
which has hardware RNGs.  Or an OS, for that matter, at least in the sense of
something Unix-like.  However, in all cases the RNG system is pretty secure,
you preload a fixed seed at manufacture and then get just enough changing data
to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
the very useful taskRegsGet() for which the docs tell you "self-examination is
not advisable as results are unpredictable", which is perfect).

In all of these cases, the device is going to be a safer place to generate
keys than the CA, in particular because (a) the CA is another embedded
controller somewhere so probably no better than the target device and (b)
there's no easy way to get the key securely from the CA to the device.

However, there's also an awful lot of IoS out there that uses shared private
keys (thus the term "the lesser-known public key" that was used at one
software house some years ago).  OTOH those devices are also going to be
running decade-old unpatched kernels with every service turned on (also years-
old binaries), XSS, hardcoded admin passwords, and all the other stuff that
makes the IoS such a joy for attackers.  So in that case I think a
less-than-good private key would be the least of your worries.

So the bits we need to worry about are what falls between "full of security
holes anyway" and "things done right".  What is that, and does it matter if
the private keys aren't perfect?

Peter.


Re: On the value of EV

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 5:08:05 PM UTC-6, Matt Palmer wrote:

> > There is a "curatorship", if you will, engaged by the site author.  If
> > there are sub-resources loaded in, whether they are EV or not, it is the
> > root page author's place to "take responsibility" for the contents of the
> > DV or EV validated sub-resources that they cause to be loaded.
> 
> Oh, if only that were true -- then every site that embedded a third-party ad
> network that served up malware could be done under the CFAA, and the world
> would be a much, much better place.
> 
> But it isn't, and your "curatorship" model of the web, whilst a lovely idea,
> is completely unsupported by reality.

I concur that, today, far fewer parties act in accordance with such a model 
than should.

But that could change any time.  As the web platform's general capabilities 
expand, more and worse abuses driven by site authors are going to reshape that 
paradigm.

It seems inevitable.  We've got frameworks for burning browsers' CPU and energy 
to mine alt-coins in the background while you're served up cat memes.

It has been a presumption, up to this point, that a person visiting a website 
has agreed within non-destructive limits to have their browser/computer perform 
whatever tasks the website says to.

The kinds of abuses evolving won't permit such an assumption moving forward.

A modern website is software like any other.  While it lives in a sandbox, it's 
still software driving a computer.

People have and do go to prison, even for terms approximating their life spans, 
for the creation and distribution of malware.

There's no real technological or logical reason that a sufficiently complex 
website is any different.


Re: On the value of EV

2017-12-13 Thread Peter Gutmann via dev-security-policy
Tim Shirley via dev-security-policy writes:

>But regardless of which (or neither) is true, the very fact that EV certs are
>rarely (never?) used on phishing sites

There's no need:

https://info.phishlabs.com/blog/quarter-phishing-attacks-hosted-https-domains

In particular, "the rate at which phishing sites are hosted on HTTPS pages is
rising significantly faster than overall HTTPS adoption".

It's like SPF and site security seals, adoption by spammers and crooks was
ahead of adoption by legit users because the bad guys have more need of a
signalling mechanism like that than anyone else.

Peter.


Re: On the value of EV

2017-12-13 Thread Matt Palmer via dev-security-policy
On Wed, Dec 13, 2017 at 01:40:35PM -0800, Matthew Hardeman via 
dev-security-policy wrote:
> I'm not sure we need namespace separation for EV versus non-EV subresources.
> 
> The cause for this is simple:
> 
> It is the main page resource at the root of the document which causes each
> sub-resource to be loaded.
> 
> There is a "curatorship", if you will, engaged by the site author.  If
> there are sub-resources loaded in, whether they are EV or not, it is the
> root page author's place to "take responsibility" for the contents of the
> DV or EV validated sub-resources that they cause to be loaded.

Oh, if only that were true -- then every site that embedded a third-party ad
network that served up malware could be done under the CFAA, and the world
would be a much, much better place.

But it isn't, and your "curatorship" model of the web, whilst a lovely idea,
is completely unsupported by reality.

- Matt



Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
I'm saying that even 'rarely' is presumptive - that is, it presumes that the
lack of public evidence is equivalent to a lack of occurrence.

As to sharing examples, it presumes that the point of discussion is whether
EV is an effective mitigator of phishing, which is a logically flawed
viewpoint that assumes correlation, if any, is equivalent to causation, or
that the correlation is meaningfully significant for the discussion of
security.

If the concern is phishing, we know more effective mitigators exist - both
in terms of technology and user experience - so the continued focus on
certificates, particularly EV, whether as a primary or a 'belt and
suspenders' approach to mitigation is misguided.

If the concern is fraud, then we already have the existence proof to show
the fundamental flaw in assuming a fraud mitigation. An exploit doesn't
have to be used in the wild for it to be an exploit. Although that is
itself its own topic of discussion - how vendors approach exploits.

Regardless, it can be categorically stated that it does not prevent fraud.

On Wed, Dec 13, 2017 at 5:35 PM, Tim Shirley  wrote:

> No, I’m not presuming that; that’s why I put the ? after never.  I’ve
> never heard of any, so it’s possible it really is never.  But I’m pretty
> confident in at least the “rare” part because I’m sure if you knew of any
> you’d be sharing examples.  ;)
>
>
>
>
>
> *From: *Ryan Sleevi 
> *Reply-To: *"r...@sleevi.com" 
> *Date: *Wednesday, December 13, 2017 at 5:03 PM
> *To: *Tim Shirley 
> *Cc: *Gervase Markham , "mozilla-dev-security-policy@lists.mozilla.org" 
> *Subject: *Re: On the value of EV
>
>
>
> "The very fact that EV certs are rarely (never?) used" is, of course,
> unsubstantiated with data. It's a logically flawed argument - you're
> presuming that absence of evidence is proof of non-existence.
>


Re: On the value of EV

2017-12-13 Thread Matt Palmer via dev-security-policy
On Wed, Dec 13, 2017 at 05:58:38PM +, Tim Shirley via dev-security-policy 
wrote:
> So many of the arguments made here, such as this one, as well as the
> recent demonstrations that helped start this thread, focus on edge cases. 
> And while those are certainly valuable to consider, they obscure the fact
> that “Green Bar” adds value in the mainstream use cases.  If we were
> talking about how to improve EV, then by all means focus on the edge
> cases.  The thing I don’t see in all this is a compelling argument to take
> away something that’s useful most of the time.

That assumes it's useful most of the time.  I don't believe there's evidence
that the EV UI is -- all the rigorous research I'm aware of shows that the
EV UI is rarely "useful" to users.

Even in the rare case of a user that knows to look for the EV indication,
the information that the EV UI presents is demonstrably insufficient for the
purposes you wish to use it for.  Anyone who wants to use the information
present in an EV certificate to make trust decisions needs to dig into the
cert info screen to determine *which* "FooBar Holdings Inc." they're talking
to when they visit https://example.com.
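A toy sketch in Python makes the point concrete (every subject value below is invented, not taken from any real certificate): two distinct legal entities can yield the identical "Org [Country]" string the EV UI displays, and disambiguating them requires exactly the fields buried in the cert viewer.

```python
# Hypothetical sketch: all subject fields below are illustrative.
# Two distinct legal entities collapse to the same EV display string;
# only the extra fields in the cert viewer tell them apart.

def ev_display(subject: dict) -> str:
    """The 'Organisation [Country]' string browsers show for EV."""
    return f"{subject['O']} [{subject['C']}]"

payment_processor = {
    "O": "FooBar Holdings Inc.", "C": "US",
    "jurisdictionStateOrProvince": "Delaware", "serialNumber": "1111111",
}
lookalike_entity = {
    "O": "FooBar Holdings Inc.", "C": "US",
    "jurisdictionStateOrProvince": "Kentucky", "serialNumber": "2222222",
}

# Identical chrome UI string...
assert ev_display(payment_processor) == ev_display(lookalike_entity)
# ...yet different legal entities underneath.
assert payment_processor != lookalike_entity
```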

So, the current situation is that the EV UI is useless for *everyone*. 
There's two options to "fix" it:

* Insert even more information into the EV "green bar", for the benefit of
  the tiny fraction of users who know and care what that information
  actually is; or

* Remove it as being insufficiently valuable to users-in-aggregate.

I have my doubts that there has ever been a situation in which adding more
information to a UI element that users already ignore and don't understand
has improved user experience, so I'm not expecting stuffing
jurisdictionOfIncorporation, registration numbers, and all manner of other
stuff into the green bar is going to improve matters.  So I'm in favour of
removing the UI element entirely.

As others have mentioned, there's no reason why, if browsers were to remove
the "green bar", EV certificates need to necessarily go away[1].  The tiny
subset of users who wish to examine the identity of the organisation behind
the site that sent them a form (not necessarily the same organisation as the
one they'll be sending the form data to, as Nick Lamb has explained), they
can open the cert viewer and dig in.  It's simply that there's no compelling
evidence that putting an organisation name and country in a green bar is
sufficiently valuable for users-in-aggregate to be worth keeping it.

- Matt

[1] CAs are fine to keep selling EV certificates, and marketing them in
whatever way they see fit, if they like.  OV certs are still a thing
despite conveying no UI advantage, so there's no more reason to believe
EV will cease to be a thing just because browsers remove the green bar,
than there is evidence that the EV UI is useful.



Re: On the value of EV

2017-12-13 Thread Tim Shirley via dev-security-policy
No, I’m not presuming that; that’s why I put the ? after never.  I’ve never 
heard of any, so it’s possible it really is never.  But I’m pretty confident in 
at least the “rare” part because I’m sure if you knew of any you’d be sharing 
examples.  ;)


From: Ryan Sleevi 
Reply-To: "r...@sleevi.com" 
Date: Wednesday, December 13, 2017 at 5:03 PM
To: Tim Shirley 
Cc: Gervase Markham , 
"mozilla-dev-security-pol...@lists.mozilla.org" 

Subject: Re: On the value of EV

"The very fact that EV certs are rarely (never?) used" is, of course, 
unsubstantiated with data. It's a logically flawed argument - you're presuming 
that absence of evidence is proof of non-existence.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 5:19 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> There are also the really cool hash-based revocation ideas that actually
> do help
> even against active attackers on the same network.  I really wish those
> ideas got more serious attention.
>
> -Tim
>

I'm not sure it's fair to conclude that the lack of implementation is
equivalent to lack of serious attention :)

But there's also a host of systemic policy issues with revocation that
would need to be tackled. Which is not to say we shouldn't explore them -
but that they may not be the long pole or the tricky part, so we may want
to make sure we're holistically prioritizing the right stuff :)


RE: On the value of EV

2017-12-13 Thread Tim Hollebeek via dev-security-policy
There are also the really cool hash-based revocation ideas that actually do help
even against active attackers on the same network.  I really wish those ideas
got more serious attention.
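For anyone unfamiliar, the gist of those proposals (CRLite-style filters are one incarnation) is to push a compact structure of hashed revoked-certificate identifiers to clients ahead of time, so the revocation check is purely local. A toy Python sketch, with a plain set standing in for the compressed filter and made-up issuer/serial values:

```python
# Toy sketch of hash-based revocation: a plain set stands in for the
# compact filter (e.g. a Bloom-filter cascade) that would actually be
# shipped to clients.  Issuer names and serials are made up.
import hashlib

def cert_id(issuer: str, serial: str) -> bytes:
    """Hash an (issuer, serial) pair into a fixed-size identifier."""
    return hashlib.sha256(f"{issuer}|{serial}".encode()).digest()

# Pushed to the client out of band, before any TLS handshake.
revoked_filter = {cert_id("Example CA", "03:9f:aa")}

def is_revoked(issuer: str, serial: str) -> bool:
    # Purely local lookup: an on-path attacker cannot block or delay it.
    return cert_id(issuer, serial) in revoked_filter

assert is_revoked("Example CA", "03:9f:aa")
assert not is_revoked("Example CA", "01:00:01")
```

Because the lookup never touches the network, the soft-fail-versus-hard-fail dilemma discussed elsewhere in this thread simply doesn't arise.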

-Tim

> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+tim.hollebeek=digicert@lists.mozilla.org] On Behalf Of Tim
> Shirley via dev-security-policy
> Sent: Wednesday, December 13, 2017 2:47 PM
> To: Gervase Markham ; mozilla-dev-security-
> pol...@lists.mozilla.org
> Subject: Re: On the value of EV
> 
> As I understand it, Adam’s argument there was that to get value out of a
> revoked certificate, you need to be between the user and the web server so
> you can direct the traffic to your web server, so you’re already in position 
> to
> also block revocation checks.  I don’t think that maps here because a lot of 
> the
> scenarios EV assists with don’t involve an attacker being in that position.
> 
> I know the question has been raised before as to why most phishing sites use
> DV.  Some argue it’s because OV/EV are harder for people with bad intent to
> obtain.  Some argue it’s because DV is more ubiquitous across the web and
> thus more ubiquitous on phishing sites.  But regardless of which (or neither) 
> is
> true, the very fact that EV certs are rarely (never?) used on phishing sites 
> is in
> and of itself providing protection today to those of us who pay attention to 
> it.
> I’d argue that alone means the seat belt isn’t worthless, and we should focus
> on building better seat belts rather than cutting them out and relying on the
> air bag alone.
> 
> 
> 
> On 12/13/17, 3:46 PM, "Gervase Markham via dev-security-policy" <dev-security-pol...@lists.mozilla.org> wrote:
> 
> On 13/12/17 11:58, Tim Shirley wrote:
> > So many of the arguments made here, such as this one, as well as the
> recent demonstrations that helped start this thread, focus on edge cases.  And
> while those are certainly valuable to consider, they obscure the fact that
> “Green Bar” adds value in the mainstream use cases.  If we were talking about
> how to improve EV, then by all means focus on the edge cases.  The thing I
> don’t see in all this is a compelling argument to take away something that’s
> useful most of the time.
> 
> My concern with this argument is that it's susceptible to the criticism
> that Adam Langley made of revocation checking:
> https://www.imperialviolet.org/2012/02/05/crlsets.html
> 
> "So [EV identity is] like a seat-belt that snaps when you crash. Even
> though it works 99% of the time, it's worthless because it only works
> when you don't need it."
> 
> Gerv

Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 4:46 PM, Tim Shirley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As I understand it, Adam’s argument there was that to get value out of a
> revoked certificate, you need to be between the user and the web server so
> you can direct the traffic to your web server, so you’re already in
> position to also block revocation checks.  I don’t think that maps here
> because a lot of the scenarios EV assists with don’t involve an attacker
> being in that position.
>

Well, no, that's misunderstanding the point. The revocation checks
themselves are the placebo - because they are not hard fail, you don't get
benefit from doing them, precisely because they can be blocked. However,
they cannot be hard fail, because the failures are expected under
non-adversarial models. So because hard fail is not viable (and Rob's
recent sharing of OCSP status of CAs truly captures that better than a
thousand words on the topic), nor even if CAs had sound infrastructure is
it viable on the "internet at large" (that is, CAs are definitely the
worst, but even if they were perfect, it'd still be unacceptably bad), then
revocation doesn't provide value.
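The soft-fail failure mode is easy to model. An illustrative sketch (not browser code): the only party positioned to serve you a revoked certificate is also positioned to drop the OCSP traffic, and a client that treats a timeout as "unknown, proceed" accepts the certificate anyway.

```python
# Illustrative model (not browser code) of why soft-fail revocation
# checking fails exactly when it is needed: the attacker serving a
# revoked certificate can also drop the OCSP query.

def ocsp_status(cert: dict, attacker_blocks_ocsp: bool):
    if attacker_blocks_ocsp:
        return None           # query times out: status unknown
    return cert["status"]     # "good" or "revoked"

def soft_fail_accepts(cert: dict, attacker_blocks_ocsp: bool) -> bool:
    # Soft fail: only a definitive "revoked" answer aborts the handshake.
    return ocsp_status(cert, attacker_blocks_ocsp) != "revoked"

stolen = {"status": "revoked"}
assert not soft_fail_accepts(stolen, attacker_blocks_ocsp=False)  # benign net
assert soft_fail_accepts(stolen, attacker_blocks_ocsp=True)       # MITM wins
```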

The same argument being applied to EV here is that the presumptive value of
EV is that it either provides technical mitigations OR the display to the
user provides sufficient value to allow for an informed decision. However,
the technical mitigations are non-existant, and the value of the display is
not sufficient enough on an adversarial model. In both cases, much like
hardfail, the structural deficiencies mean that the system is unreliable,
and because the system is unreliable, it should not be relied upon during
critical times. It is the weak link.

I know for non-security folks, this seems like throwing the baby out with
the bathwater, because it's "mostly good". But much like broken messaging
encryption is still broken, even if it's encrypted, the flaws of EV are
systemic, and mean you cannot and should not rely upon it (and of course
the EVGs disclaim reliance for a number of cases), and so the whole notion
of it is flawed. If you cannot rely on it, why rely on it?


>
> I know the question has been raised before as to why most phishing sites
> use DV.  Some argue it’s because OV/EV are harder for people with bad
> intent to obtain.  Some argue it’s because DV is more ubiquitous across the
> web and thus more ubiquitous on phishing sites.  But regardless of which
> (or neither) is true, the very fact that EV certs are rarely (never?) used
> on phishing sites is in and of itself providing protection today to those
> of us who pay attention to it.  I’d argue that alone means the seat belt
> isn’t worthless, and we should focus on building better seat belts rather
> than cutting them out and relying on the air bag alone.
>

"The very fact that EV certs are rarely (never?) used" is, of course,
unsubstantiated with data. It's a logically flawed argument - you're
presuming that absence of evidence is proof of non-existence. The irony is this
is the same argument made 10 years ago with certificates in general
(flashback proof -
http://voices.washingtonpost.com/securityfix/2006/02/the_new_face_of_phishing_1.html
) - that certificates prevented phishing, because you never saw phishing on
HTTPS.

Such arguments are both logically unsound and do not reflect on what
phishing/fraud are used for, or in general, how security works. It also
underscores the continued user-hostile message - which is to say, the user
is responsible for inspecting this UI, 100% of the time, if they want to be
safe from phishing. It doesn't align with the HCI research on positive
indicators, nor is it a fair or reasonable thing to ask of the average
user.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 4:40 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tuesday, December 12, 2017 at 3:52:40 PM UTC-6, Ryan Sleevi wrote:
>
> > Yes. This is the foundation and limit of Web Security.
> >
> > https://en.wikipedia.org/wiki/Same-origin_policy
> >
> > This is what is programatically enforced. Anything else either requires
> new
> > technology to technically enforce it (such as a new scheme), or is
> > offloading the liability to the user.
> >
>
> The notion that a sub-resource load of a non-EV sort should downgrade the
> EV display status of the page is very questionable.
>

This is what Opera did until recently, what early versions of Chrome
have done, and what some Safari engineers have proposed. I agree, it's very
questionable - that's part of why I was happy to see it removed - but it's
not uncontroversial.


> I'm not sure we need namespace separation for EV versus non-EV subresources.
>

Without that separation, then you cannot be safe against EV downgrade
attacks (even if ever-vigilant as an end-user, there are ways to subvert
the state while maintaining the UI). Similarly, you cannot be safe against
"EV confusables".

Under both attacks, the normal rejoinder is "The user should look" - but
what they're looking at, they can't trust. The rejoinder from this thread
is "Yes, but if most of the time they can trust, it's not so bad" - to
which I reply, yes, it is bad, if they're trusting something they can't
trust and just getting lucky that a stopped clock is right twice a day.


> Frankly, I reduce third party origin resources to zero on web applications
> on systems I design where those systems have strong security implications.
>
> Of course, that strategy is probably not likely to be popular at Google,
> which is, in a quite high percentage of instances, the target origin of all
> kinds of sub-resources loaded in pages across the web.
>

Nope. This is incredibly popular, both in Google at others. However, rather
than needing to reduce those resources to zero, you can use things like
Subresource Integrity to provide assurances.
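For reference, the value SRI puts in an HTML integrity="..." attribute is just an algorithm-prefixed, base64-encoded digest of the exact resource bytes (sha384 is typical). A minimal Python sketch of computing one:

```python
# Sketch: computing the value for an HTML integrity="..." attribute per
# the Subresource Integrity spec - a digest of the exact resource
# bytes, base64-encoded and prefixed with the algorithm name.
import base64
import hashlib

def sri_value(resource: bytes, alg: str = "sha384") -> str:
    digest = hashlib.new(alg, resource).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

# Hypothetical third-party script (URL and contents are illustrative):
script = b"console.log('hello');"
tag = (f'<script src="https://third.party/app.js" '
       f'integrity="{sri_value(script)}" crossorigin="anonymous"></script>')
print(tag)
```

If the third party later serves different bytes, the digest no longer matches and the browser refuses to execute the resource.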

But you're correct in that it's missing the point of the discussion - those
resources don't affect state, and the users assurances - even of the UI
shown - has technical limits that prevent it from meeting user expectations.


Re: On the value of EV

2017-12-13 Thread Gijs Kruitbosch via dev-security-policy

On 13/12/2017 14:50, Tim Shirley wrote:

> I guess I’m also having a hard time appreciating how the presence of this
> information is a “cost” to users who don’t care about it.  For one thing, it’s
> been there for years in all major browsers, so everyone has at least been
> conditioned to its presence already.  But how is someone who isn’t interested
> in the information in the first place being confused by it?  And if the mere
> presence of an organization name is creating confusion,
In addition to what Ryan said, speaking as an engineer who's worked on 
the Firefox URL bar, the EV indicator also has a non-trivial cost in 
terms of implementation/UI-design complexity.


On a purely practical level, displaying a longer EV entity string 
implies less of the actual URL string is visible to the user, which in 
itself is a risk for phishing.



> then surely a URL with lots of words and funny characters in it would be
> confusing people too, and we should remove that too, right?


I know you're speaking in jest, but yes. This is exactly why Safari 
doesn't show the URL path/querystring etc. in the URL bar when the URL 
isn't being edited (only the domain and/or EV name). We may or may not 
end up doing something similar (ie lose path/querystring/hash) in 
Firefox, but either way there are definitely reasonable arguments for 
doing something along those lines.


Going further off-topic, as people have already implied, perhaps we want 
other trust UI that provides more meaningful information to users about 
the trust status of a page, that is easier to understand than a URL or 
scheme/hostname/port combination. But we don't need to block removing EV 
UI on that if there's consensus that EV UI doesn't add (sufficient) 
value to remain in browsers.


~ Gijs


Re: On the value of EV

2017-12-13 Thread Tim Shirley via dev-security-policy
As I understand it, Adam’s argument there was that to get value out of a 
revoked certificate, you need to be between the user and the web server so you 
can direct the traffic to your web server, so you’re already in position to 
also block revocation checks.  I don’t think that maps here because a lot of 
the scenarios EV assists with don’t involve an attacker being in that position.

I know the question has been raised before as to why most phishing sites use 
DV.  Some argue it’s because OV/EV are harder for people with bad intent to 
obtain.  Some argue it’s because DV is more ubiquitous across the web and thus 
more ubiquitous on phishing sites.  But regardless of which (or neither) is 
true, the very fact that EV certs are rarely (never?) used on phishing sites is 
in and of itself providing protection today to those of us who pay attention to 
it.  I’d argue that alone means the seat belt isn’t worthless, and we should 
focus on building better seat belts rather than cutting them out and relying on 
the air bag alone.

 

On 12/13/17, 3:46 PM, "Gervase Markham via dev-security-policy" 
<dev-security-pol...@lists.mozilla.org> wrote:

On 13/12/17 11:58, Tim Shirley wrote:
> So many of the arguments made here, such as this one, as well as the 
recent demonstrations that helped start this thread, focus on edge cases.  And 
while those are certainly valuable to consider, they obscure the fact that 
“Green Bar” adds value in the mainstream use cases.  If we were talking about 
how to improve EV, then by all means focus on the edge cases.  The thing I 
don’t see in all this is a compelling argument to take away something that’s 
useful most of the time.

My concern with this argument is that it's susceptible to the criticism
that Adam Langley made of revocation checking:

https://www.imperialviolet.org/2012/02/05/crlsets.html

"So [EV identity is] like a seat-belt that snaps when you crash. Even
though it works 99% of the time, it's worthless because it only works
when you don't need it."

Gerv




Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 4:28 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wednesday, December 13, 2017 at 2:46:10 PM UTC-6, Gervase Markham wrote:
>
> > My concern with this argument is that it's susceptible to the criticism
> > that Adam Langley made of revocation checking:
> > https://www.imperialviolet.org/2012/02/05/crlsets.html
> >
> > "So [EV identity is] like a seat-belt that snaps when you crash. Even
> > though it works 99% of the time, it's worthless because it only works
> > when you don't need it."
>
> This aspect considers only the potential downsides of improper trust and
> confidence in the users' mind given improper use of a look-alike
> certificate leading to a phishing exploit or similar.
>
> There is a benefit of EV certificates that deserves consideration:
>
> There are events, many per day, in which the additional confidence to
> engage in commerce with a given website is properly enhanced by the user's
> examination and assessment of the EV presentation.  Any such instance is a
> tangible benefit, deserving -- I believe -- some weight in the discussion
> even if there were the rare negative outcome.
>

I think the flaw in this argument is that 'properly enhanced' has not been
demonstrated. It's quite literally the "works 99% of the time" case being
referred to here. Whether or not it's reasonable for the user to rely on
that information is rather the key.


> Just spitballing, one enhancement to the EV issuance might be to require
> that upon validation, the proposed EV entity name and jurisdiction name
> proposed to be included in the certificate have a 30 days
> publish-for-opposition embargo.  It would arise each time a new EV
> validation is performed, including for EV validation renewals.  Further
> certificates could issue or re-issue within the validation life time.  A
> natural service that would arise from this would be that CAs would
> presumably police this publish-for-opposition database for their own
> customers' EV names and name-a-likes, working with their existing customer
> to object to and stop issuance of the (presumed) phishing certificate
> request.
>
> Another thing that should not be problematic for legitimate businesses --
> even gigantic ones -- is to require that EV certificate qualification and
> validation process identify a strongly identified individual (government
> photo ID, etc) be explicitly authorized by the applying entity as
> authorized to request EV certificates for the entity -- and furthermore --
> document within the certificate the name, jurisdiction, and nature of
> documents verifying that person's identification.
>
> Would Ian have requested a certificate for Stripe, Inc. if his full name
> were also in that certificate?  Maybe, maybe not.  But anyone investigating
> that certificate would need do no extra work to know what individual they
> should start communicating with to further discern the history and use of
> that certificate and the associated entity.


There are a number of problems with this, although I appreciate the
suggestion.

Governance is not something easily spitballed. I've tried to highlight the
WIPO process as one example of showing how complex such deliberations can
be, especially in an international/transnational situation. I think, for
this specific proposal, you might look at resources such as
https://www.eff.org/issues/icann to see that many of these proposals you're
discussing have profound policy impact on Internet governance.
Alternatively, you may find
https://www.techdirt.com/articles/20150623/17321931439/icanns-war-whois-privacy.shtml
useful discussion

I realize I'm doing a poor job at articulating the profound risks, perhaps
because they're best not for e-mail discussions, but these problems are not
unique to EV, and the solutions are unquestionably worse (for freedom and
privacy). It is in this holistic understanding - including regulatory risks
of mandatory EV and the like - that it's clear that EV isn't "just"
something a site opts into - it has a non-trivial, detrimental effect on
users' day-to-day browsing, on the way in which the Internet is maintained,
in efforts to secure it, and to the underlying privacy and security. This
isn't hyperbole - this is something I think most browsers are profoundly
aware of.


Re: On the value of EV

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Tuesday, December 12, 2017 at 3:52:40 PM UTC-6, Ryan Sleevi wrote:

> Yes. This is the foundation and limit of Web Security.
> 
> https://en.wikipedia.org/wiki/Same-origin_policy
> 
> This is what is programatically enforced. Anything else either requires new
> technology to technically enforce it (such as a new scheme), or is
> offloading the liability to the user.
> 

The notion that a sub-resource load of a non-EV sort should downgrade the EV 
display status of the page is very questionable.

I'm not sure we need namespace separation for EV versus non-EV subresources.

The cause for this is simple:

It is the main page resource at the root of the document which causes each 
sub-resource to be loaded.

There is a "curatorship", if you will, engaged by the site author.  If there 
are sub-resources loaded in, whether they are EV or not, it is the root page 
author's place to "take responsibility" for the contents of the DV or EV 
validated sub-resources that they cause to be loaded.

Frankly, I reduce third party origin resources to zero on web applications on 
systems I design where those systems have strong security implications.

Of course, that strategy is probably not likely to be popular at Google, which 
is, in a quite high percentage of instances, the target origin of all kinds of 
sub-resources loaded in pages across the web.

If anyone takes the following comment seriously, this probably spawns an 
entirely separate conversation: I regard an EV certificate as more of a 
code-signing of a given webpage / website and of the sub-resources whether or 
not same origin, as they descend from the root page load.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 4:14 PM, Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Monday, December 11, 2017 at 6:01:25 PM UTC-6, Ryan Sleevi wrote:
>
> > > Not really - what matters is that the user insists they got had via a
> > > phishing link or other process - that can certainly be verified after
> the
> > > fact
> >
> >
> > No.
>
> Why's that?  This is how investigations begin.
>

I think you're operating on a somewhat reductionist view that doesn't align
with the real world experiences of users. That's not to say it doesn't make
a good narrative, and one that conceivably could happen, but it doesn't
align with the common cause.

An example that I admit is contrived, but no more than I think your
original case was, is a user on an ephemeral messaging app receiving a
link. No 'investigation' can happen because that message is no longer
available. You can replace this with "I deleted the email after I got
hacked" or "I think I clicked on something, I'm not sure" (such as a banner
ad).

Further, fraud itself is based on cost. The investigatory cost of finding
'what' hacked the user and 'when' is a profoundly expensive case, and thus
the cost of doing that (routinely) versus the cost of eating the fraud
and/or shifting the liability quickly is the attractive cost. This is no
different than the real world, where storefronts build in models for 'loss'
(that is, shoplifting), because the cost to the brand and the store to
aggressively police such quickly outweighs the potential losses.

>
> I'm confused.  I never suggested that the user's assertion that they saw
> this would cause any change in liability.  In fact, anyone responsible
> would explain that to the user even as the user tries to report it.  The
> value would be that if the certificate that managed to sway the user can be
> found and tracked down (which should be possible via CT), the possibility
> exists that the person(s) responsible for the deception may ultimately be
> caused to suffer for their deception.
>

The problem is that you are shifting the liability to the user, but may not
be realizing it. Your presumed model is that the information that swayed
the user was correct and accurate to the extent the user was fooled. Yet
there's no reason to believe the user checked for "Stripe, Inc [US]", they
could have just looked for "Striping, Inc [US]" and not realized the
confusion.
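A quick illustration of how little separates a registrable look-alike name from the real one; difflib's similarity ratio here is only a crude stand-in for a glancing human comparison, not a claim about how users actually read the UI:

```python
# Illustrative only: a crude similarity score between the display
# string users are expected to verify and a plausible look-alike.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

score = similarity("Stripe, Inc [US]", "Striping, Inc [US]")
assert score > 0.8   # near-identical to a glancing user
```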

I also think you're unrealistically relying on CT to detect what you
believe may be modeled as fraud (expecting, I presume, something similar to
Ian's level of attack), whereas I'm saying even at the most bare minimum,
this doesn't functionally mitigate, because it's still assuming the user
has checked the URL bar to ensure there's enough similarity such that you
would be able to, with some model (ML? what) detect other certs that
'maybe' were involved - but you can't be sure, since Ian's "Stripe, Inc" is
a fully legitimate certificate, so you don't *know* he was involved.


> One should presume that if the EV presented certificate confused the user
> who relied upon it into thinking they were dealing with a particular party,
> that the contents should contain sequences or homograph sequences that
> closely mirror what the real site would indicate.
>

But there's the rub - for EV to be valuable, you're saying the
responsibility *is* on the user to do *at minimum* that level of checking.
If they haven't, then you can't link - because you can't expect that it
will contain sequences or homoglyphs or homographs that are similar. It
could just be "J Random EV" cert.

That's what I mean by shifting the liability - in order for there to be any
investigation, the user must have been perfect, 100% of the time, *and* the
attacker must not have exploited any of the technical means mentioned.


> Security research is legitimate.  The people who created these entities
> and got these certificates are innocent of any crime.  What they are not is
> immune from reasonable investigation to show this.


Yes. They are.

If someone suggested tomorrow that an EV certificate caused that person to
> believe that they were at the Stripe site, it would be entirely reasonable
> for any law enforcement agency or investigator to track down these
> researchers and ask them to explain why they sought certificates and entity
> creation that seem engineered to deceive.


No. It wouldn't. Innocence until proven guilty is a virtue, and at least in
the context of EV certificates, there is zero legitimate reason to be
suspicious. Under that model, it becomes quite easy to harass competitors,
for example. This is why complex processes exist (e.g. the WIPO process).


>   The matter should resolve when they show legitimate cause.  This doesn't
> mean that they should be given a free pass and ignored, if subsequently,
> someone phishes a Stripe customer by way of a look-alike entity and cert.
>

No, this is an entirely unreasonable burden, and rather 

Re: On the value of EV

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 2:46:10 PM UTC-6, Gervase Markham wrote:

> My concern with this argument is that it's susceptible to the criticism
> that Adam Langley made of revocation checking:
> https://www.imperialviolet.org/2012/02/05/crlsets.html
> 
> "So [EV identity is] like a seat-belt that snaps when you crash. Even
> though it works 99% of the time, it's worthless because it only works
> when you don't need it."

This aspect considers only the potential downsides of improper trust and 
confidence in the users' mind given improper use of a look-alike certificate 
leading to a phishing exploit or similar.

There is a benefit of EV certificates that deserves consideration:

There are events, many per day, in which the additional confidence to engage in 
commerce with a given website is properly enhanced by the user's examination 
and assessment of the EV presentation.  Any such instance is a tangible 
benefit, deserving -- I believe -- some weight in the discussion even if there 
were the rare negative outcome.

This is even more the case if there are mitigations to the EV definition, 
qualifications, validation process, issuance process, etc, which could help.

Just spitballing, one enhancement to EV issuance might be to require that, upon 
validation, the EV entity name and jurisdiction name proposed for inclusion in 
the certificate undergo a 30-day publish-for-opposition embargo.  The embargo 
would arise each time a new EV validation is performed, including for EV 
validation renewals; further certificates could issue or re-issue within the 
validation lifetime.  A natural service arising from this would be that CAs 
would presumably police this publish-for-opposition database for their own 
customers' EV names and look-alikes, working with their existing customers 
to object to and stop issuance of the (presumed) phishing certificate request.
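As a rough illustration of the proposed embargo mechanics, here is a minimal sketch (the registry class, the key of entity name plus jurisdiction, and the 30-day window are all assumptions drawn from the paragraph above, not any existing CA system):

```python
from datetime import date, timedelta

EMBARGO_DAYS = 30  # proposed publish-for-opposition window

class OppositionRegistry:
    """Hypothetical registry of proposed EV subject names awaiting opposition."""

    def __init__(self):
        self._entries = {}  # (entity_name, jurisdiction) -> date first published

    def propose(self, entity_name, jurisdiction, today=None):
        """Publish a proposed EV name for opposition; repeat proposals keep the original date."""
        key = (entity_name.casefold(), jurisdiction.casefold())
        self._entries.setdefault(key, today or date.today())

    def may_issue(self, entity_name, jurisdiction, today=None):
        """Issuance allowed only after the embargo elapses without objection."""
        key = (entity_name.casefold(), jurisdiction.casefold())
        published = self._entries.get(key)
        if published is None:
            return False  # must be published for opposition first
        return (today or date.today()) - published >= timedelta(days=EMBARGO_DAYS)

reg = OppositionRegistry()
reg.propose("Stripe, Inc.", "US-KY", today=date(2017, 11, 1))
print(reg.may_issue("Stripe, Inc.", "US-KY", today=date(2017, 11, 15)))  # False
print(reg.may_issue("Stripe, Inc.", "US-KY", today=date(2017, 12, 5)))   # True
```

A CA would publish into such a registry at validation time and consult it before issuance; interested parties would watch it for look-alikes of their own names.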

Another thing that should not be problematic for legitimate businesses -- even 
gigantic ones -- is to require that the EV qualification and validation process 
identify a strongly identified individual (government photo ID, etc.) whom the 
applying entity has explicitly authorized to request EV certificates on its 
behalf -- and, furthermore, to document within the certificate the name, 
jurisdiction, and nature of the documents verifying that person's identity.

Would Ian have requested a certificate for Stripe, Inc. if his full name were 
also in that certificate?  Maybe, maybe not.  But anyone investigating that 
certificate would need do no extra work to know what individual they should 
start communicating with to further discern the history and use of that 
certificate and the associated entity.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 3:50 PM, Tim Shirley  wrote:

> I’m not looking for a guarantee.  Nothing is ever going to meet that
> standard.  What I’m looking for is something that’s going to improve my
> odds.  What I see in Ian’s and James’s research is some ways that it’s
> possible to create confusion, accidentally or deliberately. But I haven’t
> heard of any real world cases where such deception was used deliberately to
> date.
>

Nor did CAs hear about real-world cases about MD5 or SHA-1 until, well, it
was too late. Look at the fact that the CA/Browser Forum was actively
debating extending SHA-1's lifetime (indefinitely, as proposed by some
CAs), up to the very morning that it was publicly shown as broken - despite
years of warning.


> And I’d expect, since Certificate Transparency has been required for a
> couple years now for EV treatment in Chrome, that if such attacks were
> actually happening in the real world today with EV certificates, we’d know
> about them and they would be getting trumpeted in this thread.
>

Sure, but the CT-derived value doesn't require UI. Or, conversely, are you
saying that EV is only safe with CT, and if (and only if) sites are looking
for confusion?


> Why do police wear bulletproof vests when they know they’re entering a
> dangerous situation?  A vest only covers part of the body, so they’re still
> in danger.  I wouldn’t call a bulletproof vest a placebo.  It’s a layer of
> defense, just like EV.  I’m not claiming EV “solves” phishing but I am
> claiming that it mitigates it.
>

But it's an outsourced mitigation - the site operator is being convinced to
buy an EV cert by a CA, but the protection is only effective if the
technical controls work (they don't) or the users are trained on the
business realities (which I would assert _no one_ is, given the
jurisdictional nuance).

It's not an apples to apples comparison - this isn't defense in depth.


> I guess I’m also having a hard time appreciating how the presence of this
> information is a “cost” to users who don’t care about it.  For one thing,
> it’s been there for years in all major browsers, so everyone has at least
> been conditioned to its presence already.  But how is someone who isn’t
> interested in the information in the first place being confused by it?  And
> if the mere presence of an organization name is creating confusion, then
> surely a URL with lots of words and funny characters in it would be
> confusing people too, and we should remove that too, right?
>
That has been proposed, yes. To some extent, that's what Safari's UI tries
to do, for what we can extrapolate as similar reasoning.

But yes, the complex state of indicators has ample (general) HCI research
supporting it, and even specific to the browser case (e.g.
https://research.google.com/pubs/pub45366.html in more modern times, or
http://www.usablesecurity.org/papers/jackson.pdf going further back). As
far as I'm aware, there has been zero peer-reviewed, academically sound
research demonstrating the value proposition of EV, just anecdata, while
there is a rather extant body showing the harm that complexity causes, both
individually (as earlier referenced) and as applied to connection security
indicators - particularly, positive indicators such as EV (where you must
note the absence of, rather than the presence)


Re: On the value of EV

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Monday, December 11, 2017 at 6:01:25 PM UTC-6, Ryan Sleevi wrote:

> > Not really - what matters is that the user insists they got had via a
> > phishing link or other process - that can certainly be verified after the
> > fact
> 
> 
> No.

Why's that?  This is how investigations begin.

> 
> - did someone steal their money in a sketchy way, but with apparent user
> > authorization?  Further, the user swears back and forth that the green bar
> > was there and they looked to see that it matched the site's name - their
> > bank, PayPal, etc.
> 
> 
> All users will swear this if it avoids liability. And let’s be honest, it’s
> actively hostile to users to say they bear liability if they don’t do this
> - for every click of the page.

I'm confused.  I never suggested that the user's assertion that they saw this 
would cause any change in liability.  In fact, anyone responsible would explain 
that to the user even as the user tries to report it.  The value would be that 
if the certificate that managed to sway the user can be found and tracked down 
(which should be possible via CT), the possibility exists that the person(s) 
responsible for the deception may ultimately be held to account for it.
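As a sketch of that CT-tracking step: the crt.sh front end to the CT logs exposes a JSON query interface, and the code below builds and issues such a query (the exact behavior of the crt.sh endpoint is an assumption about that third-party service, and this is an illustrative sketch rather than an investigative tool):

```python
import json
import urllib.parse
import urllib.request

def crtsh_query_url(org_name):
    """Build a crt.sh search URL returning JSON for certs matching a name."""
    return "https://crt.sh/?" + urllib.parse.urlencode(
        {"q": org_name, "output": "json"})

def search_ct_for_org(org_name, timeout=30):
    """Fetch matching certificate entries from crt.sh (requires network access)."""
    with urllib.request.urlopen(crtsh_query_url(org_name), timeout=timeout) as resp:
        return json.load(resp)

print(crtsh_query_url("Stripe, Inc"))
# Hypothetical investigative usage (not run here):
# for entry in search_ct_for_org("Stripe, Inc"):
#     print(entry.get("issuer_name"), entry.get("not_before"))
```

From the returned entries an investigator could walk back to the issuing CA and its validation records, as described above.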

Real world contracts, business relationships, and statutes define who is 
responsible for what frauds and torts arise in engaging in commerce whether 
online or not.  EV status is not about shifting liability.

A significant value that EV does provide (or at least has strong potential to 
provide) is the ability of a user to assess the EV certificate and its most 
essential contents as one additional factor to rely upon in the calculus of 
whether or not to assume the risk of entering certain confidential data into 
the website they are visiting.

> 
>  All EV certs are CT logged, find the cert or homograph from there, track
> > to issuer and validation details, chase the entity document path, etc.

One should presume that if the presented EV certificate confused the user who 
relied upon it into thinking they were dealing with a particular party, its 
contents contain character sequences or homograph sequences that closely 
mirror what the real site would indicate.
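One way an investigator (or a CA screening applications) might detect such homograph sequences is to "skeletonize" names, mapping visually confusable characters to a canonical form before comparing. The tiny confusables table below is purely illustrative; a real tool would use the full Unicode UTS #39 confusables data:

```python
import unicodedata

# Illustrative subset of Unicode confusables; real tools use the UTS #39 data file.
CONFUSABLES = {
    "а": "a",  # Cyrillic a
    "е": "e",  # Cyrillic ie
    "і": "i",  # Cyrillic i
    "о": "o",  # Cyrillic o
    "р": "p",  # Cyrillic er
    "ѕ": "s",  # Cyrillic dze
    "1": "l",  # digit one vs lowercase L
    "0": "o",  # digit zero vs lowercase o
}

def skeleton(name):
    """Reduce a name to a canonical 'skeleton' for confusability comparison."""
    s = unicodedata.normalize("NFKD", name).casefold()
    return "".join(CONFUSABLES.get(ch, ch)
                   for ch in s if not unicodedata.combining(ch))

def looks_like(candidate, target):
    """True when two names collapse to the same skeleton."""
    return skeleton(candidate) == skeleton(target)

print(looks_like("Ѕtrіре, Inc", "Stripe, Inc"))  # True: Cyrillic look-alikes
print(looks_like("Example, Inc", "Stripe, Inc"))  # False
```

Equal skeletons do not prove malice, but they flag a name as worth the kind of manual scrutiny described above.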

> 
> And of course ignoring all the innocent bystanders along the way - such as
> Ian, who has not phished Stripe users.

Security research is legitimate.  The people who created these entities and got 
these certificates are innocent of any crime.  What they are not is immune from 
reasonable investigation to show this.  If someone suggested tomorrow that an 
EV certificate caused that person to believe that they were at the Stripe site, 
it would be entirely reasonable for any law enforcement agency or investigator 
to track down these researchers and ask them to explain why they sought 
certificates and entity creation that seem engineered to deceive.  The matter 
should resolve when they show legitimate cause.  This doesn't mean that they 
should be given a free pass and ignored, if subsequently, someone phishes a 
Stripe customer by way of a look-alike entity and cert.


Re: On the value of EV

2017-12-13 Thread Tim Shirley via dev-security-policy
I’m not looking for a guarantee.  Nothing is ever going to meet that standard.  
What I’m looking for is something that’s going to improve my odds.  What I see 
in Ian’s and James’s research is some ways that it’s possible to create 
confusion, accidentally or deliberately. But I haven’t heard of any real world 
cases where such deception was used deliberately to date.  And I’d expect, 
since Certificate Transparency has been required for a couple years now for EV 
treatment in Chrome, that if such attacks were actually happening in the real 
world today with EV certificates, we’d know about them and they would be 
getting trumpeted in this thread.

Why do police wear bulletproof vests when they know they’re entering a 
dangerous situation?  A vest only covers part of the body, so they’re still in 
danger.  I wouldn’t call a bulletproof vest a placebo.  It’s a layer of 
defense, just like EV.  I’m not claiming EV “solves” phishing but I am claiming 
that it mitigates it.

I guess I’m also having a hard time appreciating how the presence of this 
information is a “cost” to users who don’t care about it.  For one thing, it’s 
been there for years in all major browsers, so everyone has at least been 
conditioned to its presence already.  But how is someone who isn’t interested 
in the information in the first place being confused by it?  And if the mere 
presence of an organization name is creating confusion, then surely a URL with 
lots of words and funny characters in it would be confusing people too, and we 
should remove that too, right?

From: Ryan Sleevi 
Reply-To: "r...@sleevi.com" 
Date: Wednesday, December 13, 2017 at 2:01 PM
To: Tim Shirley 
Cc: "r...@sleevi.com", Nick Lamb, "dev-security-policy@lists.mozilla.org", Jakob Bohm
Subject: Re: On the value of EV

Right, but both Ian and James' research show that it's an unreliable guarantee 
for those attacks - you may be relying on it, but it's not safe for it.

Further, the costs to support your use case - well-intentioned but perhaps not 
aligning with the pragmatic reality - affect users who don't do so or aren't 
conditioned, by adding further confusion into the nuances of jurisdictional 
incorporation.

So if it doesn't meet your intended use case / you're relying on a placebo, and 
it harms others, perhaps the UI treatment should go away :)

Note, my focus in all of this discussion has been about the expression of UI 
surface in the security-critical section of a browser, and specifically, asked 
for Mozillans to comment on their plans (which, of course, had everyone but 
them commenting). There may still be value in EV-as-a-validation, but EV as a 
phishing mitigation - your scam emails or such - are not solved by EV. 
Technically or via validation.

On Wed, Dec 13, 2017 at 1:52 PM, Tim Shirley wrote:
I don’t dispute your claims if the attacker is ‘on the wire’; what I dispute is 
that that is actually the case most of the time.  I’d think a far more common 
case is one in which I receive an email, purportedly from my bank, but 
containing a URL that isn’t the one I recognize as my bank’s.  Usually that’s a 
scam, but sometimes it’s a legit separate domain they have for the credit card 
rewards program or something like that.  Or a case where I am typing a known 
URL and I fat-finger something and stumble onto a scammer’s site.  The 
immediate absence of the EV organization name is going to help me detect that 
I’m not where I want to be.

BTW, I looked at these things long before I was in the CA business, so if I was 
“conditioned” it must have been by the outside world.  ☺

From: Ryan Sleevi
Reply-To: "r...@sleevi.com"
Date: Wednesday, December 13, 2017 at 1:18 PM
To: Tim Shirley
Cc: Nick Lamb, "dev-security-policy@lists.mozilla.org", Jakob Bohm
Subject: Re: On the value of EV



On Wed, Dec 13, 2017 at 12:58 PM, Tim Shirley via dev-security-policy wrote:
As an employee of a CA, I’m sure many here will dismiss my point of view as 
self-serving.  But when I am making trust decisions on the internet, I 
absolutely rely on both the URL and the organization information in the “green 
bar”.  I relied on it before I worked for a CA, and I’m pretty sure I’ll still 
rely on it after I no longer work in this industry (if such a thing is even 
possible, as some in the industry have assured me it's not).

Re: On the value of EV

2017-12-13 Thread Gervase Markham via dev-security-policy
On 13/12/17 11:58, Tim Shirley wrote:
> So many of the arguments made here, such as this one, as well as the recent 
> demonstrations that helped start this thread, focus on edge cases.  And while 
> those are certainly valuable to consider, they obscure the fact that “Green 
> Bar” adds value in the mainstream use cases.  If we were talking about how to 
> improve EV, then by all means focus on the edge cases.  The thing I don’t see 
> in all this is a compelling argument to take away something that’s useful 
> most of the time.

My concern with this argument is that it's susceptible to the criticism
that Adam Langley made of revocation checking:
https://www.imperialviolet.org/2012/02/05/crlsets.html

"So [EV identity is] like a seat-belt that snaps when you crash. Even
though it works 99% of the time, it's worthless because it only works
when you don't need it."

Gerv


Re: On the value of EV

2017-12-13 Thread Gervase Markham via dev-security-policy
On 11/12/17 17:00, Ryan Sleevi wrote:
> Fundamentally, I think this is misleading. It presumes that, upon
> something bad happening, someone can link it back to that certificate
> to link it back to that identity. If I was phished, and entered my
> credentials, there's no reason to believe I've maintained the record
> details including the phishing link to know I was phished. Are users
> supposed to spleunk their HTTP cache or maintain complete archives of
> every link they visited, such that they can get the cert back from it
> to aid an investigation?

This is something that has always worried me about the EV value
proposition. Even if it worked perfectly, once one has realised one has
been scammed, one would want to find the cert again to know where to
serve the lawsuit papers or send the police. Unless your browser caches
all EV certs for sites you've ever visited in the past month, and
provides some UI for querying that cache, then that's not necessarily
going to be possible. So having the info about the site owner in the
cert isn't actually useful.

CT does address this to a degree, but only to a degree.

Gerv


Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
On Wednesday, December 13, 2017 at 12:50:38 PM UTC-6, Ryan Sleevi wrote:
> On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman 
> wrote:
> 
> > As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> > happen assuming a stateful DPRNG with a decent starting entropy corpus.
> >
> 
> Agreed - but that's also true for the devices Tim is mentioning.

I do not mean this facetiously.  If I kept a diary, I might make a note.  I 
feel like I've accomplished something.

> 
> Which I guess is the point I was trying to make - if this can be 'fixed'
> relatively easily for the use case Tim was bringing up, what other use
> cases are there? The current policy serves a purpose, and although that
> purpose is not high in value nor technically rigorous, it serves as an
> external check.
> 
> And yes, I realize the profound irony in me making such a comment in this
> thread while simultaneously arguing against EV in a parallel thread, on the
> basis that the purpose EV serves is not high in value nor technically
> rigorous - but I am having trouble, unlike in the EV thread, understanding
> what harm is caused by the current policy, or what possible things that are
> beneficial are prevented.

I, for one, respect that you pointed out the dichotomy.  I think I understand 
it.

I believe that opening the door to ca-side key generation under specific terms 
and circumstances offers an opportunity for various consumers of PKI key pairs 
to acquire higher quality key pairs than a lot of the alternatives which would 
otherwise fill the void.

> 
> I don't think we'll see significant security benefit in some circumstances
> - I think we'll see the appearances of, but not the manifestation - so I'm
> trying to understand why we'd want to introduce that risk?

Sometime we accept one risk, under terms that we can audit and control, in 
order to avoid the risks which we can reasonably predict the rise of in a 
vacuum.  I am _not_ well qualified to weigh this particular set of risk 
exposures, most especially in the nature of the risk of an untrustworthy CA 
intentionally acting to cache these keys, etc.  I am well qualified to indicate 
that both risks exist.  I believe they should probably be weighed in the nature 
of a "this or that" dichotomy.

> 
> I also say this knowing how uninteroperable the existing key delivery
> mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
> protection of those are. Combine that with CAs repeated failure to
> correctly implement the specs that are less ambiguous, and I'm worried
> about a proliferation of private keys flying around - as some CAs do for

It _is_ absolutely essential that the question of secure transport and 
destruction be part of what is controlled for and monitored in a scheme where 
key generation by the CA is permitted.  The mechanism becomes worse than almost 
everything else if that falls apart.


> their other, non-TLS certificates. So I see a lot of potential harm in the
> ecosystem, and question the benefit, especially when, as you note, this can
> be mitigated rather significantly by developers not shoveling crap out the
> door. If developers who view "time to market" as more important than
> "Internet safety" can't get their toys, I ... don't lose much sleep.

Aside from the cryptography enthusiast or professional, it is hard to find 
developers with the right intersection of skill and interest to address the 
security implications.  It becomes complicated further when security 
implications aren't necessarily a business imperative.  Further complicated 
when the customer base realizes it has real costs and begins to question the 
value.  It's not just the developers.  The trend of good _looking_ quick 
reference designs lately is that they have a great spec sheet and take every 
imaginable short cut where the requirements are not explicitly stated and 
audited.  It's an ecosystem problem that is really hard to solve.

A couple of years ago, I and my team were doing interop testing between a 
device and one of our products.  In that course of events, we discovered a 
nasty security issue that was blatantly obvious to someone skilled in our 
particular application area.  We worked with the manufacturer to trace the 
product design back to a reference design from a Chinese ODM.  They were 
amenable to fixing the issue ultimately, but we found at least 14 affected 
distinct products in the marketplace based upon that design that had not pulled 
in those changes as of a year later.

Even as the line between hardware engineer and software developer get more and 
more blurred, there remains a stark division of skill set, knowledge base, and 
even understanding of each others' needs.  That's problematic.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
Right, but both Ian and James' research show that it's an unreliable
guarantee for those attacks - you may be relying on it, but it's not safe
for it.

Further, the costs to support your use case - well-intentioned but perhaps
not aligning with the pragmatic reality - affect users who don't do so or
aren't conditioned, by adding further confusion into the nuances of
jurisdictional incorporation.

So if it doesn't meet your intended use case / you're relying on a placebo,
and it harms others, perhaps the UI treatment should go away :)

Note, my focus in all of this discussion has been about the expression of
UI surface in the security-critical section of a browser, and specifically,
asked for Mozillans to comment on their plans (which, of course, had
everyone but them commenting). There may still be value in
EV-as-a-validation, but EV as a phishing mitigation - your scam emails or
such - are not solved by EV. Technically or via validation.

On Wed, Dec 13, 2017 at 1:52 PM, Tim Shirley  wrote:

> I don’t dispute your claims if the attacker is ‘on the wire’; what I
> dispute is that that is actually the case most of the time.  I’d think a
> far more common case is one in which I receive an email, purportedly from
> my bank, but containing a URL that isn’t the one I recognize as my bank’s.
> Usually that’s a scam, but sometimes it’s a legit separate domain they have
> for the credit card rewards program or something like that.  Or a case
> where I am typing a known URL and I fat-finger something and stumble onto a
> scammer’s site.  The immediate absence of the EV organization name is going
> to help me detect that I’m not where I want to be.
>
>
>
> BTW, I looked at these things long before I was in the CA business, so if
> I was “conditioned” it must have been by the outside world.  ☺
>
>
>
> *From: *Ryan Sleevi 
> *Reply-To: *"r...@sleevi.com" 
> *Date: *Wednesday, December 13, 2017 at 1:18 PM
> *To: *Tim Shirley 
> *Cc: *Nick Lamb , "dev-security-policy@lists.mozilla.org" <
> dev-security-policy@lists.mozilla.org>, Jakob Bohm 
> *Subject: *Re: On the value of EV
>
>
>
>
>
>
>
> On Wed, Dec 13, 2017 at 12:58 PM, Tim Shirley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> As an employee of a CA, I’m sure many here will dismiss my point of view
> as self-serving.  But when I am making trust decisions on the internet, I
> absolutely rely on both the URL and the organization information in the
> “green bar”.  I relied on it before I worked for a CA, and I’m pretty sure
> I’ll still rely on it after I no longer work in this industry (if such a
> thing is even possible, as some in the industry have assured me it’s not).
>
>
>
> I think the focus on the edge cases has been because even the case you
> raise here (and below), can be demonstrated as technically flawed.
>
>
>
> You believe you're approaching a sense of security, but under an
> adversarial model, it falls apart.
>
>
>
> The historic focus has been on the technical adversary - see Nick Lamb's
> recently reply a few minutes before yours - and that's been thoroughly
> shown that EV is insufficient under an attacker model that is 'on the
> wire'. However, EV proponents have still argued for EV, by suggesting that
> even if its insufficient for network adversaries, it's sufficient for
> organizational adversaries. Ian's and James' research shows that's also
> misguided.
>
>
>
> So you're not wrong that, as a technically skilled user, and as an
> employee of a CA, you've come to a conclusion that EV has value, and
> conditioned yourself to look for that value being expressed. But under both
> adversarial models relative to the value EV provides, EV does not address
> them. So what does the UI provide, then, if it cannot provide either
> technical enforcement or "mental-model" safety.
>
>
>
> Are you wrong for wanting those things? No, absolutely not. They're
> perfectly reasonable to want. But both the technical means of expressing
> that (the certificate) and the way to display that to the user (the UI
> bar), neither of these hold up to rigor. They serve as placebo rather than
> panacea, as tiger repelling rocks rather than real protections.
>
>
>
> Since improving it as a technical means is an effective non-starter (e.g.
> introducing a new origin for only EV certs), the only fallback is to the
> cognitive means - and while users such as yourself may know the
> jurisdictional details for all the sites they interact with, and may have a
> compelling desire for such information, that doesn't necessarily mean it
> should be exposed to millions of users. Firefox has about:config, for
> example - as well as extensions - and both of those could provide
> alternative avenues with much greater simplicity for the common user.
>

Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy

> As an unrelated but funny aside, I once heard about an expensive, 
> high-assurance device with an embedded bi-stable circuit for producing 
> high-quality hardware random numbers.  As part of a rigorous validation and 
> review process in order to guarantee product quality, the instability was 
> noticed and corrected late in the development process, and final testing 
> showed that the output of the key generator was completely free of any pesky 
> one bits that might interfere with the purity of all zero keys.
> 

More perniciously, an excellent PRNG algorithm will "whiten" sufficiently that 
the standard statistical tests will not be able to distinguish the random 
output stream as completely lacking in seed entropy.
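This is easy to demonstrate with a toy hash-counter generator seeded by a single byte: only 256 distinct output streams exist, yet a basic monobit test sees nothing amiss (a deliberately weak sketch, not a real DRBG design):

```python
import hashlib

def toy_prng(seed_byte, nbytes):
    """Hash-counter PRNG: well-whitened output from almost no seed entropy."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(
            bytes([seed_byte]) + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

stream = toy_prng(42, 1 << 16)  # 64 KiB derived from 8 bits of seed entropy
ones = sum(bin(b).count("1") for b in stream)
fraction = ones / (len(stream) * 8)
print(f"fraction of one bits: {fraction:.4f}")  # ~0.5 despite the tiny seed
```

Only inspection of the raw, pre-whitening entropy source can expose the deficiency, which is why statistical tests on final output alone are insufficient.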

I believe the Common Criteria EAL evaluation standards require that, during 
testing, a mode be enabled to access the raw, uncleaned, 
pre-algorithmic-balancing values, so that tests can be incorporated to check 
the raw entropy source for that issue.


Re: On the value of EV

2017-12-13 Thread Tim Shirley via dev-security-policy
I don’t dispute your claims if the attacker is ‘on the wire’; what I dispute is 
that that is actually the case most of the time.  I’d think a far more common 
case is one in which I receive an email, purportedly from my bank, but 
containing a URL that isn’t the one I recognize as my bank’s.  Usually that’s a 
scam, but sometimes it’s a legit separate domain they have for the credit card 
rewards program or something like that.  Or a case where I am typing a known 
URL and I fat-finger something and stumble onto a scammer’s site.  The 
immediate absence of the EV organization name is going to help me detect that 
I’m not where I want to be.

BTW, I looked at these things long before I was in the CA business, so if I was 
“conditioned” it must have been by the outside world.  ☺

From: Ryan Sleevi 
Reply-To: "r...@sleevi.com" 
Date: Wednesday, December 13, 2017 at 1:18 PM
To: Tim Shirley 
Cc: Nick Lamb, "dev-security-policy@lists.mozilla.org", Jakob Bohm
Subject: Re: On the value of EV



On Wed, Dec 13, 2017 at 12:58 PM, Tim Shirley via dev-security-policy wrote:
As an employee of a CA, I’m sure many here will dismiss my point of view as 
self-serving.  But when I am making trust decisions on the internet, I 
absolutely rely on both the URL and the organization information in the “green 
bar”.  I relied on it before I worked for a CA, and I’m pretty sure I’ll still 
rely on it after I no longer work in this industry (if such a thing is even 
possible, as some in the industry have assured me it’s not).

I think the focus on the edge cases has been because even the case you raise 
here (and below), can be demonstrated as technically flawed.

You believe you're approaching a sense of security, but under an adversarial 
model, it falls apart.

The historic focus has been on the technical adversary - see Nick Lamb's 
recently reply a few minutes before yours - and that's been thoroughly shown 
that EV is insufficient under an attacker model that is 'on the wire'. However, 
EV proponents have still argued for EV, by suggesting that even if its 
insufficient for network adversaries, it's sufficient for organizational 
adversaries. Ian's and James' research shows that's also misguided.

So you're not wrong that, as a technically skilled user, and as an employee of 
a CA, you've come to a conclusion that EV has value, and conditioned yourself 
to look for that value being expressed. But under both adversarial models 
relative to the value EV provides, EV does not address them. So what does the 
UI provide, then, if it cannot provide either technical enforcement or 
"mental-model" safety.

Are you wrong for wanting those things? No, absolutely not. They're perfectly 
reasonable to want. But both the technical means of expressing that (the 
certificate) and the way to display that to the user (the UI bar), neither of 
these hold up to rigor. They serve as placebo rather than panacea, as tiger 
repelling rocks rather than real protections.

Since improving it as a technical means is an effective non-starter (e.g. 
introducing a new origin for only EV certs), the only fallback is to the 
cognitive means - and while users such as yourself may know the jurisdictional 
details for all the sites they interact with, and may have a compelling desire 
for such information, that doesn't necessarily mean it should be exposed to 
millions of users. Firefox has about:config, for example - as well as 
extensions - and both of those could provide alternative avenues with much 
greater simplicity for the common user.


Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman 
wrote:

> As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> happen assuming a stateful DPRNG with a decent starting entropy corpus.
>

Agreed - but that's also true for the devices Tim is mentioning.
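The stateful-DRBG point can be sketched as follows: a generator seeded once from a decent entropy corpus, loosely modeled on NIST SP 800-90A's HMAC_DRBG (simplified here and not a compliant implementation), producing a candidate P-256 private scalar for an ECDHE exchange:

```python
import hmac
import hashlib

# Order of the NIST P-256 group (a public curve parameter).
P256_ORDER = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

class SimpleHmacDrbg:
    """Simplified stateful HMAC-DRBG; illustrative, not SP 800-90A compliant."""

    def __init__(self, entropy):
        self.k = b"\x00" * 32
        self.v = b"\x01" * 32
        self._update(entropy)

    def _update(self, data=b""):
        self.k = hmac.new(self.k, self.v + b"\x00" + data, hashlib.sha256).digest()
        self.v = hmac.new(self.k, self.v, hashlib.sha256).digest()
        if data:
            self.k = hmac.new(self.k, self.v + b"\x01" + data, hashlib.sha256).digest()
            self.v = hmac.new(self.k, self.v, hashlib.sha256).digest()

    def generate(self, nbytes):
        out = b""
        while len(out) < nbytes:
            self.v = hmac.new(self.k, self.v, hashlib.sha256).digest()
            out += self.v
        self._update()  # advance state after each request
        return out[:nbytes]

def derive_p256_scalar(drbg):
    """Rejection-sample a private scalar in [1, n-1] for an ECDHE key."""
    while True:
        candidate = int.from_bytes(drbg.generate(32), "big")
        if 1 <= candidate < P256_ORDER:
            return candidate

drbg = SimpleHmacDrbg(b"device entropy corpus gathered at manufacture")
scalar = derive_p256_scalar(drbg)
print(1 <= scalar < P256_ORDER)  # True
```

Each ephemeral exchange draws fresh output from the same evolving state, so key quality depends on the original corpus plus state secrecy rather than on per-boot entropy.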

Which I guess is the point I was trying to make - if this can be 'fixed'
relatively easily for the use case Tim was bringing up, what other use
cases are there? The current policy serves a purpose, and although that
purpose is not high in value nor technically rigorous, it serves as an
external check.

And yes, I realize the profound irony in me making such a comment in this
thread while simultaneously arguing against EV in a parallel thread, on the
basis that the purpose EV serves is not high in value nor technically
rigorous - but I am having trouble, unlike in the EV thread, understanding
what harm is caused by the current policy, or what possible things that are
beneficial are prevented.

I don't think we'll see significant security benefit in some circumstances
- I think we'll see the appearances of, but not the manifestation - so I'm
trying to understand why we'd want to introduce that risk?

I also say this knowing how uninteroperable the existing key delivery
mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
protection of those is. Combine that with CAs' repeated failure to
correctly implement the specs that are less ambiguous, and I'm worried
about a proliferation of private keys flying around - as some CAs do for
their other, non-TLS certificates. So I see a lot of potential harm in the
ecosystem, and question the benefit, especially when, as you note, this can
be mitigated rather significantly by developers not shoveling crap out the
door. If developers who view "time to market" as more important than
"Internet safety" can't get their toys, I ... don't lose much sleep.


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 1:19 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> I would be sorely disappointed


Prepare to be sorely disappointed


> and consider it a security bug


It is not a bug. It is not part of the security boundary of the Web, thus
WontFix/WorkingAsIntended. Feature Requests to change this behaviour will
be closed, with reference to "Beware Finer Grained Origins", which explains
the flaws in that.


> if a
> browser shows one validated certificate then submits a posted form to a
> connection with a substantially different certificate.


This is what browsers do. As referenced earlier in the discussion about
Same Origin Policy, and more generally, the security notion of origins - as
these two certificates do not constitute distinct origins, they are the
same in privilege, capability, and trust. That is how the Web works. This
has been mentioned several times, but I'm greatly appreciative for Nick
spelling it out, as it does seem some degree of progress has been made in
arriving at common talking points.


> There may be a
> (very short) list of permitted variations for cases such as server farms
> with separate private keys per server.  But any real change of
> certificate mid-transaction should be blocked the same way cross-domain
> posting is usually blocked.
>

They are not blocked. This is also covered in the SOP and how the web works.


> Checking for certificate equality is an easy programmatic task, deciding
> if a real world entity is trustworthy is not.


Unquestionably. Yet using certificates to do so is both technically and
procedurally deficient.


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
So ECDHE is an interesting point that I had not considered, but as Matt noted,
the quality of randomness in the devices does generally improve with time.  It 
tends to be the initial bootstrapping where things go horribly wrong.

 

A couple years ago I was actually on the opposite side of this issue, so it’s 
very easy for me to see both sides.  I just don’t see it as useful to 
categorically rule out something that can provide a significant security 
benefit in some circumstances.

 

-Tim

 

As an unrelated but funny aside, I once heard about an expensive, high assurance
device with an embedded bi-stable circuit for producing high quality hardware
random numbers.  As part of a rigorous validation and review process in order 
to guarantee product quality, the instability was noticed and corrected late in 
the development process, and final testing showed that the output of the key 
generator was completely free of any pesky one bits that might interfere with 
the purity of all zero keys.

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 11:11 AM
To: Tim Hollebeek 
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

Tim,

 

I appreciate your reply, but that seems to be backwards looking rather than 
forwards looking. That is, it looks at and assumes static-RSA ciphersuites are
acceptable, and thus the entropy risk to TLS is mitigated by client-random to
these terrible TLS-server devices, and the issue to mitigate is the poor entropy
on the server.

 

However, I don't think that aligns with what I was mentioning - that is, the 
expectation going forward of the use of forward-secure cryptography and 
ephemeral key exchanges, which do become more relevant to the quality of 
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key 
construction does not meaningfully improve the security of Mozilla users.

 

I'm curious whether any use case can be brought forward that isn't "So that we 
can aid and support the proliferation of insecure devices into users everyday 
lives" - as surely that doesn't seem like a good outcome, both for Mozilla 
users and for society at large. Nor do I think the proposed changes meaningfully
mitigate the harm caused by them, despite the well-meaning attempt to do so.

 

On Wed, Dec 13, 2017 at 12:40 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has and 
will continue to cause problems.



While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of a gap between secure
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.



It doesn’t help that RSA key generation has a sort of birthday paradoxy feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.
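The "Euclid's algorithm" point above is worth making concrete: if two RSA moduli share a prime, gcd recovers that prime instantly, with no factoring required. A toy sketch using small, well-known primes standing in for real 1024-bit ones:

```python
from math import gcd

# Small well-known primes standing in for 1024-bit ones; a weak RNG that
# repeats a prime across devices produces moduli related like these.
p, q1, q2 = 104729, 1299709, 15485863   # three distinct primes
n1 = p * q1                              # device A's public modulus
n2 = p * q2                              # device B's public modulus

shared = gcd(n1, n2)    # Euclid's algorithm: cheap even at real key sizes
print(shared == p)      # → True: the common prime falls out immediately
print(n1 // shared)     # ...and with it device A's other factor, q1
```

This is exactly the batch-GCD technique behind the large-scale weak-key surveys cited elsewhere in this thread: scanning millions of observed moduli pairwise (or via a product tree) for common factors.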



Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.



-Tim



From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys








On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a 

Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
> I appreciate your reply, but that seems to be backwards looking rather than
> forwards looking. That is, it looks and assumes static-RSA ciphersuites are
> acceptable, and thus the entropy risk to TLS is mitigated by client-random
> to this terrible TLS-server devices, and the issue to mitigate is the poor
> entropy on the server.
>
> However, I don't think that aligns with what I was mentioning - that is,
> the expectation going forward of the use of forward-secure cryptography and
> ephemeral key exchanges, which do become more relevant to the quality of
> entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
> construction does not meaningfully improve the security of Mozilla users.
>

As I pointed out, it can be demonstrated that quality ECDHE exchanges can
happen assuming a stateful DPRNG with a decent starting entropy corpus.

Beyond that, I should point out, I'm not talking about legacy devices
already in market.  I'm not sure the community fully understands how much
hot-off-the-presses stuff (at least the stuff in the cheap, and so selected
by the marketplace) is really really set up for failure in terms of
security.

What I want to emphasize is that I don't believe policy here will make
things better.  In fact, there are real dangers that it gets worse.

It would be an egregiously bad decision -- though perhaps an attractive
shortcut in the eyes of a budding device software stack developer -- to just
implement an RSA key pair generation algorithm in JavaScript and rely upon
the browser to build the key set that will form the raw private key and the
CSR.  That's definitely not secure or better.

Assuming you lock JavaScript down to the point that large integer primitives
and operations are unavailable outside secure mode, these people will just
stand up an HTTP endpoint that spits out a newly generated random RSA or EC
key pair to feed to the device.  And it'll be unsigned and not even
protected by HTTPS, unless required, and then they'll do the bare minimum.

The device reference design space is improving and is becoming more
security conscious but you're YEARS away from anything resembling best
practice.  I just don't believe anything Mozilla or anyone else outside
that world can do will speed it along.


Re: On the value of EV

2017-12-13 Thread Jakob Bohm via dev-security-policy

On 13/12/2017 18:38, Nick Lamb wrote:

On Wed, 13 Dec 2017 12:29:40 +0100
Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:


What is *programmatically* enforced is too little for human safety.
Believing that computers can replace human judgement is a big mistake.
Most of the world knows this.


That's a massive and probably insurmountable problem then since the
design of HTTPS in particular and the way web browsers are normally
used is _only_ compatible with programmatic enforcement.

Allow me to illustrate:


Suppose you visit your bank's web site. There is a lovely "Green
Bar" EV certificate, and you, as a vocal enthusiast for the value of
Extended Validation, examine this certificate in considerable detail,
verifying that the business identified by the certificate is indeed
your bank. You are doubtless proud that this capability was available
to you.


You fill in your username and password and press "Submit". What happens?


Maybe your web browser finds that the connection it had before to
the bank's web site has gone, maybe it timed out, or there was a
transient network problem or a million other things. But no worry, you
don't run a web browser in order to be bothered with technical minutiae
- the browser will just make a new connection. This sort of thing
happens all the time without any trouble.

This new connection involves a fresh TLS setup, the server and browser
must begin again, the server will present its certificate to establish
identity. The web browser examines this certificate programmatically to
decide that it's OK, and if it is, the HTTPS form POST operation for
the log in form is completed by sending your username and password over
the new TLS connection.


You did NOT get to examine this certificate. Maybe it's the same one as
before, maybe it's slightly different, maybe completely different, the
hardware (let alone software) answering needn't be the same as last
time and the certificate needn't have any EV data in it. Your web
browser was happy with it, so that's where your bank username and
password were sent.



I would be sorely disappointed and consider it a security bug if a
browser shows one validated certificate then submits a posted form to a
connection with a substantially different certificate.  There may be a
(very short) list of permitted variations for cases such as server farms
with separate private keys per server.  But any real change of
certificate mid-transaction should be blocked the same way cross-domain
posting is usually blocked.

Checking for certificate equality is an easy programmatic task, deciding
if a real world entity is trustworthy is not.
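A sketch of what such an equality check could look like (illustrative only; browsers do not currently gate form submission on this): comparing the DER encodings of the certificates seen on two connections, or equivalently their SHA-256 fingerprints.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def same_certificate(first_der: bytes, second_der: bytes) -> bool:
    # Byte-for-byte identity: any re-issued or substituted certificate,
    # however similar its printed contents, yields a different fingerprint.
    return fingerprint(first_der) == fingerprint(second_der)

# Hypothetical DER blobs standing in for real certificates:
original = b"0\x82\x03..."   # cert seen when the login form was displayed
on_submit = b"0\x82\x03..."  # cert seen on the reconnect that carries the POST
print(same_certificate(original, on_submit))
```

In a real client the DER bytes would come from the TLS stack (e.g. Python's `ssl.SSLSocket.getpeercert(binary_form=True)`); the comparison itself is the trivial part, which is the point being made above.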


Even IF you decide now, with the new connection, that you don't trust
this certificate, it's too late. Your credentials were already
delivered to whoever had that certificate.



Software makes these trust decisions constantly, they take only the
blink of an eye, and require no human attention, so we can safely build
a world that requires millions of them. The moment you demand human
attention, you not only introduce lots of failure modes, you also use
up a very limited resource.

Perhaps you feel that when browsing the web you make a conscious
decision about trust for each site you visit. Maybe, if you are
extraordinarily cautious, you make the decision for individual web
pages. Alas, to be of any use the decisions must be taken for every
single HTTP operation, and most pages will use dozens (some hundreds)
of such operations.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 12:58 PM, Tim Shirley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As an employee of a CA, I’m sure many here will dismiss my point of view
> as self-serving.  But when I am making trust decisions on the internet, I
> absolutely rely on both the URL and the organization information in the
> “green bar”.  I relied on it before I worked for a CA, and I’m pretty sure
> I’ll still rely on it after I no longer work in this industry (if such a
> thing is even possible, as some in the industry have assured me it’s not).
>

I think the focus on the edge cases has been because even the case you
raise here (and below), can be demonstrated as technically flawed.

You believe you're approaching a sense of security, but under an
adversarial model, it falls apart.

The historic focus has been on the technical adversary - see Nick Lamb's
recently reply a few minutes before yours - and that's been thoroughly
shown that EV is insufficient under an attacker model that is 'on the
wire'. However, EV proponents have still argued for EV, by suggesting that
even if its insufficient for network adversaries, it's sufficient for
organizational adversaries. Ian's and James' research shows that's also
misguided.

So you're not wrong that, as a technically skilled user, and as an employee
of a CA, you've come to a conclusion that EV has value, and conditioned
yourself to look for that value being expressed. But under both adversarial
models relative to the value EV provides, EV does not address them. So what
does the UI provide, then, if it cannot provide either technical
enforcement or "mental-model" safety.

Are you wrong for wanting those things? No, absolutely not. They're
perfectly reasonable to want. But both the technical means of expressing
that (the certificate) and the way to display that to the user (the UI
bar), neither of these hold up to rigor. They serve as placebo rather than
panacea, as tiger repelling rocks rather than real protections.

Since improving it as a technical means is an effective non-starter (e.g.
introducing a new origin for only EV certs), the only fallback is to the
cognitive means - and while users such as yourself may know the
jurisdictional details for all the sites they interact with, and may have a
compelling desire for such information, that doesn't necessarily mean it
should be exposed to millions of users. Firefox has about:config, for
example - as well as extensions - and both of those could provide
alternative avenues with much greater simplicity for the common user.


Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
Tim,

I appreciate your reply, but that seems to be backwards looking rather than
forwards looking. That is, it looks at and assumes static-RSA ciphersuites are
acceptable, and thus the entropy risk to TLS is mitigated by client-random
to these terrible TLS-server devices, and the issue to mitigate is the poor
entropy on the server.

However, I don't think that aligns with what I was mentioning - that is,
the expectation going forward of the use of forward-secure cryptography and
ephemeral key exchanges, which do become more relevant to the quality of
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
construction does not meaningfully improve the security of Mozilla users.

I'm curious whether any use case can be brought forward that isn't "So that
we can aid and support the proliferation of insecure devices into users
everyday lives" - as surely that doesn't seem like a good outcome, both for
Mozilla users and for society at large. Nor do I think the propose changes
meaningfully mitigate the harm caused by them, despite the well-meaning
attempt to do so.

On Wed, Dec 13, 2017 at 12:40 PM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As I’m sure you’re aware, RSA key generation is far, far more reliant on
> the quality of the random number generation and the prime selection
> algorithm than TLS is dependent on randomness.  In fact it’s the
> combination of poor randomness with attempts to reduce the cost of RSA key
> generation that has and will continue to cause problems.
>
>
>
> While the number of bits in the key pair is an important security
> parameter, the number of potential primes and their distribution has
> historically not gotten as much attention as it should.  This is why there
> have been a number of high profile breaches due to poor RSA key generation,
> but as far as I know, no known attacks due to the use of randomness
> elsewhere in the TLS protocol.  This is because TLS, like most secure
> protocols, has enough of a gap between secure and insecure that small
> deviations from ideal behavior don’t break the entire protocol.  RSA has a
> well-earned reputation for finickiness and fragility.
>
>
>
> It doesn’t help that RSA key generation has a sort of birthday paradoxy
> feel to it, given that if any two key pairs share a prime number, it’s just
> a matter of time before someone uses Euclid’s algorithm in order to find
> it.  There are PLENTY of possible primes of the appropriate size so that
> this should never happen, but it’s been seen to happen.  I would be shocked
> if we’ve seen the last major security breach based on poor RSA key
> generation by resource constrained devices.
>
>
>
> Given that there exist IETF approved alternatives that could help with
> that problem, they’re worth considering.  I’ve been spending a lot of time
> recently looking at the state of the IoT world, and it’s not good.
>
>
>
> -Tim
>
>
>
> From: Ryan Sleevi [mailto:r...@sleevi.com]
> Sent: Wednesday, December 13, 2017 9:52 AM
> To: Tim Hollebeek 
> Cc: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: CA generated keys
>
>
>
>
>
>
>
> On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery.  I also think server side generation
> along
> the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers.  And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>
>
>
> Tim,
>
>
>
> I'm afraid that the use case to justify this change seems to be inherently
> flawed and insecure. I'm hoping you can correct my misunderstanding, if I
> am doing so.
>
>
>
> As I understand it, the motivation for this is to support devices with
> insecure random number generators that might be otherwise incapable of
> generating secure keys. The logic goes that by having the CAs generate
> these keys, we end up with better security - fewer keys leaking.
>
>
>
> Yet I would challenge that assertion, and instead suggest that CAs
> generating keys for these devices inherently makes the system less secure.
> As you know, CAs are already on the hook to evaluate keys against known
> weak sets and reject them. There is absent a formal definition of this in
> the BRs, other than calling out 

Re: CA generated keys

2017-12-13 Thread Matthew Hardeman via dev-security-policy
In principle, I support Mr. Sleevi's position; practically, I lean toward
Mr. Thayer's and Mr. Hollebeek's position.

Sitting on my desk are not less than 3 reference designs.  At least two of
them have decent hardware RNG capabilities.  What's noteworthy is the
garbage software stack, kernel support, etc. for that hardware.  The FAEs
for these run the gamut from "I'm ashamed of the reference software we're
crippling this fantastic design with" all the way to "Here, just use this
library for random." (It's a PRNG with a static seed.  And a bad PRNG alg
at that.)

Access to this kind of hardware requires a devil's bargain in which you
sign away your right to detail these kinds of things.  That's the case here.

What I can say is that fresh new reference designs being incorporated into
consumer products today certainly don't make things any easier for anyone
hurrying to bring a product to market. At least not if they want security.

Having said that, some practical thoughts:

It's mostly a Linux kernel universe on these devices.  Even in cases where
the kernel isn't plumbed through to the hw rng generation, as a function of
increasing run-time and actual sporadic use, the entropy pool improves to a
tolerable level with time.  The trouble arises from the fact that key
generation tends to be a new device setup and on boarding procedure and
thus executes in a predictable manner on a pretty precisely predictable
timing.  Thus, these keys tend to be generated before a sufficient entropy
pool exists.

Regarding the security of a device with poor original entropy and its
appropriateness in TLS, I would point out that a deterministic
pseudo-random number generator is perfectly acceptable for cryptographic
purposes as long as there is a sufficient initial random seed.  In the
absence of a better source, the limited entropic data that is available
could be combined with a value deterministically derived from, for example,
the well engineered generated-off-device private key.  This can all be
trivially implemented in user space and by developers with less familiarity
with interacting with proprietary devices on various hardware busses,
special random generation processor opcodes, etc, etc.

It is naive to believe that you will timely become aware of the various
permutations of the weak keys.  It is naive to believe that policy making
it hard to get certificates for those devices will cause those devices to
be timely replaced.

These SCADA devices caught up in the ROCA mess - did they actually replace
those devices, update the software with an off-platform key generator, or
just front them with a reverse proxy?  I'm betting it was the second or
third of those options.  And that's for professional gear deployed in
presumably large commercial environments.



On Wed, Dec 13, 2017 at 10:52 AM, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> >
> > Wayne,
> >
> > For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> > certificate
> > at the same time should be allowed, and I have no problem with a
> > requirement
> > to delete the key after delivery.  I also think server side generation
> > along
> > the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC
> > 7030
> > is about client certificates, but in a world with lots of tiny
> > communicating
> > devices that interface with people via web browsers, there are lots of
> > highly
> > resource constrained devices with poor access to randomness out there
> > running
> > web servers.  And I think we are heading quickly towards that world.
> > Tightening up the requirements to allow specific, approved mechanisms is
> > fine.
> > We don't want people doing random things that might not be secure.
> >
>
> Tim,
>
> I'm afraid that the use case to justify this change seems to be inherently
> flawed and insecure. I'm hoping you can correct my misunderstanding, if I
> am doing so.
>
> As I understand it, the motivation for this is to support devices with
> insecure random number generators that might be otherwise incapable of
> generating secure keys. The logic goes that by having the CAs generate
> these keys, we end up with better security - fewer keys leaking.
>
> Yet I would challenge that assertion, and instead suggest that CAs
> generating keys for these devices inherently makes the system less secure.
> As you know, CAs are already on the hook to evaluate keys against known
> weak sets and reject them. There is absent a formal definition of this in
> the BRs, other than calling out illustrative examples such as
> Debian-generated keys (which share the flaw you mention), or, in more
> recent discussions, the ROCA-affected keys. Or, for the academic take,
> https://factorable.net/weakkeys12.extended.pdf , or the research at
> https://crocs.fi.muni.cz/public/papers/usenix2016 that 

Re: On the value of EV

2017-12-13 Thread Tim Shirley via dev-security-policy
So many of the arguments made here, such as this one, as well as the recent 
demonstrations that helped start this thread, focus on edge cases.  And while 
those are certainly valuable to consider, they obscure the fact that “Green 
Bar” adds value in the mainstream use cases.  If we were talking about how to 
improve EV, then by all means focus on the edge cases.  The thing I don’t see 
in all this is a compelling argument to take away something that’s useful most 
of the time.

As an employee of a CA, I’m sure many here will dismiss my point of view as 
self-serving.  But when I am making trust decisions on the internet, I 
absolutely rely on both the URL and the organization information in the “green 
bar”.  I relied on it before I worked for a CA, and I’m pretty sure I’ll still 
rely on it after I no longer work in this industry (if such a thing is even 
possible, as some in the industry have assured me it’s not).

Sure, I don’t pay attention if I’m just reading the news or something.  But 
before I enter credentials or credit card info into a web page, I absolutely 
look at both the URL and the organization name to see if they match my 
expectations.  If the company name shown is not what I expected or if it’s 
absent altogether, that’s a red flag to me to either do a little more research 
before proceeding, or abandon it altogether.  I agree, James & Ian’s 
demonstrations show cases where the information presented was not effective for 
the end user.  But it seems an incredible leap to me to go from a couple of 
demonstrated shortcomings to suggesting outright removal of something that is 
useful most of the time.  It also seems that if you follow that line of 
thinking, you have to also advocate for removing the URL from display.  If 
“Identity Verified” as a company name is going to confuse some people into 
trusting the site, then couldn’t I also confuse many of the same people by 
registering “identity-verified.com” or some variant?

I don’t claim to speak for anyone but myself as a web user here.  I probably 
view a web site with more suspicion than most of the general public, as a 
result of the nature of my work.  The majority of users are probably going to 
make their trust decisions purely based on whether or not the browser jumps in 
with an interstitial warning them that it’s a known malicious site.  Absent 
that, they’re going to trust that if the page has Megabank’s logo on it, then 
it’s really Megabank.  While I appreciate the value the malicious site filters 
are providing me, they can’t know about every bad site, and I’m not willing to 
fully outsource my trust decisions to them.  Safari’s decision to hide the URL 
and only display the organization name on a site with an EV cert is a 
deal-killer to me using it, because it’s taking away information I rely on.  
Similarly, if Firefox were to remove the EV indicator, that would be more than 
enough reason for me to switch to another browser that still had it.  Of course 
a scenario like Nick describes could happen to subvert my decision.  Of course 
I might make a human mistake in interpreting the displayed organization name in 
a particular instance.  But what I am confident of is, in the totality of my 
web usage, my credentials / credit card / whatever will be sent to wrong people 
less times if you give me that information than if you hide it from me.


On 12/13/17, 12:38 PM, "dev-security-policy on behalf of Nick Lamb via
dev-security-policy" wrote:

On Wed, 13 Dec 2017 12:29:40 +0100
Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

> What is *programmatically* enforced is too little for human safety.
> Believing that computers can replace human judgement is a big mistake.
> Most of the world knows this.

That's a massive and probably insurmountable problem then since the
design of HTTPS in particular and the way web browsers are normally
used is _only_ compatible with programmatic enforcement.

Allow me to illustrate:


Suppose you visit your bank's web site. There is a lovely "Green
Bar" EV certificate, and you, as a vocal enthusiast for the value of
Extended Validation, examine this certificate in considerable detail,
verifying that the business identified by the certificate is indeed
your bank. You are doubtless proud that this capability was available
to you.


You fill in your username and password and press "Submit". What happens?


Maybe your web browser finds that the connection it had before to
the bank's web site has gone, maybe it timed out, or there was a
transient network problem or a million other things. But no worry, you
don't run a web browser in order to be bothered with technical minutiae
- the browser will just make a new 

Re: On the value of EV

2017-12-13 Thread Jakob Bohm via dev-security-policy


I have been trying very hard to engage at the substance, but you keep
misunderstanding my statements and then answering that strawman.

So lets reiterate:

- I do not suggest assigning *liability* to the user.

- I do suggest *helping the user* make informed decisions of the kind
 that humans traditionally make in the offline world.  Decisions such as
 "does this look like a safe place to eat?".  "Does something look wrong
 about this?".

- I suggest that users *want to know* who the *real world entity* behind
 a website is before doing certain things in relationship to that real
 world entity (possibly through the website, possibly not).

- I suggest that the EV UI provides *useful information* to human users
 deciding if they want to interact with a real world entity.

- I suggest that EV certificates *do provide the warranties* listed in
 section 7.1 of the EV guidelines.

- I suggest that the exclusions in section 2.1.3 of the EV guidelines
 simply mean the CA does not judge or police companies, *only check
 their identities*. This does not contradict the section 7.1 warranties.

- I suggest that statistics about how little users understand the EV
 user interface and ecosystem do not provide any information about the
 practical usefulness of what little they do understand.  I do not claim
 to have statistics about that usefulness, which can only be measured
 from comparing real world events that are sufficiently similar, or by
 very carefully conducted behavioral experiments (not to be confused
 with A/B experiments on unwilling participants).




On 13/12/2017 13:39, Ryan Sleevi wrote:

On Wed, Dec 13, 2017 at 6:29 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:




Yes. This is the foundation and limit of Web Security.

https://en.wikipedia.org/wiki/Same-origin_policy

This is what is programmatically enforced. Anything else either requires

new

technology to technically enforce it (such as a new scheme), or is
offloading the liability to the user.



What is *programmatically* enforced is too little for human safety.
Believing that computers can replace human judgement is a big mistake.
Most of the world knows this.



That is a misguided and inaccurate rephrasing.

However, it still shows that you are fundamentally taking the viewpoint
that:
1) Users should be responsible and bear the liability (straight up user
hostile)
2) This information is as critical as the one piece of truly guaranteed
information, the URL (it isn’t)
3) It is a usable solution to require the visual determination as to
whether a given piece of information is present - that is, a positive
indicator (where both general studies AND browser specific studies show
this doesn’t work)

You aren’t adding to this, you’re simply phrasing your view that this
information is valuable. You haven’t responded to these points as to the
user experience, or the research, but instead theorize about how it should
be, or power users, or user education, all while ignoring the substance of
these realities.



You need to understand that not every trust begins and ends with a
Google search for a URL.



You need to understand that EV specifically states it is not for this
purpose. As already provided to you from the EVGs.



Sometimes people buy cheaper items online and just need to know that
their credit card transaction is not visible to a random company (hence
the common practice of outsourcing the entry of card details to a
reputable clearing service that promises not to hand the credit card
number back to the seller).



EV does not provide this. This is just a basic understanding of the technology.

Sometimes people make bigger purchases and

need the assurance that there is a real company at the other end, which
can (if necessary) be sued for non-delivery.



EV EXPLICITLY does not provide this. Read the EVGs.

Sometimes people make

really big transactions and need to know that they are dealing with a
real world entity that they have a real world trust relationship with.



EV EXPLICITLY does not provide this. Read the EVGs.

I have been copying the example name from message to message, with no one
objecting.  Saving up this mistake for use as ammunition when you run
out of arguments is not a nice way to argue.



Getting upset doesn’t undermine the fact that you’ve continued to make
mistakes that have already been addressed in both the original research and
past replies to you. The discussion has not been moved forward by the
points you’ve raised, because they’ve already been shown to be logically or
factually flawed and unsupported. I do hope that you will revisit these and
see how the points you’ve raised - even in this very message - are already
disputed by the research, design, and technology.


The remainder of your argument basically boils down to "But Banks already

are offloading the liability to users when they say check for the green
bar" (and that is bad, user hostile, and unsustainable), and 

RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy
As I’m sure you’re aware, RSA key generation is far, far more reliant on the 
quality of the random number generation and the prime selection algorithm than 
TLS is dependent on randomness.  In fact it’s the combination of poor 
randomness with attempts to reduce the cost of RSA key generation that has 
caused, and will continue to cause, problems.

 

While the number of bits in the key pair is an important security parameter, 
the number of potential primes and their distribution has historically not 
gotten as much attention as it should.  This is why there have been a number of 
high-profile breaches due to poor RSA key generation, but as far as I know, no 
known attacks due to the use of randomness elsewhere in the TLS protocol.  This 
is because TLS, like most secure protocols, has enough of a gap between secure 
and insecure that small deviations from ideal behavior don’t break the entire 
protocol.  RSA has a well-earned reputation for finickiness and fragility.

 

It doesn’t help that RSA key generation has a sort of birthday-paradox feel to 
it, given that if any two key pairs share a prime number, it’s just a matter of 
time before someone uses Euclid’s algorithm in order to find it.  There are 
PLENTY of possible primes of the appropriate size so that this should never 
happen, but it’s been seen to happen.  I would be shocked if we’ve seen the 
last major security breach based on poor RSA key generation by resource 
constrained devices.
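The Euclid attack described here can be sketched in a few lines. This is a toy illustration with small Mersenne primes (real RSA primes are 1024 bits or more), showing that only the two public moduli are needed:

```python
import math

# Two RSA moduli whose key generation (due to a bad RNG) reused the prime p.
# Toy-sized Mersenne primes for illustration only; real primes are far larger.
p = 2**127 - 1   # prime shared by both keys
q1 = 2**89 - 1   # second prime of key 1
q2 = 2**107 - 1  # second prime of key 2

n1 = p * q1      # public modulus of key 1
n2 = p * q2      # public modulus of key 2

# Euclid's algorithm on the two *public* moduli recovers the shared prime,
# which immediately factors both keys -- no private material needed.
shared = math.gcd(n1, n2)
assert shared == p
assert n1 // shared == q1 and n2 // shared == q2
```

With large collections of observed keys, the same computation is run in aggregate with a product tree, which is how the factorable.net survey found shared factors at scale.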

 

Given that there exist IETF approved alternatives that could help with that 
problem, they’re worth considering.  I’ve been spending a lot of time recently 
looking at the state of the IoT world, and it’s not good.

 

-Tim

 

From: Ryan Sleevi [mailto:r...@sleevi.com] 
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA generated keys

 

 

 

On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:


Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery.  I also think server side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource constrained devices with poor access to randomness out there running
web servers.  And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

 

Tim,

 

I'm afraid that the use case to justify this change seems to be inherently 
flawed and insecure. I'm hoping you can correct my misunderstanding, if I am 
doing so.

 

As I understand it, the motivation for this is to support devices with insecure 
random number generators that might be otherwise incapable of generating secure 
keys. The logic goes that by having the CAs generate these keys, we end up with 
better security - fewer keys leaking.

 

Yet I would challenge that assertion, and instead suggest that CAs generating 
keys for these devices inherently makes the system less secure. As you know, 
CAs are already on the hook to evaluate keys against known weak sets and reject 
them. A formal definition of this is absent from the BRs, other than 
calling out illustrative examples such as Debian-generated keys (which share 
the flaw you mention), or, in more recent discussions, the ROCA-affected keys. 
Or, for the academic take, https://factorable.net/weakkeys12.extended.pdf , or 
the research at https://crocs.fi.muni.cz/public/papers/usenix2016 that itself 
appears to have led to ROCA being detected.

 

Quite simply, the population you're targeting - "tiny communication devices ... 
with poor access to randomness" - are inherently insecure in a TLS world. TLS 
itself depends on entropy, especially for the ephemeral key exchange 
ciphersuites required for use in HTTP/2 or TLS 1.3, and so such devices do not 
somehow become 'more' secure by having the CA generate the key, but then 
negotiate poor TLS ciphersuites.

 

More importantly, the change you propose would have the incidental effect of 
making it more difficult to detect such devices and work with vendors to 
replace or repair them. This seems to overall make Mozilla users less secure, 
and the ecosystem less secure.

 

I realize that there is somewhat a conflict - we're today requiring that CDNs 
and vendors can generate these keys (thus masking off the poor entropy from 
detection), while not allowing the CA to participate - but I think that's 
consistent with a viewpoint that the CA should not actively facilitate 

Re: On the value of EV

2017-12-13 Thread Nick Lamb via dev-security-policy
On Wed, 13 Dec 2017 12:29:40 +0100
Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

> What is *programmatically* enforced is too little for human safety.
> Believing that computers can replace human judgement is a big mistake.
> Most of the world knows this.

That's a massive and probably insurmountable problem then since the
design of HTTPS in particular and the way web browsers are normally
used is _only_ compatible with programmatic enforcement.

Allow me to illustrate:


Suppose you visit your bank's web site. There is a lovely "Green
Bar" EV certificate, and you, as a vocal enthusiast for the value of
Extended Validation, examine this certificate in considerable detail,
verifying that the business identified by the certificate is indeed
your bank. You are doubtless proud that this capability was available
to you.


You fill in your username and password and press "Submit". What happens?


Maybe your web browser finds that the connection it had before to
the bank's web site has gone, maybe it timed out, or there was a
transient network problem or a million other things. But no worry, you
don't run a web browser in order to be bothered with technical minutiae
- the browser will just make a new connection. This sort of thing
happens all the time without any trouble.

This new connection involves a fresh TLS setup, the server and browser
must begin again, the server will present its certificate to establish
identity. The web browser examines this certificate programmatically to
decide that it's OK, and if it is, the HTTPS form POST operation for
the log in form is completed by sending your username and password over
the new TLS connection.


You did NOT get to examine this certificate. Maybe it's the same one as
before, maybe it's slightly different, maybe completely different, the
hardware (let alone software) answering needn't be the same as last
time and the certificate needn't have any EV data in it. Your web
browser was happy with it, so that's where your bank username and
password were sent.

Even IF you decide now, with the new connection, that you don't trust
this certificate, it's too late. Your credentials were already
delivered to whoever had that certificate.
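That invisible, per-connection trust decision is exactly what TLS client stacks apply by default; Python's ssl module makes the defaults easy to see (a minimal illustration, not browser code):

```python
import ssl

# A default client context re-validates the peer certificate on *every*
# new connection: chain verification and hostname checking are enabled.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The check is purely programmatic: any certificate that chains to a
# trusted root and matches the hostname passes, EV or not, and whether
# or not it is the certificate a human inspected moments earlier.
```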



Software makes these trust decisions constantly, they take only the
blink of an eye, and require no human attention, so we can safely build
a world that requires millions of them. The moment you demand human
attention, you not only introduce lots of failure modes, you also use
up a very limited resource.

Perhaps you feel that when browsing the web you make a conscious
decision about trust for each site you visit. Maybe, if you are
extraordinarily cautious, you make the decision for individual web
pages. Alas, to be of any use the decisions must be taken for every
single HTTP operation, and most pages will use dozens (some hundreds)
of such operations.






___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA generated keys

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery.  I also think server side generation
> along
> the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers.  And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>

Tim,

I'm afraid that the use case to justify this change seems to be inherently
flawed and insecure. I'm hoping you can correct my misunderstanding, if I
am doing so.

As I understand it, the motivation for this is to support devices with
insecure random number generators that might be otherwise incapable of
generating secure keys. The logic goes that by having the CAs generate
these keys, we end up with better security - fewer keys leaking.

Yet I would challenge that assertion, and instead suggest that CAs
generating keys for these devices inherently makes the system less secure.
As you know, CAs are already on the hook to evaluate keys against known
weak sets and reject them. A formal definition of this is absent from
the BRs, other than calling out illustrative examples such as
Debian-generated keys (which share the flaw you mention), or, in more
recent discussions, the ROCA-affected keys. Or, for the academic take,
https://factorable.net/weakkeys12.extended.pdf , or the research at
https://crocs.fi.muni.cz/public/papers/usenix2016 that itself appears to
have led to ROCA being detected.
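A toy version of the shared-factor scan behind those papers (a quadratic pairwise GCD; the published batch-GCD attack computes the same result in quasi-linear time with a product/remainder tree; names here are illustrative):

```python
import math
from itertools import combinations

def find_shared_factors(moduli):
    """Report every pair of public RSA moduli sharing a nontrivial factor.

    Toy O(n^2) scan; the real batch-GCD attack computes the same result
    with a product tree so it scales to millions of certificates.
    """
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = math.gcd(n1, n2)
        if g not in (1, n1, n2):  # a nontrivial shared prime factor
            hits.append((i, j, g))
    return hits

# Three toy "keys": key 0 and key 2 reused the prime 101.
keys = [101 * 103, 107 * 109, 101 * 113]
print(find_shared_factors(keys))  # [(0, 2, 101)]
```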

Quite simply, the population you're targeting - "tiny communication devices
... with poor access to randomness" - are inherently insecure in a TLS
world. TLS itself depends on entropy, especially for the ephemeral key
exchange ciphersuites required for use in HTTP/2 or TLS 1.3, and so such
devices do not somehow become 'more' secure by having the CA generate the
key, but then negotiate poor TLS ciphersuites.

More importantly, the change you propose would have the incidental effect
of making it more difficult to detect such devices and work with vendors to
replace or repair them. This seems to overall make Mozilla users less
secure, and the ecosystem less secure.

I realize that there is somewhat a conflict - we're today requiring that
CDNs and vendors can generate these keys (thus masking off the poor entropy
from detection), while not allowing the CA to participate - but I think
that's consistent with a viewpoint that the CA should not actively
facilitate insecurity, which I fear your proposal would.

Thus, I would suggest that the current status quo - a prohibition against
CA generated keys - is positive for the SSL/TLS ecosystem in particular,
and any such devices that struggle with randomness should be dismantled and
replaced, rather than encouraged and proliferated.


RE: CA generated keys

2017-12-13 Thread Tim Hollebeek via dev-security-policy

Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate 
at the same time should be allowed, and I have no problem with a requirement 
to delete the key after delivery.  I also think server side generation along 
the lines of RFC 7030 (EST) section 4.4 should be allowed.  I realize RFC 7030 
is about client certificates, but in a world with lots of tiny communicating 
devices that interface with people via web browsers, there are lots of highly 
resource constrained devices with poor access to randomness out there running 
web servers.  And I think we are heading quickly towards that world. 
Tightening up the requirements to allow specific, approved mechanisms is fine. 
We don't want people doing random things that might not be secure.

As usual, non-TLS certificates have a completely different set of concerns. 
Demand for escrow of client/email certificates is much higher and the practice 
is much more common, for a variety of business reasons.

-Tim




Re: On the value of EV

2017-12-13 Thread Ryan Sleevi via dev-security-policy
On Wed, Dec 13, 2017 at 6:29 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> > Yes. This is the foundation and limit of Web Security.
> >
> > https://en.wikipedia.org/wiki/Same-origin_policy
> >
> > This is what is programmatically enforced. Anything else either requires
> new
> > technology to technically enforce it (such as a new scheme), or is
> > offloading the liability to the user.
> >
>
> What is *programmatically* enforced is too little for human safety.
> Believing that computers can replace human judgement is a big mistake.
> Most of the world knows this.


That is a misguided and inaccurate rephrasing.

However, it still shows that you are fundamentally taking the viewpoint
that:
1) Users should be responsible and bear the liability (straight up user
hostile)
2) This information is as critical as the one piece of truly guaranteed
information, the URL (it isn’t)
3) It is a usable solution to require the visual determination as to
whether a given piece of information is present - that is, a positive
indicator (where both general studies AND browser specific studies show
this doesn’t work)

You aren’t adding to this, you’re simply phrasing your view that this
information is valuable. You haven’t responded to these points as to the
user experience, or the research, but instead theorize about how it should
be, or power users, or user education, all while ignoring the substance of
these realities.

>
> You need to understand that not every trust begins and ends with a
> Google search for a URL.


You need to understand that EV specifically states it is not for this
purpose. As already provided to you from the EVGs.

>
> Sometimes people buy cheaper items online and just need to know that
> their credit card transaction is not visible to a random company (hence
> the common practice of outsourcing the entry of card details to a
> reputable clearing service that promises not to hand the credit card
> number back to the seller).


EV does not provide this. This is just a basic understanding of the technology.

Sometimes people make bigger purchases and
> need the assurance that there is a real company at the other end, which
> can (if necessary) be sued for non-delivery.


EV EXPLICITLY does not provide this. Read the EVGs.

Sometimes people make
> really big transactions and need to know that they are dealing with a
> real world entity that they have a real world trust relationship with.


EV EXPLICITLY does not provide this. Read the EVGs.

I have been copying the example name from message to message, with no one
> objecting.  Saving up this mistake for use as ammunition when you run
> out of arguments is not a nice way to argue.


Getting upset doesn’t undermine the fact that you’ve continued to make
mistakes that have already been addressed in both the original research and
past replies to you. The discussion has not been moved forward by the
points you’ve raised, because they’ve already been shown to be logically or
factually flawed and unsupported. I do hope that you will revisit these and
see how the points you’ve raised - even in this very message - are already
disputed by the research, design, and technology.

> The remainder of your argument basically boils down to "But Banks already
> > are offloading the liability to users when they say check for the green
> > bar" (and that is bad, user hostile, and unsustainable), and the "Look
> for
> > the corporate identity" has been shown repeatedly to be insufficient and
> > incomplete that if that is the response you'd offer, then it's not
> > introducing new information into the conversation.
> >
>
> No, I was using the awareness campaigns by banks as an example of how
> users can be, and have been, trained to use the EV UI even if they don't
> fully understand it.  It was a counterexample to your use of misleading
> statistics about how few users understand the nuances of EV
> certificates.


It is hardly a counter-example. It continues to be unsupported by data, by
the extant user studies contradicting your conclusions and belief - that
they are effective and users understand - and they themselves still rely on the
fundamentally flawed approach of shifting the liability to the user to make
sense of the legal identity.

You have yet to respond to the substance of this basic model about users -
continuing to insist that somehow it’s reasonable to expect billions of
users to be aware of an interface that shows the jurisdictional nuance in a
critical UI point. It’s unclear whether or not you even acknowledge the
current flaws - I would hope, given your earlier proposal to display the
full jurisdictional information, that you can at least acknowledge that EV
as it presently exists is insufficient UI and insufficient validation for
the status afforded it. At best, your view seems to be to double down on
promoting a user-hostile, unrealistic workflow, by adding even more
information (ignoring the research and basic 

Re: On the value of EV

2017-12-13 Thread Jakob Bohm via dev-security-policy

On 12/12/2017 22:51, Ryan Sleevi wrote:

On Tue, Dec 12, 2017 at 3:44 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


What you are writing below, with far too many words is that you think
that URLs are the only identities that matter in this world, and
therefore DV certificates are enough security for everyone.



Yes. This is the foundation and limit of Web Security.

https://en.wikipedia.org/wiki/Same-origin_policy

This is what is programmatically enforced. Anything else either requires new
technology to technically enforce it (such as a new scheme), or is
offloading the liability to the user.
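The policy cited above can be stated precisely: two URLs share an origin exactly when their (scheme, host, port) triples match. A minimal sketch (default ports filled in for http/https):

```python
from urllib.parse import urlsplit

def origin(url):
    """Reduce a URL to the (scheme, host, port) triple the policy compares."""
    parts = urlsplit(url)
    default_port = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default_port)

def same_origin(a, b):
    return origin(a) == origin(b)

# The comparison knows nothing about the real-world entity behind a host:
assert same_origin("https://example.com/login", "https://example.com:443/pay")
assert not same_origin("http://example.com/", "https://example.com/")   # scheme
assert not same_origin("https://example.com/", "https://example.net/")  # host
```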



What is *programmatically* enforced is too little for human safety.
Believing that computers can replace human judgement is a big mistake.
Most of the world knows this.

That is why there is such a thing as identity documents in the real
world.  Because humans often need to know who they are talking to, not
just that they have a vanity plate and a company logo on their white
van.

Humans have opinions about and relationships with other humans and
human-operated companies.  The prominent display of CA vetted identity
information in addition to the self-selected network address (URLs)
provides this information to human users as part of their decision
process.  The way the information is presented is very similar to how
such information is presented in real world trust scenarios: Id cards
pinned to the clothes or hanging around the neck.  Official business
license framed on the wall behind the counter.  Official health and
safety inspection report posted at the door.  People glance to see it is
there, occasionally reading just enough to see if it looks right, taking
comfort in the other party not knowing if today is the day you will
actually read and not just glance.

You need to understand that not every trust begins and ends with a
Google search for a URL.  The more real the stakes are, the more real
the basis of trust needs to be.  Sometimes people are just commenting on
a blog and don't care much of the blogger is even a real person.
Sometimes people buy cheaper items online and just need to know that
their credit card transaction is not visible to a random company (hence
the common practice of outsourcing the entry of card details to a
reputable clearing service that promises not to hand the credit card
number back to the seller).  Sometimes people make bigger purchases and
need the assurance that there is a real company at the other end, which
can (if necessary) be sued for non-delivery.  Sometimes people make
really big transactions and need to know that they are dealing with a
real world entity that they have a real world trust relationship with.



Respectfully, I would encourage you to re-read both Ian's and James'
research. For example, you will find that the organization being discussed
is "Stripe, Inc", not "Spring, Inc" - a mistake made frequent enough to not
be charitably attributabed as a typo. The question about the level of
stringency on the validation requirements has also been responded to, as
well as the deficiencies of "Well, they'd have to lie to do so" as a
response.



I have been copying the example name from message to message, with no one
objecting.  Saving up this mistake for use as ammunition when you run
out of arguments is not a nice way to argue.


The remainder of your argument basically boils down to "But Banks already
are offloading the liability to users when they say check for the green
bar" (and that is bad, user hostile, and unsustainable), and the "Look for
the corporate identity" has been shown repeatedly to be insufficient and
incomplete that if that is the response you'd offer, then it's not
introducing new information into the conversation.



No, I was using the awareness campaigns by banks as an example of how
users can be, and have been, trained to use the EV UI even if they don't
fully understand it.  It was a counterexample to your use of misleading
statistics about how few users understand the nuances of EV
certificates.


I agree that we should be concerned about potential fraud, and there are
far more user-friendly technologies that can help mitigate that - as I
mentioned. That doesn't mean that getting rid of EV UI is throwing the
proverbial baby out - it means having the maturity to accept that some
technological experiments don't pan out, and as good engineers and
socially-responsible developers, we should recognize when certain features
are causing systemic harm to users overall security. I realize the innate
appeal to "Let users decide" by giving them an option, but a trivial survey
of human-computer interaction literature should reveal the flaw in that. If
that is too much to ask, reading about "Analysis Paralysis", "Decision
Fatigue", and "Information Overload" on Wikipedia should all provide
sufficient background context.


I am saying that your view of what the EV system achieves and has
already achieved is 

Re: Misissuance of EV Certificates

2017-12-13 Thread cornelia.enke--- via dev-security-policy
Am Dienstag, 12. Dezember 2017 18:04:55 UTC+1 schrieb Ryan Sleevi:
> On Tue, Dec 12, 2017 at 10:18 AM, Nick Lamb via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> >
> > > The implemented controls detected the misconfiguration within 24
> > > hours. The incorrect configuration was nevertheless recorded as a
> > > security incident. The handling of the security incident by the
> > > information security management team is still underway. Further
> > > measures will be decided within this process.
> >
> > I suspect I speak for others on m.d.s.policy when I ask that you let us
> > know of any such measures that are decided. This sort of incident could
> > happen to many CAs, there's no need for everybody to learn the hard way.
> >
> >
> Indeed, the purpose of incident reporting is not to shame CAs at all, but
> rather, to help all of us working together, collectively, build a more
> secure web.
> 
> Similarly, the goal is to understand not how people fail, but how systems
> fail - not "who was responsible" but "how was this possible"

This was a human error during the setup process. The problem could have been 
avoided if there had been restrictive policies for the test setup. We are 
currently examining how we can define this as a long-term measure.

> 
> To that end, I think it would be beneficial if you could:
> - Share a timeline as to when to expect the next update. It seems like 72
> hours is a reasonable timeframe for the next progress update and
> information sharing.

We will give an update on Friday December 15th.

> - Explore and explain how the following was possible:
>   - 2017/12/04 2 p.m. UTC:   Test Setup with wrong configuration has been
> set up.
>   That is, it was detected during the "2017/12/11 2.30 p.m. UTC" internal
> review, which is good, but why wasn't it detected sooner - or even prior to
> being put in production?

Because this was a test setup, the regular review process was given a lower 
priority.
We will also reassess this review process within the long-term measures.


> Again, the goal is not to find who to blame, but to understand how systems
> fail, and how they can be made more robust. What privileges do personnel
> have can lead to discussions about "How should a CA - any CA - structure
> its access controls?" How was it possible to deploy the wrong configuration
> can help inform "How should a CA - any CA - handle change management?".
> 
> Our focus is on systems failure, not personal failure, because it helps us
> build better systems :)

You are correct – this must be the main focus in our long-term countermeasures.