On Wednesday, December 13, 2017 at 12:50:38 PM UTC-6, Ryan Sleevi wrote:
> On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman <mharde...@gmail.com>
> wrote:
> 
> > As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> > happen assuming a stateful DPRNG with a decent starting entropy corpus.
> >
> 
> Agreed - but that's also true for the devices Tim is mentioning.

I do not mean this facetiously.  If I kept a diary, I might make a note.  I 
feel like I've accomplished something.
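
To make the stateful-DPRNG point concrete, here is a rough sketch of the shape 
of thing I have in mind.  It is Python and purely illustrative; the seed file 
location and the HKDF-based ratchet are my own assumptions, not a prescription 
for any particular device:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.asymmetric import ec

    SEED_FILE = "/persist/dprng.seed"  # hypothetical home of the entropy corpus
    CURVE = ec.SECP256R1()
    # Group order (n) of P-256, used to reduce DPRNG output to a valid scalar.
    CURVE_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

    def dprng_output(num_bytes: int) -> bytes:
        """Ratchet the persisted seed forward and return num_bytes of output."""
        with open(SEED_FILE, "rb") as f:
            seed = f.read()
        okm = HKDF(algorithm=hashes.SHA256(), length=32 + num_bytes,
                   salt=None, info=b"dprng-ratchet").derive(seed)
        new_seed, out = okm[:32], okm[32:]
        # Persist the new state *before* the output is used, so a crash or
        # reboot can never replay the same bytes into two handshakes.
        with open(SEED_FILE, "wb") as f:
            f.write(new_seed)
        return out

    def ephemeral_ecdhe_key() -> ec.EllipticCurvePrivateKey:
        """Derive a fresh ECDHE ephemeral key from DPRNG output."""
        # 320 bits reduced mod (n - 1), plus 1, gives a near-uniform scalar
        # in [1, n-1].
        d = int.from_bytes(dprng_output(40), "big") % (CURVE_ORDER - 1) + 1
        return ec.derive_private_key(d, CURVE)

The property that matters is that the state is ratcheted and written back 
before any output is consumed, so the quality of the exchange rests entirely 
on the quality of the starting corpus.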

> 
> Which I guess is the point I was trying to make - if this can be 'fixed'
> relatively easily for the use case Tim was bringing up, what other use
> cases are there? The current policy serves a purpose, and although that
> purpose is not high in value nor technically rigorous, it serves as an
> external check.
> 
> And yes, I realize the profound irony in me making such a comment in this
> thread while simultaneously arguing against EV in a parallel thread, on the
> basis that the purpose EV serves is not high in value nor technically
> rigorous - but I am having trouble, unlike in the EV thread, understanding
> what harm is caused by the current policy, or what possible things that are
> beneficial are prevented.

I, for one, respect that you pointed out the dichotomy.  I think I understand 
it.

I believe that opening the door to CA-side key generation, under specific terms 
and circumstances, offers an opportunity for various consumers of PKI key pairs 
to acquire higher-quality key pairs than many of the alternatives which would 
otherwise fill the void.

> 
> I don't think we'll see significant security benefit in some circumstances
> - I think we'll see the appearances of, but not the manifestation - so I'm
> trying to understand why we'd want to introduce that risk?

Sometimes we accept one risk, under terms that we can audit and control, in 
order to avoid the risks which we can reasonably predict would arise in a 
vacuum.  I am _not_ well qualified to weigh this particular set of risk 
exposures, most especially the risk of an untrustworthy CA intentionally 
retaining copies of these keys, etc.  I am well qualified to point out that 
both risks exist.  I believe they should probably be weighed against each 
other as a "this or that" dichotomy.

> 
> I also say this knowing how uninteroperable the existing key delivery
> mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
> protection of those are. Combine that with CAs repeated failure to
> correctly implement the specs that are less ambiguous, and I'm worried
> about a proliferation of private keys flying around - as some CAs do for

It _is_ absolutely essential that secure transport and destruction be part of 
what is controlled for and monitored in any scheme where key generation by the 
CA is permitted.  If that falls apart, the mechanism becomes worse than almost 
everything else.
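
To be concrete about what better protection in transit could even look like, 
here is another rough sketch, again Python and purely illustrative.  The 
iteration count, the password handling, and the helper name are my own 
assumptions; the point is wrapping a CA-generated key with PBES2/AES-256 
rather than the legacy RC2/3DES schemes that make PKCS#12 such a minefield:

    import secrets
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.serialization import pkcs12

    def wrap_generated_key(password: bytes) -> bytes:
        """Generate a key on the CA side and wrap it for delivery."""
        key = ec.generate_private_key(ec.SECP256R1())
        # PBES2 with AES-256-CBC, an SHA-256 MAC, and a high PBKDF2 iteration
        # count, instead of the RC2/3DES defaults many tools still emit.
        encryption = (
            serialization.PrivateFormat.PKCS12.encryption_builder()
            .kdf_rounds(600_000)
            .key_cert_algorithm(pkcs12.PBES.PBESv2SHA256AndAES256CBC)
            .hmac_hash(hashes.SHA256())
            .build(password)
        )
        return pkcs12.serialize_key_and_certificates(
            name=b"subscriber-key", key=key, cert=None, cas=None,
            encryption_algorithm=encryption,
        )

    # The passphrase has to travel out of band from the PKCS#12 itself, and
    # the CA-side copy of the key must be destroyed once delivery is
    # confirmed; that destruction step is what needs to be auditable.
    p12_bytes = wrap_generated_key(secrets.token_urlsafe(24).encode())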


> their other, non-TLS certificates. So I see a lot of potential harm in the
> ecosystem, and question the benefit, especially when, as you note, this can
> be mitigated rather significantly by developers not shoveling crap out the
> door. If developers who view "time to market" as more important than
> "Internet safety" can't get their toys, I ... don't lose much sleep.

Aside from cryptography enthusiasts or professionals, it is hard to find 
developers with the right intersection of skill and interest to address the 
security implications.  It becomes more complicated when those implications 
aren't a business imperative, and more complicated still when the customer 
base realizes security has real costs and begins to question the value.  It's 
not just the developers.  The trend among good-_looking_ quick reference 
designs lately is a great spec sheet combined with every imaginable shortcut 
wherever the requirements are not explicitly stated and audited.  It's an 
ecosystem problem that is really hard to solve.

A couple of years ago, my team and I were doing interop testing between a 
device and one of our products.  In the course of that work, we discovered a 
nasty security issue that was blatantly obvious to anyone skilled in our 
particular application area.  We worked with the manufacturer to trace the 
product design back to a reference design from a Chinese ODM.  They were 
ultimately amenable to fixing the issue, but a year later we found at least 14 
distinct affected products in the marketplace, based upon that design, which 
had not pulled in those changes.

Even as the line between hardware engineer and software developer gets more 
and more blurred, there remains a stark division of skill set, knowledge base, 
and even understanding of each other's needs.  That's problematic.