On Fri, Sep 12, 2014 at 08:55:51AM +0300, Henri Sivonen wrote:
> On Thu, Sep 11, 2014 at 9:00 PM, Richard Barnes <rbar...@mozilla.com> wrote:
> >
> > On Sep 11, 2014, at 9:08 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> >
> >> On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes <rbar...@mozilla.com> 
> >> wrote:
> >>> Most notably, even over non-secure origins, application-layer encryption 
> >>> can provide resistance to passive adversaries.
> >>
> >> See https://twitter.com/sleevi_/status/509723775349182464 for a long
> >> thread on Google's security people not being particularly convinced by
> >> that line of reasoning.
> >
> > Reasonable people often disagree in their cost/benefit evaluations.
> >
> > As Adam explains much more eloquently, the Google security team has had an 
> > "all-or-nothing" attitude on security in several contexts.  For example, in 
> > the context of HTTP/2, Mozilla and others have been working to make it 
> > possible to send http-schemed requests over TLS, because we think it will 
> > result in more of the web getting some protection.
> 
> It's worth noting, though, that anonymous ephemeral Diffie–Hellman* as
> the baseline (as advocated in
> http://www.ietf.org/mail-archive/web/ietf/current/msg82125.html ) and
> unencrypted as the baseline with a trivial indicator to upgrade to
> anonymous ephemeral Diffie–Hellman (as in
> https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 )
> are very different things.
> 
> If the baseline were that there's no unencrypted mode and every
> connection starts with anonymous ephemeral Diffie–Hellman, a passive
> eavesdropper would never see content. To pervasively monitor
> content, the eavesdropper would have to not only have the capacity to
> compute Diffie–Hellman for each connection handshake but would also
> have to maintain state about the symmetric keys negotiated for each
> connection and keep decrypting and re-encrypting data for the duration
> of each connection. This might indeed lead to the cost outcomes that
> Theodore Ts'o postulates.
> 
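> To make that cost concrete, here's a rough sketch of the
> per-connection work in Python (using the pyca/cryptography package;
> X25519 stands in for whatever DH flavor, and the names are purely
> illustrative):
> 
>     # Per-connection work that anonymous ephemeral DH forces on an
>     # eavesdropper who wants content: do the handshake math, then hold
>     # per-connection key state for the connection's whole lifetime.
>     from cryptography.hazmat.primitives import hashes
>     from cryptography.hazmat.primitives.asymmetric.x25519 import (
>         X25519PrivateKey,
>     )
>     from cryptography.hazmat.primitives.kdf.hkdf import HKDF
> 
>     def ephemeral_handshake() -> bytes:
>         # Fresh ephemeral keys for every single connection...
>         client = X25519PrivateKey.generate()
>         server = X25519PrivateKey.generate()
>         # ...and a symmetric key derived from the shared secret.
>         shared = client.exchange(server.public_key())
>         return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
>                     info=b"anon-dh-demo").derive(shared)
> 
>     # A monitor must do equivalent work for *every* connection it
>     # intercepts and then keep decrypting and re-encrypting with the
>     # stored key until the connection closes.
>     keys = {conn_id: ephemeral_handshake() for conn_id in range(1000)}
> 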
> https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00 is
> different. A passive eavesdropper indeed doesn't see content after the
> initial request/response pair, but to see all content, the level of
> "active" that the eavesdropper needs to upgrade to is pretty minimal.
> To continue to see content, all the MITM needs to do is to overwrite
> the relevant HTTP headers with space (0x20) bytes. There's no need to
> maintain state beyond dealing with one of those headers crossing a
> packet boundary. There's no need to adjust packet sizes. There's no
> compute or state maintenance requirement for the whole duration of the
> connection.
> 
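> To illustrate how little state that is, here's a toy sketch of such a
> middlebox in Python. The header field name is a placeholder (an
> assumption, not quoted from the draft); the point is the in-place
> 0x20 overwrite plus a tiny carry-over buffer for the packet-boundary
> case:
> 
>     # Toy MITM filter: blank the opportunistic-upgrade header with
>     # spaces as packets stream through. Overwriting in place means
>     # packet sizes never change.
>     HEADER = b"Alt-Svc:"  # placeholder field name
> 
>     class HeaderBlanker:
>         KEEP = len(HEADER) - 1  # longest prefix that can straddle a boundary
> 
>         def __init__(self) -> None:
>             self.tail = b""  # the only state: a few carried-over bytes
> 
>         def feed(self, packet: bytes) -> bytes:
>             buf = bytearray(self.tail + packet)
>             low = bytes(buf).lower()
>             i = low.find(HEADER.lower())
>             while i != -1:
>                 end = low.find(b"\r\n", i)
>                 end = len(buf) if end == -1 else end
>                 buf[i:end] = b" " * (end - i)  # 0x20 fill
>                 i = low.find(HEADER.lower(), end)
>             # Hold back just enough bytes to catch a split header name.
>             # (A real tool would also track a value spanning chunks and
>             # flush the tail at stream end; omitted for brevity.)
>             self.tail = bytes(buf[-self.KEEP:]) if len(buf) > self.KEEP else bytes(buf)
>             return bytes(buf[:len(buf) - len(self.tail)])
> 
>     mitm = HeaderBlanker()
>     out = mitm.feed(b'HTTP/1.1 200 OK\r\nAlt-Svc: h2=":443"\r\n\r\n')
> 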
> I have a much easier time believing that anonymous ephemeral
> Diffie–Hellman as the true baseline would make a difference in terms
> of pervasive monitoring, but I have a much more difficult time
> believing that an opportunistic encryption solution that can be
> defeated by overwriting some bytes with 0x20 with minimal maintenance
> of state would make a meaningful difference.
> 
> Moreover, https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-00
> has the performance overhead of TLS, so it doesn't really address the
> "TLS takes too much compute power" objection to https, which is the
> usual objection from big sites that might particularly care about the
> performance carrot of HTTP/2. It only addresses the objection to https
> that obtaining, provisioning and replacing certificates is too
> expensive. (And that's getting less expensive with HTTP/2, since
> HTTP/2 clients support SNI, and SNI makes obsolete the practice of
> getting host names from seemingly unrelated domains certified
> together.)
> 
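> (For concreteness: client-side SNI is just this, with Python's
> standard library and an arbitrary hostname. The server_hostname value
> goes out in the ClientHello, so one IP address can present a
> different certificate per host name instead of bundling unrelated
> names into one certificate.)
> 
>     import socket
>     import ssl
> 
>     # SNI: the requested host name travels in the TLS ClientHello.
>     ctx = ssl.create_default_context()
>     with socket.create_connection(("example.com", 443)) as raw:
>         with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
>             print(tls.version(), tls.getpeercert()["subject"])
> 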
> It seems to me that this pretty seriously undermines the performance
> carrot of HTTP/2 as a vehicle for moving the Web to https. It allows
> people to get the performance characteristics of HTTP/2 while still
> falling short of the last step of making the TLS connection properly
> authenticated.

Do we really want all servers to have to authenticate themselves?  In
most cases they probably should, but I suspect there are cases where
you want to run a server but keep plausible deniability.  I haven't
gone looking for legal precedent, but it seems to me that
cryptographically signing material makes it much harder to credibly
deny having published it.

> Is it really the right call for the Web to let people get the
> performance characteristics without making them do the right thing
> with authenticity (and, therefore, integrity and confidentiality)?
> 
> On the face of things, it seems to me we should be supporting HTTP/2
> only with https URLs even if one buys Theodore Ts'o's reasoning about
> anonymous ephemeral Diffie–Hellman.
> 
> The combination of
> https://twitter.com/sleevi_/status/509954820300472320 and
> http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
> is pretty alarming.

I agree that's bad, but I tend to believe anonymous ephemeral
Diffie–Hellman is good enough to deal with the Comcasts of the world,
and when it comes to the NSA, we're pretty much just not going to be
able to force everyone to use something strong enough that they can't
beat it. And as above, I'd really rather not make life harder for
people who want to serve content anonymously (obviously they should
really just use .onion or something, but that's hard for users to get
at :( ).

Trev

> 
> * In this message, I mean the general concept of DH—not necessarily
> the original discrete log flavor.
> -- 
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/