On Sun, Mar 13, 2016 at 12:21 PM, Eric Rescorla <e...@rtfm.com> wrote:

>
> On Sun, Mar 13, 2016 at 3:51 PM, Yoav Nir <ynir.i...@gmail.com> wrote:
>
>>
>> > On 13 Mar 2016, at 4:45 PM, Salz, Rich <rs...@akamai.com> wrote:
>> >
>> >> I also think it is prudent to assume that implementers will turn on
>> replayable
>> >> data even if nobody has figured out the consequences.
>> >
>> > I very much agree.  Customers, particularly those in the mobile field,
>> will look at this and say "I can avoid an extra RTT?  *TURN IT ON*" without
>> fully understanding, or perhaps even really caring about, the security
>> implications.
>>
>> Perhaps, and I think IoT devices are likely to do so as well.
>>
>> Is OpenSSL going to implement this? Are all the browsers?
>>
>
> There are already patches in preparation for this for NSS, and I expect
> Firefox to implement it, as long as we have any indication that a
> reasonable number of servers will accept it.
>

I share some of the concerns expressed in this thread that 0RTT risks
becoming an attractive nuisance.  Once browsers start supporting it, server
operators will feel competitive pressure to support it.  That will in turn
put additional pressure on more browsers to support it, possibly pushing
past the edges of where it is safe.

For example, when is an HTTP GET safe versus not safe, and who makes that
call?  Consider a browser that assumes GET is idempotent and may be sent as
0RTT early data, a web developer who has never heard the word "idempotent"
and builds an app where GET has side effects but assumes it's safe since
it's over TLS, and a server operator who is just turning on a
vendor-provided feature their users requested -- not to mention a CDN or
load-balancing, TLS-terminating proxy that has no good way to convey the
risks across two connections.  One idea for HTTP that I'm increasingly in
favor of is to define a new method ("GET0"?) and require that browsers use
one of these new methods in any early-data request, to expose this as high
in the stack as possible.
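To make that concrete, here is a minimal, purely hypothetical sketch (the
"GET0" method name and the early-data flag are my own strawman, not
anything specified) of how a server or proxy might refuse to process
early-data requests that don't use an explicitly replay-safe method:

    # Hypothetical policy check at the HTTP layer: only methods the
    # application has explicitly marked replay-safe may be processed
    # when the request arrived as 0RTT early data.  How the
    # "received_in_early_data" signal gets from the TLS stack up to
    # here is exactly the hard part for CDNs and TLS-terminating
    # proxies.
    REPLAY_SAFE_METHODS = {"GET0"}

    def accept_request(method, received_in_early_data):
        """Return True if the request may be processed now.

        Requests that arrived as early data but use a normal method
        are rejected so the client retries them after the handshake
        completes.
        """
        if not received_in_early_data:
            return True
        return method in REPLAY_SAFE_METHODS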

It seems like there is a point of diminishing returns here if we compare
some of the options (not meant to be strictly ordered below):

1) We have old-school cleartext over TCP which has 1RTT before client data
(due to SYN/SYNACK)
2) We have TCP + TLS 1.[012], where the SYN/SYNACK plus the 2RTT handshake
means 3RTT before client data.
3) We have TCP TFO + TLS 1.3 1RTT which yields 1RTT before client data  (or
back to the same as #1 for resumption)
4) We have TCP (no TFO) + TLS 1.3 0RTT which also yields 1RTT before client
data for resumption
5) Then there's TCP TFO + TLS 1.3 0RTT which yields 0RTT before client data
for resumption
6) And finally there's old-school cleartext TCP TFO which has 0RTT before
client data, but which people are very hesitant to use for HTTP due to
replay issues.

Of these, #3 and #4 yield similar performance (with some limitations around
#3, such as requiring the same server IP or server IP prefix in some newer
drafts).
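As a sanity check on the arithmetic above, a trivial sketch; the numbers
simply restate the assumptions in the list (1 RTT for SYN/SYNACK unless TFO
carries data in the SYN, 2 RTT for a full TLS 1.[012] handshake, 1 RTT for
TLS 1.3, 0 for 0RTT resumption):

    # Round trips before the first byte of client application data,
    # under the assumptions in the list above.
    def rtts_before_client_data(tcp_rtts, tls_rtts):
        return tcp_rtts + tls_rtts

    options = {
        "1: cleartext TCP":       rtts_before_client_data(1, 0),  # 1
        "2: TCP + TLS 1.[012]":   rtts_before_client_data(1, 2),  # 3
        "3: TFO + TLS 1.3 1RTT":  rtts_before_client_data(0, 1),  # 1
        "4: TCP + TLS 1.3 0RTT":  rtts_before_client_data(1, 0),  # 1
        "5: TFO + TLS 1.3 0RTT":  rtts_before_client_data(0, 0),  # 0
        "6: cleartext TCP TFO":   rtts_before_client_data(0, 0),  # 0
    }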

From this, a few thoughts jump to mind:

* We get many of the same benefits of 0RTT by using TCP TFO (TCP Fast Open)
and TLS 1.3 1RTT mode together as we get when using TLS 1.3 0RTT and stock
TCP.  TFO seems much safer here since its replay risks are at a lower level
and should be safe for TLS outside of the 0RTT context.  Note that there
are some issues with middleboxes sometimes breaking badly (blocking all
connections from a client IP for 30 seconds) when a client tries to use
TFO, as discussed recently in TCPM, but we may all want to focus some
effort on getting those fixed.  (A bare-bones sketch of the socket-level
knobs is below, after this list.)
* We'll almost certainly want to make sure that any UDP-based protocol
(DTLS 1.3 or QUIC-over-TLS-1.3) can do a true 1RTT handshake safely in the
common case (i.e., in a way that mirrors TCP TFO + TLS 1.3 1RTT).  I
suspect this will be the bare minimum for getting QUIC to switch to using
TLS 1.3.
* It seems like the risks around TLS 1.3 0RTT and TFO are similar (with TCP
being a protocol that doesn't try to provide security properties).  If
people have been very wary of enabling TFO for cleartext HTTP due to risks
from duplicated packets, shouldn't we be even more worried about TLS 1.3
0RTT, since the next-layer-up semantic issues and risks are similar but TLS
1.3 0RTT potentially has even fewer mitigations?  (E.g., we don't
cryptographically bind the server IP to the request the client is making,
although that might be an interesting addition to help make TLS 1.3 0RTT
safer.)
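For reference, a minimal sketch of what turning on TFO looks like on Linux
(assuming a kernel with net.ipv4.tcp_fastopen enabled and a Python build
that exposes the TCP_FASTOPEN / MSG_FASTOPEN constants); the point is just
that the replay exposure lives down at the SYN-data level rather than in
the TLS layer:

    import socket

    # Server side: ask the kernel to accept TFO data on this listening
    # socket (the value is the max number of pending TFO SYNs).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
    srv.bind(("0.0.0.0", 4433))
    srv.listen(32)

    # Client side: send the first bytes (e.g. a TLS ClientHello) in the
    # SYN via MSG_FASTOPEN; the kernel falls back to a normal handshake
    # if it has no TFO cookie for this server yet.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.sendto(b"\x16\x03\x01...", socket.MSG_FASTOPEN, ("203.0.113.1", 4433))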


There may also be some hacks that make TLS 1.3 0RTT marginally safer,
although I'm sure there are situations where they don't work and they may
just provide a false sense of security:
* Have the client include a time delta relative to PSK issuance, as Martin
suggested, to allow the server to bound the duration of replay attacks.  (A
rough sketch of that check follows this list.)
* Include the server IP in the client_hello for 0RTT  (to prevent replays
against different clusters).  There are a bunch of NAT-style scenarios
where this breaks, but it might help in a few places.  This also doesn't
help for server clusters using anycast.
* The anti-replay nonce (which helps in some places, but isn't possible to
implement sanely in many other scenarios).
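As a rough illustration of the first hack above (the names and numbers here
are mine, not from any draft): the server remembers when it issued the
PSK/ticket, the client reports how old it thinks the ticket is, and the
server only accepts 0RTT if the two views agree to within a small window,
which bounds how long a captured early-data flight stays replayable:

    import time

    # Hypothetical freshness check for 0RTT based on a client-reported
    # "ticket age".  All names and values are illustrative assumptions.
    MAX_CLOCK_SKEW_AND_REORDER = 10.0  # seconds of slop the server allows

    def accept_early_data(ticket_issued_at, client_reported_age):
        """Accept 0RTT only if the client's reported ticket age matches
        the server's view of how old the ticket actually is."""
        actual_age = time.time() - ticket_issued_at
        drift = abs(actual_age - client_reported_age)
        return drift <= MAX_CLOCK_SKEW_AND_REORDER

    # A flight replayed an hour later carries an age that no longer
    # matches, so the server falls back to a full 1RTT handshake
    # instead of processing the early data.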

At a minimum, I agree with the suggestion that we should require underlying
protocols to specify where and how 0RTT is safe to use.  (Although which of
the browsers discussing implementations are willing to wait for such an
HTTP draft to be mature enough?  I'm actually quite curious here.)

         Erik

[All opinions expressed here are my own.]