On Fri, Oct 09, 2015 at 10:23:09PM +0200, Eric Rescorla wrote:

> It's largely arbitrary, but the reasoning is as follows. There are
> apparently some TLS 1.2 servers which randomly generate the entire server
> random (and https://tools.ietf.org/html/draft-mathewson-no-gmtunixtime-00
> would encourage more to do so). The chance of a false positive between such
> a server and a TLS 1.3 client is 2^{-32}, which seemed a bit high.

Yes, law of large numbers.  While any one session is unlikely to
fail, the chance of some sessions failing needlessly becomes
appreciable, and some of those failures may be highly critical and
latency-sensitive.
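
To put rough numbers on that (the handshake volume below is an
assumption of mine, purely for illustration), a short Python sketch of
the expected spurious downgrade detections at a 2^{-32} per-handshake
false-positive rate:

    # Back-of-the-envelope: expected spurious "downgrade detected" failures
    # when TLS 1.2 servers send fully random server randoms.
    p_fp = 2.0 ** -32            # per-handshake false-positive probability
    handshakes_per_day = 1e9     # assumed volume for a large deployment

    expected_per_day = p_fp * handshakes_per_day
    prob_at_least_one = 1.0 - (1.0 - p_fp) ** handshakes_per_day

    print(expected_per_day)      # ~0.23, i.e. a needless failure every 4-5 days
    print(prob_at_least_one)     # ~0.21 chance of at least one such failure per day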

Of course packet corruption undetected by TCP checksums is perhaps
more likely, and the client should treat this as data corruption
rather than a MITM attack, and just reconnect and try again.

The problem is that reconnecting is not something the TLS layer
can do.  And applications might not be instrumented to recover as
quickly or as gracefully as they should.  Plus, not all use of TLS
is over unreliable network transports.  TLS is also used over Unix
domain sockets, where one does not expect either MITM attacks or
data corruption.

So even 2^{-48} is perhaps not quite low enough.  I still think
we're better off fixing serious problems in TLS 1.2 as we find
them regardless of whether the server and client happen to also
support 1.3.
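
For comparison (same assumed volume as in the sketch above), at 2^{-48}
the expectation drops to roughly:

    p_fp = 2.0 ** -48                  # per-handshake false-positive probability
    handshakes_per_day = 1e9           # assumed single-deployment volume
    print(p_fp * handshakes_per_day)   # ~3.6e-06 failures/day, one every ~770 years

which is small for any one deployment, but aggregate Internet-wide
volumes are orders of magnitude higher, hence the preference for fixing
1.2 problems outright.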

-- 
        Viktor.
