On Tue, Mar 15, 2016 at 4:59 PM, Bill Cox <waywardg...@google.com> wrote:
> I prefer your solution, but it would require getting the TLS 1.3 protocol
> changed. TLS 1.3 seems to be geared towards making stateless resumption
> with reusable tickets.

We keep circling that alright, and it doesn't work without sacrificing FS
and/or replay-protection :(

> The issue seems to be that in TLS 1.3, tickets are reusable multiple times
> to resume sessions, so we can't store the TLS sequence number in the
> ticket. If this were being used for an actual pre-shared key resumption,
> it would probably be OK, but it seems to break security in multiple ways
> when using tickets for TLS 1.2-style resumption, leaving us with no choice
> but to emulate the continuation of the TLS sequence numbers (and PRF
> state? Is that needed?).

Sorry; I confused things by mentioning PRF-state. What I had in mind there
was turning the crank on the PRF to derive new encryption and MAC keys for
the resumed session. If you do it that way, and if the PRF provides
anti-backtracking, then compromising the session cache can only be used to
decrypt future resumed connections, rather than anything previously
collected or currently active. If you're going to increment state for each
resumption, you might as well provide forward secrecy too.

> Protocols seem to fall into two camps. The ones that require replay
> resistance from the TLS layer all seem to terminate TLS connections in the
> same place so that the session cache can be safely and efficiently
> synchronized, avoiding replay attacks even when using 0-RTT. For example,
> when ssh-ing into a machine, that one machine terminates the connections,
> and can have a shared cache. None of the 0-RTT TLS-layer replay attacks
> I've read seem to work against a single server-side session cache.

I think I agree with that, but maybe would state it as: we can preserve
forward secrecy and replay mitigation for 0-RTT data, but at the cost of
requiring read-after-write consistency from a server-side data store.
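For concreteness, the "turning the crank" idea might look something like
the sketch below. This is a rough illustration, not a proposal: HMAC-SHA-256
stands in for whatever PRF the protocol uses, and all of the names and
labels are invented.

```python
import hmac
import hashlib


def prf(key: bytes, label: bytes) -> bytes:
    """One application of the PRF (HMAC-SHA-256 as a stand-in here)."""
    return hmac.new(key, label, hashlib.sha256).digest()


class RatchetingCache:
    """Per-ticket resumption state that is cranked forward on each use.

    Because the state update is one-way, an attacker who compromises
    the stored secret can derive keys for future resumptions, but
    cannot walk backwards to recover keys already handed out.
    """

    def __init__(self, resumption_master_secret: bytes):
        self.secret = resumption_master_secret

    def next_session_keys(self) -> bytes:
        # Derive traffic keys for this resumption from the current state...
        keys = prf(self.secret, b"session keys")
        # ...then irreversibly advance the stored state (anti-backtracking):
        # the old secret is discarded and cannot be recomputed from the new one.
        self.secret = prf(self.secret, b"next state")
        return keys
```

After two resumptions, someone who reads the cache entry can compute keys
for the third resumption onwards, but not the two sets already issued —
which is the "only future, not past" property described above.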
The store needn't be transactional; an eventually consistent store could be
used, but the miss rate would be elevated over what we see today. So it
would work better in the single-server case than in the case of many
distributed servers, but it could work for both.

> The other camp is HTTP-like protocols that terminate all over the place.
> These all seem to be replay-tolerant because random network errors will
> cause random replays, and there is no efficient way to synchronize the
> session cache globally.

I think that's a dangerous position too. Triggering browser retries
requires a certain level of difficulty and has a certain amplification
factor; sniffing wifi and sending a TCP message is far easier. An attacker
could exhaust things like server-side request-throttling limits far more
quickly with the latter than with the former.

Besides: lowering our own levels of security because other tools have
already done the same is just another race to the bottom.

-- 
Colm
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls