On 24 November 2016 at 15:11, Colm MacCárthaigh <c...@allcosts.net> wrote:
> Do you disagree that the three specific example security issues provided are
> realistic, representative and impactful? If so, what would persuade you to
> change your mind?

These are simply variants on "if someone hits you with a stick, they
might hurt you"; all of them flow fairly logically from the premise,
namely that replay is possible (i.e., someone can hit you with a
stick).

The third is interesting, but it's also the most far-fetched of the
lot (a server might read some bytes on one attempt that it won't read
on another, exposing a timing attack).  But that's also corollary
material, albeit less obvious.  Like I said, I've no objection to
expanding a little on what is possible: all non-idempotent activity,
which might be logging, load, and some things that are potentially
observable on subsequent requests, like IO/CPU cache state that might
be affected by a request.
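
To make that corollary concrete, here's a toy sketch (plain Python,
no TLS anywhere) of the kind of thing I mean: a replayed request's
side effect - here, a warmed cache - shows up in the timing of later
requests:

    import time

    cache = {}

    def handle(request):
        # Non-idempotent in the observable sense: the first delivery
        # does slow work and warms the cache; a replay takes the fast
        # path.
        if request not in cache:
            time.sleep(0.05)              # stand-in for real IO/CPU work
            cache[request] = request.upper()
        return cache[request]

    def timed(request):
        start = time.perf_counter()
        handle(request)
        return time.perf_counter() - start

    cold = timed("GET /thing")    # first delivery: slow path
    warm = timed("GET /thing")    # replay: fast path leaks cache state
    print("cold=%.1fms warm=%.1fms" % (cold * 1e3, warm * 1e3))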

>> I'm of the belief that end-to-end
>> replay is a property we should be building in to protocols, not just
>> something a transport layer does for you.  On the web, that's what
>> happens, and it contributes greatly to overall reliability.
>
> The proposal here I think promotes that view; if anything, it nudges
> protocols/applications to actually support end-to-end replay.

You are encouraging the TLS stack to do this, rather than the thing
that immediately drives it (in our case, that would be the HTTP
stack).  If the point is to make a statement about the importance of
the end-to-end principle with respect to application reliability, the
TLS spec isn't where I'd go to make it.
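
If applications want end-to-end replay safety, the well-trodden shape
is an idempotency key at the application layer.  A minimal sketch in
Python (illustrative only; not any particular framework's API):

    processed = {}   # idempotency key -> recorded response

    def handle_charge(key, amount):
        # A replayed request (same key) gets the recorded response
        # back instead of performing the side effect twice.
        if key in processed:
            return processed[key]
        response = "charged %d" % amount   # the non-idempotent bit
        processed[key] = response
        return response

    print(handle_charge("req-123", 500))   # first delivery: charges
    print(handle_charge("req-123", 500))   # replay: served from record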

> The problems of 0-RTT are disproportionately under-estimated. I've provided
> what I think are three concrete and realistic security issues. If we
> disagree on those, let's draw that out, because my motivation is to mitigate
> those new issues that are introduced by TLS1.3.
>
>> What I object to here is the externalizing that this represents.  Now if I
>> have the audacity to
>> deploy 0-RTT, I have to tolerate some amount of extra trash traffic
>> from legitimate clients?
>
>
> I think there is a far worse externalization if we don't do this. Consider
> the operators who choose not to add good replay protection (or don't know
> to). They will iterate more quickly and more cheaply than the diligent
> providers who are cautious enough to add the effective measures, which are
> expensive and difficult to get right.

OK, let's ask a different question: who is actually going to do this?

It's a non-trivial thing you're asking for.  It involves a new
connection setup just to send a few packets, and probably a timer so
that you can wait long enough for the server to explode.  Or do you
expect to replay before the real attempt?  That's even less likely to
happen.

Connections aren't cheap, and neither is bandwidth.  And the time
required to build such a feature would be better spent elsewhere, not
to mention the ongoing maintenance.
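
To put some shape on it, the probe you're asking for has to look
roughly like this; tls_connect_0rtt() is a hypothetical stand-in for
"open a fresh connection and resend the early data", not a real API:

    import time

    def tls_connect_0rtt(host, early_data):
        # Hypothetical: open a fresh connection, resend the early data,
        # return whatever the server says.
        raise NotImplementedError

    def probe(host, early_data, delay=2.0):
        first = tls_connect_0rtt(host, early_data)    # the real attempt
        time.sleep(delay)             # wait for the server to "explode"
        second = tls_connect_0rtt(host, early_data)   # the deliberate replay
        # ...plus whatever logic decides that second differing from
        # first means the server mishandled the replay.
        return first, second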

I don't see browsers doing anything like what you request; nor do I
see tools/libs like curl or wget doing it.  And even if I'm wrong and
they do, they value predictability, so they won't add line noise
without an application asking for it.

If few enough people do this, what makes you think that such a tiny
amount of replay would make any difference?  Unless you replay with
high probability, the odd error will be dismissed as a transient.
This is especially true because most of these sorts of exploitable
errors will happen adjacent to some sort of network glitch (the
requests at the start of most connections - on the web at least - are
pretty lame).
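
Back-of-envelope, with every number made up purely for illustration:

    requests_per_day = 1_000_000   # assumed 0-RTT volume at one server
    replay_fraction = 1e-4         # assumed share of requests replayed
    visible_error_rate = 0.1       # assumed replays that surface as errors
    baseline_transients = 1e-3     # assumed everyday network-glitch rate

    replay_errors = requests_per_day * replay_fraction * visible_error_rate
    transient_errors = requests_per_day * baseline_transients
    print("replay-induced errors/day: %d" % replay_errors)      # 10
    print("baseline transient errors/day: %d" % transient_errors)  # 1000

Ten extra errors lost in a thousand everyday transients isn't a
signal anyone will chase.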

Then all you have done is increase the global rate of "have you
turned it off and on again?", which is - after all - yet another
opportunity for replay.
