Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-01 Thread Ilari Liusvaara
On Wed, May 31, 2017 at 03:49:03PM -0400, Victor Vasiliev wrote:
> On Tue, May 30, 2017 at 9:56 PM, Colm MacCárthaigh wrote:
> 
> > Here you argue, essentially, that it is too inconvenient to mitigate those
> > attacks for users. I don't think we can seriously take that approach.
> >
> > If the methods are too inconvenient, the secure alternative is to not use
> > 0-RTT at all.
> >
> > [snip]
> >
> 
> I think I am not getting my key point across here clearly.  I am not arguing
> that they are inconvenient, I am arguing that the guarantee you are trying
> to provide is impossible.

TLS level "sent data is delivered at most once" is very much possible.
But it requires synchronous state.

And it seems like where "few replays @TLS" would be easier than "no
replays @TLS", is where the replays would be to different servers, with
each server only accepting once. But "few replays" distributed among
different servers is much more dangerous than "few replays" to one
server.

Yes, residual replays will still come through even when TLS guarantees
"at most once" behavior. But it turns out one already has to handle that
form of replay, thanks to the wonders of web browser behavior (and
reordering attacks that abuse timeouts). It is TLS *not* guaranteeing
"at most once" behavior (especially across servers) that enables new
attacks.
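
To make "synchronized state" concrete, here is a minimal sketch of a
single-zone strike register keyed on the PSK binder (the in-process dict,
lock and helper names are purely illustrative; a real deployment would use
whatever shared store the zone provides):

import hashlib
import threading
import time

class StrikeRegister:
    """Accept a given 0-RTT flight at most once within one storage zone."""

    def __init__(self, window_seconds=10.0):
        self._seen = {}                 # SHA-256(binder) -> expiry timestamp
        self._lock = threading.Lock()   # stand-in for the zone's synchronized store
        self._window = window_seconds

    def accept_early_data(self, psk_binder: bytes) -> bool:
        key = hashlib.sha256(psk_binder).digest()
        now = time.monotonic()
        with self._lock:
            # Forget binders older than the replay window that the ticket-age
            # check enforces anyway.
            for stale in [k for k, expiry in self._seen.items() if expiry < now]:
                del self._seen[stale]
            if key in self._seen:
                return False            # seen before in this zone: decline 0-RTT
            self._seen[key] = now + self._window
            return True

Note this only ever gives "at most once" per zone; two zones with
independent registers will still each accept the same flight once.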

This is fundamentally about what is REQUIRED to do 0-RTT without
nasty security holes. If you think this is too difficult, just don't
do 0-RTT. One extra RTT is a lot better than nasty attacks, with
potential consequences much worse than just some site getting DoSed
or some card chargebacks.



-Ilari



[TLS] I-D Action: draft-ietf-tls-dnssec-chain-extension-04.txt

2017-06-01 Thread internet-drafts

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Transport Layer Security (TLS) Working Group of the IETF.

Title    : A DANE Record and DNSSEC Authentication Chain Extension for TLS
Authors  : Melinda Shore
           Richard Barnes
           Shumon Huque
           Willem Toorop
Filename : draft-ietf-tls-dnssec-chain-extension-04.txt
Pages    : 23
Date     : 2017-06-01

Abstract:
   This draft describes a new TLS extension for transport of a DNS
   record set serialized with the DNSSEC signatures needed to
   authenticate that record set.  The intent of this proposal is to
   allow TLS clients to perform DANE authentication of a TLS server
   without needing to perform additional DNS record lookups.  It will
   typically not be used for general DNSSEC validation of TLS endpoint
   names.


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-tls-dnssec-chain-extension/

There are also htmlized versions available at:
https://tools.ietf.org/html/draft-ietf-tls-dnssec-chain-extension-04
https://datatracker.ietf.org/doc/html/draft-ietf-tls-dnssec-chain-extension-04

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-tls-dnssec-chain-extension-04


Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/



Re: [TLS] Eric Rescorla's Discuss on draft-ietf-tls-ecdhe-psk-aead-04: (with DISCUSS and COMMENT)

2017-06-01 Thread Martin Rex
Watson Ladd wrote:
> Martin Rex wrote:
>>
>> The suggestion to accept a recognized TLSv1.2 cipher suite code point
>> as an alternative indicator for the highest client-supported protocol
>> version is not really a "mechanism".  It's efficient (with 0 bytes on
>> the wire), intuitive and extremely backwards-compatible (it will upset
>> neither old servers, nor version-intolerant ones like the Win2008/2012
>> servers, nor extension-intolerant servers).
> 
> It's a substantial change made after WG last call. That alone makes it
> improper. If you want to get WG consensus for such a change, go ahead.
> But don't try making this in the dead of night.

The proposed small addition about when the TLS cipher suites can be
negotiated is clearly *NOT* a change, and certainly not a substantial one.

Implementors that want to completely ignore this small addition can do so
and will remain fully compliant; they will not have to change a single
line of code.

For those implementing the proposed addition there will be two
very desirable effects:

  1) more TLS handshakes succeed

  2) more TLS handshakes use TLS protocol version TLSv1.2 rather
     than TLSv1.1 or TLSv1.0

Both come at an extremely low cost, and this addition has ZERO downsides.
The IETF is about promoting interoperability.

You seem to have a problem with either or both of the above outcomes,
but I fail to understand which and why.
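
To make the proposed addition concrete, here is a rough sketch of the
server-side check (the code-point list and names are illustrative, not
taken from any particular implementation): a ClientHello that offers a
cipher suite defined only for TLSv1.2 implies the client supports TLSv1.2,
even if its legacy client_version field says otherwise.

# Illustrative sketch only; code points are from the TLS cipher suite registry.
TLS12_ONLY_SUITES = {
    0x009C,  # TLS_RSA_WITH_AES_128_GCM_SHA256
    0x009D,  # TLS_RSA_WITH_AES_256_GCM_SHA384
    0xC02B,  # TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    0xC02C,  # TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    0xC02F,  # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    0xC030,  # TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
}

TLS_1_2 = (3, 3)   # protocol version as (major, minor)

def effective_client_version(client_hello_version, offered_suites):
    """Treat a TLSv1.2-only suite in the ClientHello as an implicit
    indication that the client supports TLSv1.2."""
    if client_hello_version < TLS_1_2 and TLS12_ONLY_SUITES & set(offered_suites):
        return TLS_1_2
    return client_hello_version

# Example: a client that caps client_version at TLSv1.1 for compatibility
# reasons but offers an AES-GCM suite still gets negotiated at TLSv1.2.
assert effective_client_version((3, 2), [0x002F, 0xC02F]) == TLS_1_2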


> 
>> It's worse -- there are still TLS servers out there which choke on
>> TLS extensions (and TLS servers which choke on extension ordering).
> 
> TLS 1.2 demands extensions work. Sending a TLS 1.2 hello without
> extensions is going to make it impossible to implement many features
> TLS 1.2 security relies on.

Actually, it does not.  TLSv1.2 works just fine without TLS extensions,
although there are a few implementations in the installed base which
got this wrong.  RFC 5246, Appendix E.2, shows that TLSv1.2 interop with
extension-less ClientHellos was desired and assumed to be possible.
Some implementors got it wrong.



> 
>> It seems that there are others facing the same issue:
>>
>> https://support.microsoft.com/en-us/help/3140245/update-to-enable-tls-1.1-and-tls-1.2-as-a-default-secure-protocols-in-winhttp-in-windows
>>
>> and defer enabling to explicit customer opt-in.
>>
>>
>> Really, a very compatible and extremely robust and useful approach would
>> be to allow implied client protocol version indication through presence of
>> TLSv1.2-only cipher suite codepoints and this would allow large parts
>> of the installed base to quickly start using TLSv1.2--without breaking
>> existing usage scenarios and without the hassle for users having to opt in
>> and test stuff.
> 
> The people who have these problems are not "large parts" of the
> install base. They are large parts of *your* install base. Don't
> confuse these two.

The above WinHTTP issue alone applies to Win7, which is about 50% of
the installed base of desktop PCs.

Referring to ~50% of the installed base as "large parts" seems OK to me. YMMV.


-Martin



Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-01 Thread Colm MacCárthaigh
On Thu, Jun 1, 2017 at 1:50 PM, Victor Vasiliev wrote:

> I am not sure I agree with this distinction.  I can accept the difference
> in terms of how much the attacker can retry -- but we've already agreed
> that bounding that number is a good idea.  I don't see any meaningful
> distinction in other regards.
>

It's not just a difference in the number of duplicates. With retries, the
client maintains some control, so it can do things like impose delays and
update request IDs. Bill followed up with a directly relevant example from
Token Binding, where the retry intentionally has a different token value.
That kind of control is lost with attacker-driven replays.
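
As a sketch of the kind of client control I mean (the X-Request-ID header
and retry policy here are invented for illustration, not part of any spec):
a legitimate retry can carry a fresh identifier and a deliberate delay,
while an attacker replaying captured 0-RTT bytes is stuck with the
original ones.

import time
import uuid

def send_with_retry(send, payload, max_attempts=2, backoff_seconds=0.2):
    """Client-driven retry: each attempt carries a fresh request ID and the
    client can delay between attempts.  An attacker replaying captured 0-RTT
    data can do neither -- it can only resend the original bytes verbatim."""
    last_error = None
    for attempt in range(max_attempts):
        request_id = str(uuid.uuid4())   # unique per attempt, chosen by the client
        try:
            return send(payload, headers={"X-Request-ID": request_id})
        except ConnectionError as err:   # e.g. 0-RTT rejected, connection torn down
            last_error = err
            time.sleep(backoff_seconds * (attempt + 1))
    raise last_error

Here "send" is just a placeholder for whatever HTTP client call the
application actually uses.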

But even if we focus on just the number, there is something special about
allowing zero literal replays of a 0-RTT section: it is easy for users to
confirm/audit/test. If there's a hard guarantee that 0-RTT "MUST" never be
replayable, then I feel like we have a hope of producing a viable 0-RTT
ecosystem. Plenty of providers may screw this up, or try to cut corners,
but if we can ensure that they get failing grades in security testing
tools, or maybe even browser warnings, then we can corral things into a
zone of safety. Otherwise, with no such mechanism, I fear that bad
operators will cause the entire 0-RTT feature to be tainted and entirely
turned off over time by clients.

>
> Sure, but this is just an argument for making N small.  Also, retries can
> also be directed to arbitrary nodes.
>

This is absolutely true, but see my point about client control.
Regardless, it is a much more difficult attack to carry out: intercepting
and rewriting a whole TCP connection vs. grabbing a 0-RTT section and
sending it again.


>
>
>> What concerns me most here is that people are clearly being confused by
>> the TLS 1.3 draft into misunderstanding how this interacts with 0-RTT. For
>> example the X-header trick, to derive an idempotency token from the binder,
>> that one experimental deployment innovated doesn't actually work because it
>> doesn't protect against the DKG attack. We're walking into rakes here.
>>
>
> Of course it doesn't protect against the DKG attack, but nothing at that
> layer
> actually does.
>
> This sounds like an issue with the current wording of the draft.  As I
> mentioned, I believe we should be very clear on what the developers should
> and
> should not expect from TLS.
>

Big +1 :)


>>> So, in other words, since we're now just bargaining about the value of N,
>>> operational concerns are fair game.
>>>
>>
>> They're still not fair game imo, because there's a big difference between
>> permitting exactly
>> one duplicate, associated with a client-driven retry, and permitting huge
>> volumes of replays. They enable different kinds of attacks.
>>
>>
> Sure, but there's a space between "one" and "huge amount".
>

It's not just quantitative, it's qualitative too. But now I'm duplicating
myself more than once ;-)


>> Well in the real world, I think it'll be pervasive, and I even think it
>> /should/ be. We should make 0-RTT that safe and remove the sharp edges.
>
>
> Are you arguing that non-safe requests should be allowed to be sent via
> 0-RTT?
> Because that actually violates reasonable expectations of security
> guarantees
> for TLS, and I do not believe that is acceptable.
>

I'm just saying that it absolutely will happen, and I don't think any kind
of lawyering about the HTTP spec and REST will change that. Folks use GETs
for non-idempotent, side-effect-bearing APIs a lot. And those folks don't
generally understand TLS or have anything to do with it. I see no real
chance of that changing, and it's a bit of self-deception for us to think
it's realistic that there will be these super-careful 0-RTT deployments
where everyone from the web server administrator to the high-level
application designer is coordinating and fully aware of all of the
implications. It crosses layers that are traditionally quite far apart.

So with that in mind, I argue that we have to make TLS transport as secure
as possible by default, while still delivering 0-RTT because that's such a
beneficial improvement.


>>> I do not believe this to be the case.  The DKG attack is an attack
>>> that allows
>>> for a replay.
>>
>>
>> It's not. It permits a retry. The difference here is that the client is
>> in full control. It can decide to delay, to change a unique request ID, or
>> even not to retry at all. But the legitimate client generated the first
>> attempt, it can be signaled that it wasn't accepted, and then it generates
>> the second attempt. If it really really needs to it can even reason about
>> the complicated semantics of the earlier request being possibly
>> re-submitted later by an attacker.
>>
>
> That's already not acceptable for a lot of applications -- and by enabling
> 0-RTT for non-safe HTTP requests, we would be pulling the rug from under
> them.
>

Yep; but I think /this/ risk is manageable and tolerable. Careful clients,
like the token 

Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-01 Thread Eric Rescorla
I've just gone through this thread and I'm having a very hard time
understanding what the actual substantive argument is about.

Let me lay out what I think we all agree on.

1. As long as 0-RTT is declinable (i.e., 0-RTT does not cause
   connection failures) then a DKG-style attack where the client
   replays the 0-RTT data in 1-RTT is possible.

2. Because of point #1, applications must implement some form
   of replay-safe semantics.

3. Allowing the attacker to generate an arbitrary number of 0-RTT
   replays without client intervention is dangerous even if
   the application implements replay-safe semantics.

4. If implemented properly, both a single-use ticket and a
   strike-register style mechanism make it possible to limit
   the number of 0-RTT copies which are processed to 1 within
   a given zone (where a zone is defined as having consistent
   storage), so the number of accepted copies of the 0-RTT
   data is N where N is the number of zones.

5. Implementing the level of coherency to get #4 is a pain.

6. If you bind each ticket to a given zone, then you can
   limit the number of accepted 0-RTT copies to 1
   (for that zone) and accepted 1-RTT copies to 1 (because
   of the DKG attack listed above); see the sketch below.
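
A minimal sketch of the ticket-to-zone binding in point 6 (the zone
identifier carried in the ticket and the in-memory set are stand-ins for
illustration only):

import hashlib

LOCAL_ZONE_ID = b"zone-us-east-1"   # whatever identifies this consistent-storage zone

accepted_binders = set()            # per-zone strike list (single-process stand-in)

def try_accept_early_data(ticket_zone_id: bytes, psk_binder: bytes) -> bool:
    """Accept 0-RTT only for tickets issued by this zone, and only once."""
    if ticket_zone_id != LOCAL_ZONE_ID:
        return False                # foreign zone: decline 0-RTT, client falls back to 1-RTT
    digest = hashlib.sha256(psk_binder).digest()
    if digest in accepted_binders:
        return False                # already accepted once in this zone
    accepted_binders.add(digest)
    return True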


Colm, Victor, do you disagree with this summary?

-Ekr






On Thu, Jun 1, 2017 at 4:59 PM, Colm MacCárthaigh wrote:

> [full quote of Colm's earlier message snipped]
Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-01 Thread Colm MacCárthaigh
On Thu, Jun 1, 2017 at 5:22 PM, Eric Rescorla wrote:

> I've just gone through this thread and I'm having a very hard time
> understanding what the actual substantive argument is about.
>
> Let me lay out what I think we all agree on.
>

This is a good summary, I just have a few clarifications ...


> 1. As long as 0-RTT is declinable (i.e., 0-RTT does not cause
>    connection failures) then a DKG-style attack where the client
>    replays the 0-RTT data in 1-RTT is possible.
>

This isn't what I call a replay. It's a second request, but the client is
in control of it. That distinction matters because the client can modify it
if it needs to be unique in some way, and that turns out to be important
for some cases.

> 2. Because of point #1, applications must implement some form
>    of replay-safe semantics.
>

Yep; though note that in some cases those replay-safe semantics themselves
actually depend on uniquely identifiable requests. For example, a protocol
that depends on client-side versioning, or the token-binding case.


> 3. Allowing the attacker to generate an arbitrary number of 0-RTT
>    replays without client intervention is dangerous even if
>    the application implements replay-safe semantics.
>

Yep.


> 4. If implemented properly, both a single-use ticket and a
>    strike-register style mechanism make it possible to limit
>    the number of 0-RTT copies which are processed to 1 within
>    a given zone (where a zone is defined as having consistent
>    storage), so the number of accepted copies of the 0-RTT
>    data is N where N is the number of zones.
>

This is much better than the total anarchy of allowing completely unlimited
replay, and it does reduce the risk from side-channels, throttles, etc., but
I wouldn't consider it a proper implementation, or secure. Importantly, it
gets us back to a state where clients may have no control over a
deterministic outcome.

Some clients need idempotency tokens that are consistent for duplicate
requests; this approach works OK for them. Other kinds of clients need
tokens that are unique to each request attempt; this approach doesn't work
in that case. That's the qualitative difference.
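
A tiny sketch of the two token models (the helper names are invented for
illustration): a consistency-based token can be recomputed from the request
itself and so survives verbatim duplication, while a per-attempt token can
only be minted by the client for each fresh attempt.

import hashlib
import uuid

def consistent_token(request_body: bytes) -> str:
    """Same token for a byte-identical duplicate: server-side dedup still
    works if the request is replayed or retried verbatim."""
    return hashlib.sha256(request_body).hexdigest()

def per_attempt_token() -> str:
    """Fresh token per attempt: only the client can mint it, so it
    distinguishes a deliberate client retry from an attacker replay of
    captured bytes."""
    return str(uuid.uuid4())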

I'd also add that the suggested optimization here is clearly to support
globally resumable session tickets that are not scoped to a single site.
That's a worthy goal, but it's unfortunate that in the draft design it also
means that 0-RTT sections would be globally scoped. That seems bad to me
because it's so hostile to forward secrecy, and hostile to protecting the
most critical user data. What's the point of having FS for everything
except the requests, where the auth details often are, and which can
usually be used to generate the response? Synchronizing keys that can
de-cloak an arbitrary number of such sessions to many data centers spread
out across the world seems just so self-defeating. I realize that it's
common today (I've built such systems), but at some point we have to decide
that FS either matters or it doesn't. Are users and their security auditors
really going to live with that? What is the point of rolling out ECDHE so
pervasively only to undo most of the benefit?

Maybe a lot of this dilemma could be avoided if the PSKs that can be
used for regular resumption and for 0-RTT encryption were separate, with
the latter being scoped smaller and with use-at-most-once semantics.
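
Roughly what I have in mind, as a sketch only (the labels and the plain
HKDF-Expand below are illustrative; this is deliberately not the TLS 1.3
HkdfLabel encoding): derive two separate secrets from the resumption
secret, one for ordinary resumption and one scoped only to early data and
honoured at most once.

import hashlib
import hmac

def hkdf_expand(secret: bytes, label: bytes, length: int = 32) -> bytes:
    """Plain HKDF-Expand (RFC 5869) using the label as the info input."""
    output, block, counter = b"", b"", 1
    while len(output) < length:
        block = hmac.new(secret, block + label + bytes([counter]), hashlib.sha256).digest()
        output += block
        counter += 1
    return output[:length]

def split_resumption_secret(resumption_secret: bytes):
    # Separate keys: the resumption PSK may be reused for ordinary resumption,
    # while the early-data PSK is scoped narrowly and honoured at most once.
    resumption_psk = hkdf_expand(resumption_secret, b"resumption psk")
    early_data_psk = hkdf_expand(resumption_secret, b"early data psk, single use")
    return resumption_psk, early_data_psk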

> 5. Implementing the level of coherency to get #4 is a pain.
>
> 6. If you bind each ticket to a given zone, then you can
>    limit the number of accepted 0-RTT copies to 1
>    (for that zone) and accepted 1-RTT copies to 1 (because
>    of the DKG attack listed above).
>

Yep! Agreed :)

-- 
Colm