It sounds like there are four properties in play here:

P1: Either side can rekey its outgoing traffic whenever it wants.

P2: Either side can "force an update to the entire connection," i.e.
can ask the other side to rekey *its* outgoing traffic.

P3: A side can learn that its own KeyUpdate has been read by the
other side.

P4: Neither side can cause the other to accrue an unbounded deferred
write obligation; in fact the maximum accruable deferred write
obligation is one KeyUpdate.

The current draft has P1 and P2 only.

My view: all four properties are important.

I've previously argued for the benefit of P3.

Re: P2, there seems to be some disagreement about its value. I think
it is valuable -- one endpoint may well have knowledge (about the
traffic or its future susceptibility to compromise) that makes it want
to ratchet the session. Forward secrecy is not only valuable to the
sender. I don't think it's enough to recommend that each side rekey
its own direction occasionally.

I don't think it's a particularly hard design or implementation
challenge to get all four properties.

Here is a simple, explicit, and verbose design as a straw man that
gets P1, P2, P3, and P4:

1) As David proposes, separate the tracks and have two KeyUpdate ladders.

2) Define KeyUpdate like this:

   struct {
       uint64 desired_minimum_receive_generation;
       uint64 current_receive_generation;
   } KeyUpdate;

Language: "An implementation MAY update its send keys by sending a
KeyUpdate. An implementation MAY request that the other side update
its send keys by increasing the desired_minimum_receive_generation. An
implementation MUST NOT set desired_minimum_receive_generation to be
greater than its current_receive_generation plus one."

"Upon receiving a KeyUpdate, the receiver MUST increment its receive
generation by one. If the desired_minimum_receive_generation is
greater than its current send generation, the receiver MUST update its
send keys and send a KeyUpdate. If the
desired_minimum_receive_generation is greater than the current send
generation plus one, the receiver SHOULD abort the connection."

-Keith

On Thu, Aug 18, 2016 at 10:26 AM, David Benjamin <david...@chromium.org> wrote:
> On Thu, Aug 18, 2016 at 1:08 PM Benjamin Kaduk <bka...@akamai.com> wrote:
>>
>> On 08/17/2016 11:29 PM, David Benjamin wrote:
>>
>> However, we lose the "free" (really at the cost of this unboundedness
>> problem) generation-based bidirectional KeyUpdate sync. Depending on what we
>> want out of KeyUpdate, I can imagine a few options:
>>
>>
>> My recollection is that the only reason we have KeyUpdate is to avoid
>> going over the limit for amount of ciphertext encrypted in the same key
>> within our safety margin (recall the analyses debating whether such limits
>> would even be reachable for the ciphers currently in use, as well as the
>> disagreement as to what the safety margin should be).
>>
>>
>> - Don't. KeyUpdates are unilateral. Recommend in the spec to KeyUpdate
>> every N records or so and leave it at that. (I think this is the best option
>> on grounds of simplicity, assuming it meets the primary needs of KeyUpdate.)
>>
>>
>> If we're in a world where implementations are considering leaving out the
>> required "catch up" KeyUpdate, we may want to consider alternative options
>> to put in the spec that are easier to implement.  That is, we can write the
>> spec in various ways to get the functionality that both write directions get
>> the key updates needed to avoid the ciphertext limit, and we are reliant on
>> implementations to follow the spec in order to get that safety property.
>> So, given that we can only get the safety property if all implementations
>> follow the spec, why not just ... require implementations to track the
>> amount sent and rekey if it's too close to the limit?  That is consistent
>> with a unilateral/unidirectional KeyUpdate, and having per-connection
>> byte/record counters does not seem to be a huge overhead.
>>
>> Keeping the two write directions' keys independent would also avoid
>> conflict when the two peers disagree on how often a rekey is necessary (as
>> might happen if a resource-constrained device decided to not implement byte
>> counters and just rekey after every N records, which is "free" since you
>> need to track serial numbers anyway).
>
>
> Yup. If we say implementations SHOULD/MUST/whatever send KeyUpdates
> frequently and we're fine with just saying we rely on each sender to honor
> that w.r.t. their own send keys with little other fanfare, I think this
> option is the clear winner. We're already relying on the sender to not, say,
> exfiltrate all data somewhere nasty.
>
> There seemed to be other motivations for KeyUpdate (Keith's passive observer
> use case, I've heard some theories around one side knowing more about the
> cipher than another, etc.), so I left that alone for now. I'm not currently
> convinced by those use cases, but perhaps the working group feels
> differently. I'm reasonably confident splitting the key tracks is correct
> regardless, but I wasn't sure which KeyUpdate motivations were considered
> important and which weren't.
>
>>
>> - If you receive a KeyUpdate and didn't send one in the last N minutes
>> (where N minutes >>> 1 RTT), (optionally?) flag your next write to be
>> preceded by KeyUpdate. This is simple but needs an ad-hoc timeout to prevent
>> ping-ponging.
>>
>> - Variations on sticking a please_echo boolean in the KeyUpdate message,
>> generation counts, etc., to get synchronization with coalescing if we need
>> it. I would much prefer the simpler options unless we truly need this.
>> (KeyUpdate is right now a *required* feature, so simplicity should be a
>> priority. Rare use cases are what extensions are for.)
>>
>> Thoughts?
>>
>> David
>>
>> PS: We've seen this before with renego (I've seen many OpenSSL consumers
>> which lock up if the peer renegotiates), error alerts triggered on reads
>> (often they don't get sent), and close_notify (also don't get sent in
>> practice). Deviations from dumb filters are expensive.
>>
>>
>> Yes, simple is better.  Well, here at least :)
>>
>>
>> PPS: I'm wary of over-extending post-handshake auth for the same reason,
>> though I haven't had time to look carefully at the latest proposal yet.
>> Still, having TLS specify and analyze the crypto while lifting the actual
>> messages into application-level framing would be preferable for these sorts
>> of side protocols. The application gets to make assumptions about read/write
>> flows and knows in which contexts what is and isn't allowed.
>>
>>
>> Hmm, so [in the above proposal] that would involve the application doing
>> the byte/record counters and pushing a "give me a rekey packet now" button
>> when needed?  I am not sure that I'm comfortable moving "crypto sensitive"
>> code (well, sort-of) into the application, since many applications won't do
>> it.  Also, not all applications will have access to the record serial
>> number, as I understand it, and the formulas I remember for the ciphertext
>> limits involved both records and bytes.
>
>
> Sorry, that was unclear. That was more of a side comment to help explain why
> I have such strong visceral reactions to any post-handshake auth mechanisms
> since part of it is the same issue. I am not proposing we lift KeyUpdate to
> the application layer. I'm proposing we split the KeyUpdate tracks in two
> and do [insert preferred option here] w.r.t. the bidirectionality issue.
> Probably should not have mentioned it since it's mostly a distraction for
> this topic. I ramble a lot. :-)
>
> David
>
> _______________________________________________
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
