On Fri, Jan 01, 2016 at 01:54:00PM +0100, Henrick Wibell Hellström wrote:
> I think it is a good idea to rekey AES-GCM after approximately 2^32 records,
> give or take a few magnitudes.
> 
> The question for me isn't whether AES-GCM requires frequent rekeying (it
> does), but exactly how much complexity the rekeying mechanism would add, to
> the protocol and to implementations.

There have been three basic categories of proposed key update schemes:

1) Ratchet keys through KDF, no new randomness.
2) Ratchet keys through KDF, with new randomness.
3) Do a new PK key exchange.

Also, if one doesn't want applications to have to be aware of rekeying,
one can't introduce new flights ("fully asynchronous key update"). This
imposes _severe_ constraints.

Then there is a strictly weaker constraint: one can't introduce "dead
air", where the protocol cannot make forward progress without waiting
for a response (the previous constraint is sufficient but not necessary
to meet this one).

Firstly, 1)

The current scheme is an example of 1) without new flights. Basically,
send a message to ratchet the sender's keys. There is no symmetry-breaking
between updates, so if k(i) = k(j), then k(i+n) = k(j+n) for all n (such
symmetry-breaking could be introduced by adding the key number to the KDF
inputs).
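
To illustrate, a rough Python sketch (the kdf below is just a stand-in
for HKDF-Expand-Label, not the real TLS 1.3 encoding; all names are
made up):

    import hmac, hashlib

    def kdf(secret, label, length=32):
        # Stand-in for HKDF-Expand-Label; not the real TLS 1.3 encoding.
        return hmac.new(secret, label + b"\x01", hashlib.sha256).digest()[:length]

    def ratchet(secret):
        # Scheme 1 as-is: each update just expands the previous secret, so
        # if k(i) == k(j) then every later pair k(i+n) == k(j+n) as well.
        return kdf(secret, b"key update")

    def ratchet_with_counter(secret, generation):
        # Hypothetical symmetry-breaking variant: mixing the key number into
        # the KDF input makes equal secrets at different generations diverge.
        return kdf(secret, b"key update" + generation.to_bytes(8, "big"))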

Also, I noted that the current scheme seems to break the HKDF
Extract/Expand pairing: it applies Expand to an already-Expanded value
(the traffic secret), which violates the pairing.
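
For reference, the pairing I mean (single-block HKDF shown for brevity,
labels made up):

    import hmac, hashlib

    def hkdf_extract(salt, ikm):
        # HKDF-Extract: compress input keying material into a PRK.
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand(prk, info, length=32):
        # HKDF-Expand, single block only: derive outputs from a PRK.
        return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

    prk = hkdf_extract(b"\x00" * 32, b"input keying material")
    traffic_secret = hkdf_expand(prk, b"traffic secret")

    # The pattern I object to: Expand applied to an already-Expanded value.
    unpaired = hkdf_expand(traffic_secret, b"key update")

    # A re-paired variant would run the value through Extract first.
    paired = hkdf_expand(hkdf_extract(b"", traffic_secret), b"key update")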

Also, a scheme like this could work nicely with DTLS: one could use
epoch numbers to signal rekeying (but one would then have to take care
not to wrap the epoch in a way that could cause confusion).
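
Something like the following receive-side bookkeeping (purely
hypothetical; the window size is arbitrary):

    MAX_ACTIVE_EPOCHS = 2   # e.g. current epoch plus the previous one

    class ReceiveState:
        def __init__(self, initial_key):
            self.current_epoch = 0
            self.keys = {0: initial_key}      # epoch -> traffic key

        def on_key_update(self, new_key):
            # Peer signalled a rekey: install keys for the next epoch and
            # retire anything that falls outside the window.
            self.current_epoch += 1
            self.keys[self.current_epoch] = new_key
            for epoch in list(self.keys):
                if epoch <= self.current_epoch - MAX_ACTIVE_EPOCHS:
                    del self.keys[epoch]

        def key_for_record(self, record_epoch):
            # Only epochs still in the window are accepted; anything else is
            # dropped rather than guessed at, which keeps a wrapped or reused
            # epoch value from silently decrypting under stale keys.
            return self.keys.get(record_epoch)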



Then 2)

Doing 2) without new flights would imply (a rough sketch follows the list):

- Two traffic keys, one for each direction. The low-level cipher keys
  derive from the traffic key of the respective direction.
- Sending a message to update keys would update only the traffic key
  in the sending direction, regenerating the low-level cipher keys and
  resetting the RSN (record sequence number) in that direction.
- Entropy from the peer can't be explicitly used. The best one could
  do here is to recommend mixing the last received ratchet entropy from
  the peer into one's own entropy to produce the random value to send
  when replying (when that happens).
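
A rough sketch of the resulting per-direction state (everything here --
kdf, labels, message contents -- is hypothetical):

    import os, hmac, hashlib

    def kdf(secret, label, length=32):
        # Illustrative one-block expand; not the real TLS labelling.
        return hmac.new(secret, label + b"\x01", hashlib.sha256).digest()[:length]

    class DirectionState:
        # Keys and record sequence number for one direction.
        def __init__(self, traffic_key):
            self.traffic_key = traffic_key
            self.cipher_key = kdf(traffic_key, b"key")
            self.iv = kdf(traffic_key, b"iv", 12)
            self.rsn = 0                          # reset on every update

    class Connection:
        def __init__(self, send_key, recv_key):
            self.send = DirectionState(send_key)
            self.recv = DirectionState(recv_key)
            self.last_peer_entropy = b""          # from peer's last update

        def send_key_update(self):
            # Fresh entropy, mixed with the last entropy received from the
            # peer (the recommendation above); only the sending direction
            # is ratcheted.
            entropy = hmac.new(os.urandom(32), self.last_peer_entropy,
                               hashlib.sha256).digest()
            self.send = DirectionState(kdf(self.send.traffic_key,
                                           b"update" + entropy))
            return entropy                        # carried in the update message

        def recv_key_update(self, peer_entropy):
            # Peer ratcheted its sending direction, i.e. our receive direction.
            self.last_peer_entropy = peer_entropy
            self.recv = DirectionState(kdf(self.recv.traffic_key,
                                           b"update" + peer_entropy))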

I don't think this is terribly much more complicated than 1). But it
is more complicated nevertheless.

I don't see how dropping the stronger fully asynchronous constraint
would simplify things, due to the possibility of a "crossed rekey"
(both sides sending an update before seeing the other's) caused by the
finite "speed of light".

Also, this would cause issues with DTLS: one would have to transmit
these rekeying messages reliably, whereas previous DTLS versions are
reliable only within the handshake.



Finally 3)

Now the rekeying necessarily involves two messages. Doing this
fully asynchronously seems just about impossible (updating one
half-key at a time would hit the "key caching" problem).

Then there would be whole new logic for performing the PK exchange.
One can't reuse the handshake, because that would introduce thorny
issues about tying it to the previous handshake and ensuring that
state doesn't change in all sorts of unexpected ways. It would also
need to somehow prevent crossing a key update.


All in all, the complexity looks much greater than that of 1) or 2).



-Ilari
