Hi all,

Here's another issue we noticed with RFC 9147. (There are going to be a few
of these emails. :-) )

DTLS 1.3 allows senders to pick an 8-bit or 16-bit sequence number. But,
unless I missed it, there isn't any discussion or guidance on which to use.
The RFC simply says:

> Implementations MAY mix sequence numbers of different lengths on the same
> connection

I assume this was patterned after QUIC, but looking at QUIC suggests an
issue with the DTLS 1.3 formulation. QUIC uses ACKs to pick the minimum
number of bytes needed for the peer to recover the sequence number:
https://www.rfc-editor.org/rfc/rfc9000.html#packet-encoding
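For reference, QUIC's sender-side selection (RFC 9000, Appendix A) could be sketched roughly as follows; the function name is mine, but the logic follows the RFC's sample algorithm of encoding enough low bits to cover twice the number of unacknowledged packets:

```python
import math

def pn_length_bytes(full_pn, largest_acked):
    """Pick the minimum packet-number length, QUIC-style: the encoded
    low bits must span more than twice the range of packets that the
    peer might not yet have seen (RFC 9000, Appendix A)."""
    if largest_acked is None:
        num_unacked = full_pn + 1
    else:
        num_unacked = full_pn - largest_acked
    min_bits = math.log2(num_unacked) + 1
    return math.ceil(min_bits / 8)
```

With the example values from RFC 9000 (sending packet 0xac5c02 with 0xabe8b3 acknowledged), this yields a 2-byte packet number. The key point is that `largest_acked` comes from ACK feedback, which DTLS does not have for application data.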

But the bulk of DTLS records, application data, are unreliable and not
ACKed; DTLS leaves reliability to the application. This means a DTLS
implementation does not have enough information to make this decision on
its own. It would need to be integrated into the application protocol's
reliability story, if the application protocol even maintains that
information.

Without ACK feedback, it is hard to size the sequence number safely.
Suppose a DTLS 1.3 stack unconditionally picked the 1-byte sequence number
because it's smaller; nothing in the RFC says not to. Then, once sender and
receiver fall out of sync by 256 records, whether via reordering or loss,
the connection breaks. For example, if there is a blip in connectivity and
you happen to lose 256 records in a row, the connection is stuck and cannot
recover: all future records arrive at ever-higher sequence numbers, which
the receiver reconstructs incorrectly. A burst of 256 lost packets seems
well within the range of situations one would expect an application to
handle.
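To make the failure concrete, here's a rough sketch (in Python, names mine) of the closest-match reconstruction used by both QUIC (RFC 9000, Appendix A) and DTLS 1.3 (RFC 9147, Section 4.2.2), showing how a 256-record gap with a 1-byte sequence number silently reconstructs the wrong value:

```python
def reconstruct(expected, truncated, bits):
    """Recover a full sequence number from its low `bits`, picking the
    candidate closest to the next expected value (the scheme used by
    both QUIC and DTLS 1.3)."""
    win = 1 << bits
    half = win // 2
    candidate = (expected & ~(win - 1)) | truncated
    if candidate <= expected - half and candidate + win < (1 << 62):
        return candidate + win
    if candidate > expected + half and candidate >= win:
        return candidate - win
    return candidate

# In sync: record 257 arrives when 257 is expected; its low byte (1)
# reconstructs correctly.
assert reconstruct(257, 257 & 0xFF, 8) == 257

# After losing exactly 256 records: the sender is at 356, but the
# receiver still expects 100. The low byte of 356 is 100, so the
# receiver reconstructs 100, decryption fails under the wrong nonce,
# and every later record is misinterpreted the same way -- the
# connection never recovers.
assert reconstruct(100, 356 & 0xFF, 8) == 100
```

The desync is invisible at the record layer: the bad reconstruction just looks like a record that fails to decrypt, so it is discarded, and so is everything after it.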

(The 2-byte sequence number fails at 65,536 losses, which is hopefully high
enough to be fine? Though that is far, far less than what QUIC's 1-4-byte
packet number can accommodate. It was also odd to see no discussion of
this anywhere.)

David
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
