> Such failures would be indistinguishable from other occurrences that
> would result in the client and server not deriving the same shared
> secrets (e.g., random bit corruptions, computation errors, malicious
> modification of data in transit)

Offhand I would think that this indistinguishability claim is wrong. Did
I miss some paper making this claim?

Details: The normal situation is that the failure rate (for legitimately
generated ciphertexts) depends on the decoding algorithm. The original
NTRU paper explains how to considerably reduce the NTRU failure rate via
straightforward techniques such as trying an extra +-q at each position;
in the reencryption context, recognizing the right answer among such
candidates is easy. Most Kyber failures should be similarly feasible to
recognize, distinguishing them from, e.g., random bit corruptions.
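
For concreteness, here's a minimal sketch of the recognition step in
Python. This is an illustration, not code from any Kyber or NTRU
library: decrypt_to_candidates() and reencrypt() are hypothetical
placeholders for the scheme's internal decryption (extended with nearby
candidates, e.g., from the +-q adjustments) and the deterministic FO
reencryption.

    # Hypothetical helpers, not an existing API:
    #   decrypt_to_candidates(ct, sk): yields the raw decryption plus
    #     nearby candidates (e.g., from trying an extra +-q at each
    #     position);
    #   reencrypt(m, pk): the deterministic FO reencryption.
    def classify_failure(ct, sk, pk, decrypt_to_candidates, reencrypt):
        for m in decrypt_to_candidates(ct, sk):
            # Reencryption derives its randomness from the message, so
            # a legitimate sender's ciphertext is reproduced exactly
            # once the right message is found.
            if reencrypt(m, pk) == ct:
                return "decoding failure", m
        # No candidate reencrypts to ct: consistent with corruption or
        # malicious modification rather than a decoding failure.
        return "other", None

A random bit corruption will essentially never yield a candidate that
reencrypts to the received ciphertext, so a match identifies a decoding
failure on a legitimately generated ciphertext.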

> and so would be handled in the same way.

This is not how a spec should be written. If there's a concern that
implementors will deviate from computing the specified KEM session key,
the way to address the concern (within the limitations of a spec) is to
have the spec include a crystal-clear requirement of always computing
the specified KEM session key. (Beyond a spec, what's most important is
tests; tests seem feasible for the most obvious deviations.)
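
As a rough illustration, here's a sketch of such a test in Python. The
keygen(), encaps(), impl_decaps(), and ref_decaps() interfaces are
hypothetical placeholders for the implementation under test and a
reference implementation; the point is simply that a mauled ciphertext
must produce exactly the specified session key (for Kyber, the
implicit-rejection value), not an error and not some other value.

    import secrets

    def test_decaps_on_mauled_ct(keygen, encaps,
                                 impl_decaps, ref_decaps,
                                 trials=1000):
        for _ in range(trials):
            pk, sk = keygen()
            ct, _k = encaps(pk)
            # Flip one random bit of the ciphertext.
            mauled = bytearray(ct)
            i = secrets.randbelow(len(mauled))
            mauled[i] ^= 1 << secrets.randbelow(8)
            c = bytes(mauled)
            # Decapsulation is a total function in the spec: even an
            # invalid ciphertext must produce the specified session
            # key, byte for byte, with no error raised.
            assert impl_decaps(sk, c) == ref_decaps(sk, c)

A few thousand such trials against a reference implementation catch the
most obvious deviations, such as returning an error or a differently
derived key.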

At this point there have been quite a few proposals that seem to allow
deviations (but then rely on overstated claims to _discourage_ those
deviations). I don't know why one would want to allow deviations here.

---D. J. Bernstein
