Hi,

The other benefit is being able to test that a critical code path produces the correct answers. With randomised k, this is not really possible. For instance, you can choose k with the top bit clear without any obvious or externally-testable effect, except that of effectively publishing your long-term private key after a large number of signatures [1].
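To illustrate the testability point: RFC 6979 derives k from the private key and the message hash via an HMAC-based DRBG, so the same inputs always yield the same k and you can write known-answer tests for the signing path. Here's a minimal Python sketch of the HMAC-SHA256 derivation from section 3.2 of the RFC (simplified, and not validated here against the RFC's official test vectors, so treat it as illustrative only):

```python
import hashlib
import hmac

def rfc6979_k(priv_key: int, msg_hash: bytes, q: int) -> int:
    """Sketch of RFC 6979 section 3.2: derive a deterministic nonce k
    for group order q from the private key and message hash,
    using HMAC-SHA256 as the PRF."""
    qlen = q.bit_length()
    rolen = (qlen + 7) // 8  # octet length of q

    def bits2int(b: bytes) -> int:
        # Interpret b as a big-endian integer, truncated to qlen bits.
        i = int.from_bytes(b, 'big')
        blen = len(b) * 8
        return i >> (blen - qlen) if blen > qlen else i

    def int2octets(x: int) -> bytes:
        return x.to_bytes(rolen, 'big')

    def bits2octets(b: bytes) -> bytes:
        return int2octets(bits2int(b) % q)

    # Initialise the HMAC-DRBG state from the key and message hash.
    V = b'\x01' * 32
    K = b'\x00' * 32
    seed = int2octets(priv_key) + bits2octets(msg_hash)
    K = hmac.new(K, V + b'\x00' + seed, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b'\x01' + seed, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()

    # Generate candidate nonces until one falls in [1, q-1].
    while True:
        T = b''
        while len(T) < rolen:
            V = hmac.new(K, V, hashlib.sha256).digest()
            T += V
        k = bits2int(T)
        if 1 <= k < q:
            return k
        K = hmac.new(K, V + b'\x00', hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
```

With this, a test suite can pin the exact k (and hence the exact signature) produced for a fixed key and message, which is precisely what randomised k rules out.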
Given the history of these things, I would perhaps challenge the assumption that all TLS stacks will have a bug-free, thread-safe, fork-safe, high-quality, uncompromised, backdoor-free, unbiased random number generator :)

Cheers,
Joe

[1]: http://people.rennes.inria.fr/Jean-Christophe.Zapalowicz/papers/asiacrypt2014.pdf

On 23 January 2016 at 19:27, Jacob Maskiewicz <jmask...@eng.ucsd.edu> wrote:
> The main argument I see from the RFC for deterministic ECDSA is computing k
> on systems without high quality entropy sources. But any system running a
> TLS stack is already going to have a high quality entropy source for
> client/server randoms and IVs and such, so what's the benefit of
> deterministic ECDSA here?
>
> -Jake M
>
> On Jan 23, 2016 11:13 AM, "Joseph Birr-Pixton" <jpix...@gmail.com> wrote:
>>
>> Hi,
>>
>> I'd like to propose that TLS1.3 mandates RFC6979 deterministic ECDSA.
>>
>> For discussion, here's a pull request with possible language:
>>
>> https://github.com/tlswg/tls13-spec/pull/406
>>
>> Cheers,
>> Joe

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls