Menezes on HMQV
There's an interesting paper up on eprint now: http://eprint.iacr.org/2005/205

  Another look at HMQV
  Alfred Menezes

  HMQV is a "hashed variant" of the MQV key agreement protocol. It was
  recently introduced by Krawczyk, who claimed that HMQV has very
  significant advantages over MQV: (i) a security proof under reasonable
  assumptions in the (extended) Canetti-Krawczyk model for key exchange;
  and (ii) superior performance in some situations. In this paper we
  demonstrate that HMQV is insecure by presenting realistic attacks in
  the Canetti-Krawczyk model that recover a victim's static private key.
  We propose HMQV-1, a patched version of HMQV that resists our attacks
  (but does not have any performance advantages over MQV). We also
  identify the fallacies in the security proof for HMQV, critique the
  security model, and raise some questions about the assurances that
  proofs in this model can provide.

Obviously, this is of inherent interest, but it also plays a part in the ongoing debate about the importance of proof as a technique for evaluating cryptographic protocols.

-Ekr

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
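For readers who don't have the protocols in their head: MQV and HMQV share the same shared-secret algebra and differ only in how the combining exponents d and e are derived (a truncation of the ephemeral public key in MQV, a hash binding the peer's identity in HMQV). The following is a hedged sketch over a toy Schnorr group; the parameters, identities, and the exact hash truncation are illustrative simplifications, not the standardized constructions.

```python
import hashlib

# Toy Schnorr group for illustration only: p = 2q + 1, g generates the
# order-q subgroup. Real deployments use large standardized groups or
# elliptic curves; these tiny parameters have no security whatsoever.
p, q, g = 2039, 1019, 4
L = (q.bit_length() + 1) // 2          # roughly half the group-order length

def avf(X):
    """MQV's exponent derivation: keep the low L bits of X and set bit L."""
    return (X % (1 << L)) | (1 << L)

def hmqv_exp(X, peer_id):
    """HMQV's variant: hash the ephemeral key with the peer identity
    (simplified here; the real scheme specifies the exact hash/truncation)."""
    h = hashlib.sha256(f"{X}|{peer_id}".encode()).digest()
    return (int.from_bytes(h, "big") % (1 << L)) | (1 << L)

def shared_secret(x, a, Y, B, d, e):
    """sigma = (Y * B^e)^(x + d*a) mod p -- same algebra for both variants."""
    s = (x + d * a) % q
    return pow(Y * pow(B, e, p) % p, s, p)

# Alice: static key a, ephemeral x.  Bob: static key b, ephemeral y.
a, x = 77, 13
b, y = 101, 57
A, X = pow(g, a, p), pow(g, x, p)
B, Y = pow(g, b, p), pow(g, y, p)

# Plain MQV: both sides derive the same secret.
d, e = avf(X), avf(Y)
assert shared_secret(x, a, Y, B, d, e) == shared_secret(y, b, X, A, e, d)

# HMQV: identical algebra, hash-derived exponents.
d, e = hmqv_exp(X, "Bob"), hmqv_exp(Y, "Alice")
assert shared_secret(x, a, Y, B, d, e) == shared_secret(y, b, X, A, e, d)
```

Both sides arrive at g^((x+da)(y+eb)): Alice raises (Y*B^e) to (x+da), Bob raises (X*A^d) to (y+eb). Menezes' attacks target how d and e are derived and validated, not this core algebra.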
/dev/random is probably not
Most implementations of /dev/random (or so-called entropy gathering daemons) rely on disk I/O timings as a primary source of randomness. This is based on a CRYPTO '94 paper[1] that analyzed randomness from air turbulence inside the drive case.

I was recently introduced to Don Davis and, being the sort of person who rethinks everything, I began to question the correctness of this methodology. While I have found no fault with the original analysis (and have not actually considered it much), I have found three major problems with the way it is implemented in current systems. I have not written exploits for these problems, but I believe it is readily apparent that such exploits could be written.

a) Most modern IDE drives, at least, ship with write-behind caching enabled. This means that a typical write returns a successful status once the data is written into the drive's buffer, before the drive even begins writing the data to the medium. Therefore, unless we overflow the buffer and get stuck waiting for previous data to be flushed, the timing will not include any air turbulence whatsoever, and should be nearly constant.

b) At least one implementation uses *all* disk-type devices -- including flash devices, which we expect to have nearly constant access times -- for timing. This is obviously a bogus source of entropy.

c) Even if we turned off write-behind caching, so that our timings did include air turbulence, consider how a typical application is written. It waits for, say, a read() to complete and then immediately does something else. By timing how long this higher-level operation (read(), or possibly even a remote request via HTTP, SMTP, etc.) takes, we can apply an adjustment factor and determine with reasonable probability how long the actual disk I/O took.
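The "adjustment factor" attack in point (c) can be sketched as a simulation. Every timing constant below is made up for illustration; the point is only that once the attacker calibrates the fixed software overhead, the residual uncertainty collapses to measurement noise, which can be far smaller than the timing range the kernel credits as entropy.

```python
import random

random.seed(1)

JITTER_US = 100     # assumed turbulence-induced variation the kernel credits
OVERHEAD_US = 250   # assumed syscall/scheduling overhead, calibrated by attacker
NOISE_SD_US = 10    # attacker's residual measurement noise

def attacker_estimate(observed_us):
    """Apply the post's 'adjustment factor': subtract calibrated overhead
    from the end-to-end timing of the higher-level operation."""
    return observed_us - OVERHEAD_US

errors = []
for _ in range(1000):
    disk_us = 4000 + random.uniform(0, JITTER_US)   # true disk service time
    observed_us = disk_us + OVERHEAD_US + random.gauss(0, NOISE_SD_US)
    errors.append(abs(attacker_estimate(observed_us) - disk_us))

# The attacker's error is bounded by the measurement noise, well below
# the JITTER_US range an entropy estimator might credit.
assert max(errors) < JITTER_US
```

Under these (hypothetical) numbers, the attacker pins the disk time to within a few tens of microseconds, so most of the credited entropy is observable.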
Using any of these strategies, it is possible for us to know the input data to the RNG -- either by measurement or by stuffing -- and, therefore, quite possibly to determine the future output of the RNG.

Have a nice holiday weekend.

[1] D. Davis, R. Ihaka, and P. R. Fenstermacher, "Cryptographic Randomness from Air Turbulence in Disk Drives," in Advances in Cryptology -- CRYPTO '94 Conference Proceedings, ed. Yvo G. Desmedt, pp. 114--120. Lecture Notes in Computer Science 839. Heidelberg: Springer-Verlag, 1994.
Re: /dev/random is probably not
On Fri, 1 Jul 2005, Charles M. Hannum wrote:

> Most implementations of /dev/random (or so-called entropy gathering
> daemons) rely on disk I/O timings as a primary source of randomness.
> This is based on a CRYPTO '94 paper[1] that analyzed randomness from
> air turbulence inside the drive case.
>
> I was recently introduced to Don Davis and, being the sort of person
> who rethinks everything, I began to question the correctness of this
> methodology. While I have found no fault with the original analysis
> (and have not actually considered it much), I have found three major
> problems with the way it is implemented in current systems. I have
> not written exploits for these [...]

You may be correct, but readers should also know that, at least in Linux:

/usr/src/linux/drivers/char/random.c:

 * All of these routines try to estimate how many bits of randomness a
 * particular randomness source. They do this by keeping track of the
 * first and second order deltas of the event timings.

And then the inputs are run through a SHA hash before being released through /dev/random.

-J
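The first- and second-order-delta heuristic the kernel comment describes can be sketched as follows. This is a simplified re-implementation of the idea, not the kernel's code; the cap of 11 bits and the exact credit formula are illustrative assumptions. Note how it behaves on the write-behind-caching case from the original post: perfectly regular event timings earn zero credit, but the estimator cannot tell unpredictable jitter from jitter an attacker can observe or reconstruct.

```python
class DeltaEntropyEstimator:
    """Credit entropy from event-timing deltas, loosely mirroring the
    heuristic described in drivers/char/random.c (simplified sketch)."""

    def __init__(self):
        self.last_time = 0
        self.last_delta = 0
        self.last_delta2 = 0

    def credit(self, timestamp):
        # First-, second-, and third-order differences of event times.
        delta = timestamp - self.last_time
        delta2 = delta - self.last_delta
        delta3 = delta2 - self.last_delta2
        self.last_time = timestamp
        self.last_delta = delta
        self.last_delta2 = delta2
        # Be pessimistic: credit based on the smallest of the three,
        # capped (the kernel similarly caps per-event credit).
        d = min(abs(delta), abs(delta2), abs(delta3))
        return min(11, d.bit_length() - 1) if d > 0 else 0

# Perfectly regular events (e.g. cache-absorbed writes): no credit
# once the estimator has warmed up.
regular = DeltaEntropyEstimator()
regular_credits = [regular.credit(t) for t in (1000, 2000, 3000, 4000, 5000)]

# Jittery events: positive credit, whether or not the jitter is
# actually unobservable to an attacker.
jittery = DeltaEntropyEstimator()
jittery_credits = [jittery.credit(t) for t in (1000, 2137, 3050, 4999)]
```

The SHA hashing step mentioned above whitens the pool's output, but it cannot add entropy: if an attacker knows or can reconstruct the timing inputs, the hashed output is equally predictable to them.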