* Jason Holt:
You may be correct, but readers should also know that, at least in Linux:
/usr/src/linux/drivers/char/random.c:
* All of these routines try to estimate how many bits of randomness a
* particular randomness source [can provide]. They do this by keeping
* track of the first and second order deltas of the event timings.
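For readers unfamiliar with the estimator that comment describes, here is a minimal sketch of the idea: credit entropy per event based on the first- and second-order deltas of its timestamp, so that regularly spaced events earn little or nothing. The class and method names (`EntropyEstimator`, `credit_bits`) and the cap of 12 bits are illustrative assumptions, not the kernel's actual code.

```python
class EntropyEstimator:
    """Sketch of a delta-based entropy estimator in the style of the
    random.c comment quoted above (names and cap are assumptions)."""

    def __init__(self):
        self.last_time = 0
        self.last_delta = 0

    def credit_bits(self, timestamp):
        # First-order delta: time since the previous event.
        delta = timestamp - self.last_time
        # Second-order delta: change in the inter-event gap.
        delta2 = delta - self.last_delta
        self.last_time = timestamp
        self.last_delta = delta

        # Take the smaller magnitude: perfectly regular timings make
        # the second-order delta collapse to zero, earning no credit.
        d = min(abs(delta), abs(delta2))
        if d == 0:
            return 0
        # Credit roughly log2(d) bits, capped so one event can never
        # claim an implausible amount of entropy.
        return min(d.bit_length() - 1, 12)
```

With evenly spaced events the second-order delta is zero after the first two samples, so a steady stream of interrupts is (correctly) credited nothing.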
On Sunday 03 July 2005 05:21, Don Davis wrote:
From: Charles M. Hannum [EMAIL PROTECTED]
Date: Fri, 1 Jul 2005 17:08:50 +0000
While I have found no fault with the original analysis,
...I have found three major problems with the way it
is implemented in current systems.
hi, mr. hannum
So the funny thing about, say, SHA-1 is that if you give it less than 160
bits of data, it expands the input into 160 bits, but if you give it more
than 160 bits, it contracts the input into 160 bits. This works, of course,
for any input data, entropic or not.
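The point above is easy to check directly with Python's standard `hashlib`: whatever the input length, the SHA-1 digest is always exactly 160 bits (20 bytes).

```python
import hashlib

# SHA-1 "expands" an 8-bit input and "contracts" an 80,000-bit input
# to the same fixed 160-bit output size.
short = hashlib.sha1(b"a").digest()           # 8 bits in
long_ = hashlib.sha1(b"x" * 10_000).digest()  # 80,000 bits in
print(len(short), len(long_))                 # both 20 bytes = 160 bits
```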
Hash saturation?
From: Charles M. Hannum [EMAIL PROTECTED]
Sent: Jul 3, 2005 7:42 AM
To: Don Davis [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Subject: Re: /dev/random is probably not
...
Also, I don't buy for a picosecond that you have to gather
all timings in order to predict the output. As we know
from
Most implementations of /dev/random (or so-called entropy gathering daemons)
rely on disk I/O timings as a primary source of randomness. This is based on
a CRYPTO '94 paper[1] that analyzed randomness from air turbulence inside the
drive case.
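As an illustration of the gathering technique that paragraph describes (not the actual /dev/random implementation), one can time I/O operations with a high-resolution clock and fold the jittery low-order bits of each measurement into a hash-based pool. The function name and the choice to keep 16 low bits are assumptions for the sketch; on a real system, caching means repeated reads may not touch the disk at all.

```python
import hashlib
import time

def gather_timing_entropy(path, samples=32):
    """Sketch: mix the low-order bits of I/O timings into a SHA-1 pool.
    Illustrative only; caching can make the timings less variable than
    true disk accesses would be."""
    pool = hashlib.sha1()
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        with open(path, "rb") as f:
            f.read(4096)                 # the timed I/O operation
        elapsed = time.perf_counter_ns() - t0
        # Only the low-order bits of the timing carry usable jitter.
        pool.update((elapsed & 0xFFFF).to_bytes(2, "little"))
    return pool.digest()                 # 160-bit pool output
```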
I was recently introduced to Don Davis and, being
On Fri, 1 Jul 2005, Charles M. Hannum wrote:
Most implementations of /dev/random (or so-called entropy gathering daemons)
rely on disk I/O timings as a primary source of randomness. This is based on
a CRYPTO '94 paper[1] that analyzed randomness from air turbulence inside the
drive case.
I