>>>>> "Osma" == Osma Ahvenlampi <[EMAIL PROTECTED]> writes:

 Osma> Arnold G. Reinhold <[EMAIL PROTECTED]> writes:
 >> 1. Mr. Kelsey's argument that entropy should only be added in
 >> large quanta is compelling, but I wonder if it goes far enough. I
 >> would argue that entropy collected from different sources (disk,
 >> network, sound card, user input, etc.) should be collected in
 >> separate pools, with each pool tapped only when enough entropy has
 >> been collected in that pool.

 Osma> You have to realize that /dev/random entropy collection doesn't
 Osma> get one bit, add it to the pool, and increment the entropy
 Osma> counter....

 Osma> So, for each 40 bits mixed into the pool, a few bits of entropy
 Osma> are credited. How do you propose quantizing this?

I think this is pretty simple.

Right now there's one pool: new samples are stirred in, and output is
produced by hashing over the pool contents (that's the outline; the
details are a bit more involved).
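
In rough C, that current flow is something like the sketch below.
All names here are hypothetical, and mix_into() and hash_pool() stand
for the actual mixing and hash-extraction code in random.c, which is
considerably more involved:

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_WORDS 128              /* placeholder size */

    /* Sketch only; names are made up. */
    struct pool {
        uint32_t data[POOL_WORDS];
        int entropy_count;              /* credited entropy, in bits */
    };

    /* Stand-ins for the real mixing and hash code in random.c. */
    void mix_into(struct pool *p, const void *buf, size_t len);
    void hash_pool(struct pool *p, void *out, size_t len);

    static struct pool random_pool;

    void add_entropy(const void *buf, size_t len, int bits)
    {
        mix_into(&random_pool, buf, len);   /* stir the sample in */
        random_pool.entropy_count += bits;  /* credit its entropy */
    }

    void extract_random(void *out, size_t len)
    {
        hash_pool(&random_pool, out, len);      /* output = hash of pool */
        random_pool.entropy_count -= 8 * len;   /* debit what was read */
    }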

The most straightforward way to do what's proposed seems to be like
this:

1. Make two pools, one for /dev/random and one for /dev/urandom.  The
former needs an entropy counter; the latter doesn't.

2. Create a third pool, which doesn't need to be big.  That's the
entropy staging area.  It too has an entropy counter.

3. Have the add entropy function stir into that third pool, and credit 
its entropy counter.

4. Whenever the entropy counter of the staging pool exceeds N bits (a
good value for N is probably the hash length), draw N bits from it,
and debit its entropy counter by N.

If the entropy counter of the /dev/random pool is below K% of its
upper bound (K = 75 has been suggested), stir these N bits into the
/dev/random pool; otherwise, alternate between the two pools.  Either
way, credit the receiving pool's entropy counter by N.  (A rough
sketch of this transfer step follows.)
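
To make the transfer rule concrete, here's a rough sketch of steps
1-4, in the same hypothetical C as above.  N, K, and the pool sizes
are placeholders, and mix_into()/hash_pool() again stand for the
existing random.c routines:

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_WORDS 128      /* placeholder size */
    #define N 160               /* bits per transfer, e.g. hash length */
    #define K 75                /* /dev/random refill threshold, in % */

    struct pool {
        uint32_t data[POOL_WORDS];
        int entropy_count;      /* credited entropy, in bits */
        int max_entropy;        /* upper bound, in bits */
    };

    void mix_into(struct pool *p, const void *buf, size_t len);
    void hash_pool(struct pool *p, void *out, size_t len);

    /* max_entropy is assumed to be set at init time (elided here). */
    static struct pool random_pool;     /* backs /dev/random */
    static struct pool urandom_pool;    /* backs /dev/urandom; its
                                           entropy counter is unused */
    static struct pool staging_pool;    /* small entropy staging area */

    /* Step 4: once the staging pool has accumulated N bits of
     * credited entropy, draw N bits and move them to an output pool. */
    static void drain_staging(void)
    {
        unsigned char block[N / 8];
        static int flip;                /* alternation state */
        struct pool *dst;

        while (staging_pool.entropy_count >= N) {
            hash_pool(&staging_pool, block, sizeof(block));
            staging_pool.entropy_count -= N;

            /* Refill /dev/random while it is below K% full;
             * after that, alternate between the two pools. */
            if (random_pool.entropy_count <
                random_pool.max_entropy * K / 100)
                dst = &random_pool;
            else
                dst = (flip ^= 1) ? &urandom_pool : &random_pool;

            mix_into(dst, block, sizeof(block));
            dst->entropy_count += N;
            if (dst->entropy_count > dst->max_entropy)
                dst->entropy_count = dst->max_entropy;
        }
    }

    /* Step 3: all incoming samples are stirred into the staging
     * pool first, never directly into the output pools. */
    void add_entropy(const void *buf, size_t len, int bits)
    {
        mix_into(&staging_pool, buf, len);
        staging_pool.entropy_count += bits;
        drain_staging();
    }

The alternation state is just a toggle; anything that keeps
/dev/urandom fed once /dev/random is mostly full would do.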

The above retains the existing basic structure: its mixing
algorithms, entropy bookkeeping, and so on.  The major delta is the
multiple pools and the carrying of entropy from the staging pool to
the others.

        paul
