At 10:09 AM 8/2/99 -0400, Paul Koning wrote:
>
>1. Estimating entropy.  Yes, that's the hard one.  It's orthogonal
>to everything else.  /dev/random has a fairly simple approach;
>Yarrow is more complex.
>
>It's not clear which is better.  If there's reason to worry about the
>one in /dev/random, a good solution would be to include the one from
>Yarrow and use the smaller of the two answers.

Hard?  That's much worse than hard.  In general, it's impossible in
principle to look at a bit stream and determine any useful lower bound on
its entropy.  Consider the bit stream produced by a light encoding of
/dev/zero.  If person "A" knows the encoding, the conditional entropy is
zero.  If person "B" hasn't yet guessed the encoding, the conditional
entropy is large.

Similar remarks apply to physical entropy:  I can prepare a physical system
where almost any observer would measure lots of entropy, whereas someone
who knew how the system was prepared could easily return it to a state with
10**23 bits less apparent entropy.  Example: spin echoes.

>2. Pool size.  /dev/random has a fairly small pool normally but can be 
>made to use a bigger one.  Yarrow argues that it makes no sense to use 
>a pool larger than N bits if an N bit mixing function is used, so it
>uses a 160 bit pool given that it uses SHA-1.  I can see that this
>argument makes sense.  (That suggests that the notion of increasing
>the /dev/random pool size is not really useful.)

Constructive suggestion:  given an RNG that we think makes good use of an N
bit pool, just instantiate C copies thereof and combine their outputs.
ISTM this should produce something with N*C bits of useful state.
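
A toy sketch of the parallel-instances idea (Python; Sha1Pool is a
stand-in for the real mixing function, and os.urandom merely stands in
for independent seeding of each copy):

    import hashlib, os

    class Sha1Pool:
        # Toy stand-in for a generator built around a 160-bit mixing
        # function; not the actual /dev/random or Yarrow internals.
        def __init__(self, seed):
            self.state = hashlib.sha1(seed).digest()
        def next20(self):
            self.state = hashlib.sha1(self.state).digest()
            return hashlib.sha1(b"out" + self.state).digest()

    class ParallelPools:
        # C independent copies; combined internal state is C*160 bits.
        def __init__(self, c):
            self.pools = [Sha1Pool(os.urandom(20)) for _ in range(c)]
        def next20(self):
            out = bytes(20)
            for p in self.pools:
                out = bytes(a ^ b for a, b in zip(out, p.next20()))
            return out

    rng = ParallelPools(4)    # 4*160 = 640 bits of state
    block = rng.next20()

XORing the outputs means an attacker must recover all C internal states
at once, not any single one.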

>5. "Catastrophic reseeding" to recover from state compromise.
>
>So while this attack is a bit of a stretch, defending against it is
>really easy.  It's worth doing.

I agree.  As you and Sandy pointed out, one could tack this technology onto
/dev/urandom and get rid of one of the two main criticisms.

And could we please call it "quantized reseeding"?  A catastrophe is
usually a bad thing.

>6. Inadequate entropy sources for certain classes of box.

If the TRNG has zero output, all you can do is implement a good PRNG and
give it a large, good initial seed "at the factory".

If the TRNG has a very small output, all you can do is use it wisely.
Quantized reseeding appears to be the state of the art.
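
For concreteness, a rough sketch of quantized reseeding (Python;
RESEED_QUANTUM and the caller-supplied entropy estimates are assumptions
on my part, and as noted above such estimates are shaky at best):

    import hashlib

    RESEED_QUANTUM = 160   # bits; an assumed threshold, not a standard

    class QuantizedReseeder:
        def __init__(self, seed):
            self.state = hashlib.sha1(seed).digest()
            self.staging = hashlib.sha1()
            self.staged_bits = 0
        def add_trng(self, sample, est_bits):
            # est_bits is the caller's (necessarily rough) entropy
            # estimate for this sample; see the caveats above.
            self.staging.update(sample)
            self.staged_bits += est_bits
            if self.staged_bits >= RESEED_QUANTUM:
                # Fold in the whole quantum at once: an attacker who
                # knows the old state must now guess >= RESEED_QUANTUM
                # bits in one shot, not a few bits at a time.
                self.state = hashlib.sha1(self.state +
                                          self.staging.digest()).digest()
                self.staging = hashlib.sha1()
                self.staged_bits = 0
        def next20(self):
            self.state = hashlib.sha1(self.state).digest()
            return hashlib.sha1(b"out" + self.state).digest()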

--------------

There's one thing that hasn't been mentioned in any of the "summaries", so
I'll repeat it:  the existing /dev/urandom has the property that it uses up
*all* the TRNG bits from /dev/random before it begins to act like a PRNG.
Although this is a non-issue if only one application is consuming random
bits, it is a Bad Thing if one application (that only needs a PRNG) is
trying to coexist with another application (that really needs a TRNG).

This, too, is relatively easy to fix, but it needs fixing.
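
One possible shape for the fix, sketched in Python (the per-second
budget is an invented number, and os.urandom merely stands in for the
real TRNG pool):

    import hashlib, os, time

    TRNG_BUDGET = 8   # bytes/second; an assumed cap, tune to taste

    class BudgetedPRNG:
        # PRNG front end that draws on the TRNG only within a fixed
        # budget, so it can't suck the true-entropy pool dry.
        def __init__(self):
            self.state = hashlib.sha1(os.urandom(20)).digest()
            self.window, self.spent = time.monotonic(), 0
        def read(self, nbytes):
            now = time.monotonic()
            if now - self.window >= 1.0:
                self.window, self.spent = now, 0
            if self.spent < TRNG_BUDGET:   # reseed only within budget
                self.state = hashlib.sha1(self.state +
                                          os.urandom(8)).digest()
                self.spent += 8
            out = bytearray()
            while len(out) < nbytes:
                self.state = hashlib.sha1(self.state).digest()
                out += hashlib.sha1(b"out" + self.state).digest()
            return bytes(out[:nbytes])

The PRNG keeps producing output regardless; it just stops competing with
/dev/random readers for scarce TRNG bits.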

>I see no valid argument that there is anything major wrong with the
>current generator, nor that replacing it with Yarrow would be a good
>thing at all.  

I agree, depending on what you mean by "major".  I see three "areas for
improvement":
  a) don't reseed more often than necessary, so it doesn't suck the TRNG dry,
  b) when it does reseed, use quantized reseeding, and
  c) get around the limitation imposed by the width of the mixing function,
perhaps using the parallel-instances trick mentioned above, or otherwise.
