On Sun, 30 Jul 2000, Mark Murray wrote:

> This is a reversion to the count-entropy-and-block model which I have
> been fiercely resisting (and which argument I thought I had successfully
> defended).
> 
> My solution is to get the entropy gathering at a high enough rate that
> this is not necessary.

How does gathering entropy at a high enough rate solve this in
particular?  Unless you wait for reseeds while reading, it simply
doesn't matter how much entropy is being gathered at the time, since
reading any amount doesn't give you a new key.  Indeed, even if
entropy arrived fast enough that the blocking in read would be very
short, you would still need to block in read to give Yarrow time to
use that entropy.  A sketch of what I mean follows.
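To make the point concrete, here is a minimal userland sketch (plain C
with pthreads, NOT actual kernel code; every name in it is hypothetical,
and the 100 ms reseed interval is an arbitrary assumption) of the
count-entropy-and-block model: the reader gets at most one 256-bit
key's worth of output per reseed, then must sleep until the gatherer
reseeds, no matter how fast entropy is arriving.

/*
 * Hypothetical sketch of "count entropy and block": at most one
 * key's worth (256 bits) of output is released per reseed; further
 * reads block until the gatherer triggers the next reseed.
 * Build with: cc -o sketch sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define KEY_BYTES 32                    /* 256-bit key */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reseeded = PTHREAD_COND_INITIALIZER;
static unsigned generation;             /* bumped on every reseed */
static size_t   doled_out;              /* bytes output since last reseed */

/* Gatherer: pretend enough entropy arrives to reseed every 100 ms. */
static void *gatherer(void *arg)
{
    (void)arg;
    for (;;) {
        usleep(100 * 1000);
        pthread_mutex_lock(&lock);
        generation++;
        doled_out = 0;
        pthread_cond_broadcast(&reseeded);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Reader: blocks whenever the current key has produced 32 bytes. */
static void rng_read(unsigned char *buf, size_t len)
{
    pthread_mutex_lock(&lock);
    while (len > 0) {
        while (doled_out >= KEY_BYTES)  /* key exhausted: wait for reseed */
            pthread_cond_wait(&reseeded, &lock);
        size_t n = KEY_BYTES - doled_out;
        if (n > len)
            n = len;
        memset(buf, 0xA5, n);           /* stand-in for real PRNG output */
        doled_out += n;
        buf += n;
        len -= n;
    }
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t t;
    unsigned char buf[96];              /* 3 keys' worth: blocks twice */

    pthread_create(&t, NULL, gatherer, NULL);
    rng_read(buf, sizeof(buf));
    pthread_mutex_lock(&lock);
    printf("read %zu bytes; %u reseeds so far\n", sizeof(buf), generation);
    pthread_mutex_unlock(&lock);
    return 0;
}

Note that no gathering rate makes the inner wait go away: however fast
the gatherer runs, the reader still has to block once per key's worth
of output.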

The only alternative I can see that you might be thinking of is to
encourage the user to read only a small amount at a time and to read
more a little later, on the assumption that Yarrow will have been
rekeyed in between.  But that would just force the user to do the
blocking manually, and non-deterministically.
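In userland that alternative would look something like the loop below
(the 32-byte chunk size and the 100 ms sleep are pure guesses on the
user's part, which is exactly the non-determinism I mean; partial
reads are treated as errors for brevity):

/* Userland re-creation of the blocking: read a key's worth, then
 * sleep and hope Yarrow has rekeyed before the next read. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[96];
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (size_t off = 0; off < sizeof(buf); off += 32) {
        if (read(fd, buf + off, 32) != 32) {    /* one key's worth */
            perror("read");
            return 1;
        }
        usleep(100 * 1000);     /* guess at the reseed interval */
    }
    close(fd);
    return 0;
}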

So how exactly _would_ just having a high entropy-gathering rate help
in the case where you need a large amount of data from /dev/random
with true entropic value, not just 256 bits' worth?  It's not as if
reseeds would occur while reads were in progress: reads are too fast
for that, and they are splsofttq()-protected anyway.

> I also agreed to _maybe_ look at a re-engineer of the "old" code in a
> secure way if a decent algorithm could be found (I am reading some
> papers about this ATM).
> 
> M
> --
> Mark Murray
> Join the anti-SPAM movement: http://www.cauce.org

--
 Brian Fundakowski Feldman           \  FreeBSD: The Power to Serve!  /
 [EMAIL PROTECTED]                    `------------------------------'


