It looks like there are two different questions being addressed:

* How should OpenSSL architect the random number generators?

* How should these be seeded?

I agree with John that the larger threat comes from poor seeding, but both have 
to be considered.  A weak architecture can be just as bad from a security 
standpoint.


The pull request referenced was about the former.  The consensus seems to be 
leaning toward cascaded RNGs with reseeding.  There are open questions about 
which algorithm(s) to use, where to place the RNG nodes in the hierarchy, and 
how to integrate with threading.

Given the variety of RNGs available, would an EVP RNG interface make sense?  
With a safe default in place (and no weak generators provided), the decision 
can be left to the user.
A side benefit is that the unit tests could implement a simple, fully 
deterministic generator, and the code for producing known sequences could be 
removed from the code base.  A rough sketch of such an interface follows.
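
To make the idea concrete, here is a minimal sketch of a pluggable RNG method 
table plus a deterministic test generator.  All names here (OSSL_RAND_METHOD 
and so on) are invented for illustration; nothing below is an existing 
OpenSSL API.

    /* Hypothetical pluggable RNG method table; not an OpenSSL API. */
    #include <stddef.h>
    #include <string.h>

    typedef struct ossl_rand_method_st {
        int (*seed)(void *ctx, const unsigned char *buf, size_t len);
        int (*generate)(void *ctx, unsigned char *out, size_t len);
        void (*cleanup)(void *ctx);
    } OSSL_RAND_METHOD;

    /* Trivial deterministic generator for unit tests: a bare counter.
     * Tests get reproducible sequences, and the known-answer
     * scaffolding can leave the main code base. */
    typedef struct { unsigned char counter; } TEST_RAND_CTX;

    static int test_seed(void *ctx, const unsigned char *buf, size_t len)
    {
        TEST_RAND_CTX *c = ctx;
        c->counter = len > 0 ? buf[0] : 0;
        return 1;
    }

    static int test_generate(void *ctx, unsigned char *out, size_t len)
    {
        TEST_RAND_CTX *c = ctx;
        size_t i;
        for (i = 0; i < len; i++)
            out[i] = c->counter++;
        return 1;
    }

    static void test_cleanup(void *ctx)
    {
        memset(ctx, 0, sizeof(TEST_RAND_CTX));
    }

    static const OSSL_RAND_METHOD test_rand = {
        test_seed, test_generate, test_cleanup
    };

A unit test would install test_rand, seed it with a known byte, and compare 
the output against a fixed vector.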


There seems to be more discussion about the seeding.

The quality of the seed sources is critical.  The problem is how to quantify 
this.  Determining a true lower bound for an entropy source is a difficult 
problem.  Turbid does this; I'm not aware of any other daemons that do, and 
precious few hardware RNGs even attempt it (let alone prove it).

Estimates of the minimum entropy are available in NIST's SP800-90B draft.  
These aren't ideal and can easily be tricked, but they do represent a 
reputable starting point.  As I mentioned, they appear to be well designed, 
with sound statistics backing them.  The simplest of them is sketched below.
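
For illustration, here is the simplest of the SP800-90B draft estimators, the 
most common value estimate, for byte-valued samples.  This is a sketch only; 
a real assessment would run the draft's full battery of tests.

    /* Most common value min-entropy estimate (SP800-90B draft) for
     * byte samples.  Requires n >= 2.  Link with -lm. */
    #include <math.h>
    #include <stddef.h>

    static double mcv_min_entropy(const unsigned char *samples, size_t n)
    {
        size_t counts[256] = { 0 }, max = 0, i;
        double p_hat, p_upper;

        for (i = 0; i < n; i++)
            counts[samples[i]]++;
        for (i = 0; i < 256; i++)
            if (counts[i] > max)
                max = counts[i];

        p_hat = (double)max / n;
        /* 99% upper confidence bound on the most common value's
         * probability, as the draft specifies. */
        p_upper = p_hat + 2.576 * sqrt(p_hat * (1.0 - p_hat) / (n - 1));
        if (p_upper > 1.0)
            p_upper = 1.0;
        return -log2(p_upper);   /* min-entropy in bits per byte */
    }

The confidence bound makes the estimate deliberately conservative, which is 
the right direction to err in.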

Defence in depth seems prudent: independent sources with agglomeration and 
whitening, along the lines of the sketch below.
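
One straightforward way to agglomerate and whiten is to concatenate draws 
from each source and condition the result through a hash.  The sketch below 
uses OpenSSL's EVP digest interface; read_source() is a placeholder for 
however the individual sources are polled.

    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <string.h>

    /* Placeholder: fetch up to len bytes from source number 'which',
     * returning the number of bytes actually produced. */
    extern size_t read_source(int which, unsigned char *buf, size_t len);

    int gather_seed(unsigned char out[32], int nsources)
    {
        EVP_MD_CTX *md = EVP_MD_CTX_new();
        unsigned char buf[64];
        unsigned int outlen = 0;
        int i, ok = 0;

        if (md == NULL)
            return 0;
        if (EVP_DigestInit_ex(md, EVP_sha256(), NULL)) {
            for (i = 0; i < nsources; i++) {
                size_t got = read_source(i, buf, sizeof(buf));
                /* A dead source contributes nothing but doesn't
                 * abort; the remaining sources still feed the pool. */
                if (got > 0)
                    EVP_DigestUpdate(md, buf, got);
            }
            ok = EVP_DigestFinal_ex(md, out, &outlen);
        }
        OPENSSL_cleanse(buf, sizeof(buf));
        EVP_MD_CTX_free(md);
        return ok && outlen == 32;
    }

A failed or stopped source degrades the output gracefully rather than taking 
the whole collection down with it.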


We shouldn't trust the user to provide entropy.  I've seen what is typically 
provided.  Uninitialised buffers aren't random.  User inputs (mouse and 
keyboard) likewise aren't very good.  That both are still being suggested is 
frustrating.  I've seen worse suggestions too, some to the effect that 
"time(NULL) ^ getpid()" is overkill and time() alone is enough.


As for specific questions and comments:

John Denker wrote:
> If you trust the ambient OS to provide a seed, why not
> trust it for everything, and not bother to implement an
> openssl-specific RNG at all?

I can think of a few possibilities:

* Diversifying the sources provides resistance to the compromise of individual 
sources.  Although a full kernel compromise is unrecoverable, a kernel bug that 
leaked the internal pools in a read-only manner isn't unforeseeable.

* Not all operating systems have good RNGs.

* Draining the kernel's entropy pools is unfriendly behaviour; other processes 
will typically want some randomness too.

* At boot time the kernel pools are empty (low or no quality), and the problem 
compounds when several things require seeding.

* Performance is also a consideration, although with a gradual collection 
strategy this should be less of a concern, except at start-up.


John Denker wrote:
> Are you designing to resist an output-text-only attack?  Or do you
> also want "some" ability to recover from a compromise of
> the PRNG internal state?

Yes to both, ideally.  Feeding additional seed material into a PRNG could help 
in the latter case.  It depends on how easy it is to compromise the PRNG state: 
if it is trivial in terms of resources, it isn't going to be recoverable.  A 
short reseed interval can be effective against a slow attack (but not always).


John Denker wrote:
> Is there state in a persistent file, or only in memory?

Rich Salz replied:
> Only memory.

I'd propose secure memory.  Leaking RNG state is very bad.


John Denker wrote:
>> Do you think we need to use multiple sources of randomness?
> Quality is more important than quantity.

Given the difficulty of quantifying the quality of sources, I think that having 
multiple sources is prudent.  Finding out that a source is low quality when it 
was believed to be good is less embarrassing when you've other sources in 
operation.  Likewise if a source fails outright.

I agree that one good source trumps a pile of low quality sources.  The aim is 
to have several good sources.

A proven source is better than one that is merely believed to be good, but 
even proven sources can fail (kill -STOP, anyone?), so a fall-back is not 
necessarily bad.


John Denker wrote:
> Cascading is absolutely necessary, and must be done "just so", to
> prevent track-and-hold attacks.  One of the weaknesses in the
> Enigma, exploited to the hilt by Bletchley Park, was that each
> change in the internal state was too small.  A large state space
> is not sufficient if the state /changes/ are small.

My suggestion was to XOR the entire state space with new random bits, as in 
the sketch below.  Some RNGs will have better reseeding procedures, and where 
they exist they should be used.  I agree completely that small additions 
aren't useful.
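
A minimal sketch of that reseed, assuming a flat byte-array state (the PRNG 
type and PRNG_STATE_LEN are invented for illustration): every bit of the 
state is perturbed, so the change in internal state is never small.

    #include <stddef.h>

    #define PRNG_STATE_LEN 64

    typedef struct {
        unsigned char state[PRNG_STATE_LEN];
    } PRNG;

    /* Fold a full-length fresh seed into the whole state at once. */
    static void prng_reseed_xor(PRNG *p,
                                const unsigned char seed[PRNG_STATE_LEN])
    {
        size_t i;
        for (i = 0; i < PRNG_STATE_LEN; i++)
            p->state[i] ^= seed[i];
    }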


Kurt Roeckx wrote:
> We currently have a ~/.rnd file. We should probably get rid of it.

Perhaps make it optional?  Embedded systems have trouble with random state at 
boot, and a ~/.rnd file or equivalent is beneficial here.  I've implemented 
this to seed /dev/random a couple of times now, roughly as sketched below.  It 
isn't ideal but it is better than nothing.
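
Roughly, the boot-time flow looks like this: mix the saved seed into the 
kernel pool, then immediately replace the file with fresh bytes so the same 
seed is never replayed after a crash.  Note that writing to the device mixes 
the data in without crediting entropy; crediting requires the RNDADDENTROPY 
ioctl, which isn't shown.  Error handling is trimmed for brevity.

    #include <fcntl.h>
    #include <unistd.h>

    #define SEED_LEN 64

    static void restore_and_refresh_seed(const char *path)
    {
        unsigned char buf[SEED_LEN];
        int seed, dev;

        seed = open(path, O_RDWR);
        /* Writes to /dev/urandom and /dev/random feed the same input
         * pool on Linux; /dev/urandom won't block when we read back. */
        dev = open("/dev/urandom", O_RDWR);
        if (seed < 0 || dev < 0)
            goto done;

        if (read(seed, buf, sizeof(buf)) == (ssize_t)sizeof(buf))
            (void)write(dev, buf, sizeof(buf));   /* mix old seed in */

        if (read(dev, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) {
            (void)lseek(seed, 0, SEEK_SET);
            (void)write(seed, buf, sizeof(buf));  /* store fresh seed */
        }
    done:
        if (seed >= 0) close(seed);
        if (dev >= 0) close(dev);
    }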


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
