"in your case, hash 128+N samples to get, say, 127.99 bits of entropy per
hash output. N is small, under 20 I think."

Yeah, this is certainly inspiring with respect to milking decent entropy out
of coldbootish environments. If we assume the use of a "good" hash, then the
problem reduces to asking how much entropy each sample is worth.
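
To make that concrete, here's a quick Python sketch of the "dumb extractor
plus good hash" idea. It's illustrative only: it assumes each raw timing
sample is worth at least ~1 bit of entropy, uses SHA-256 as the "good" hash,
picks 148 samples (128 plus roughly Sandy's N of margin), and polls a timer
in a loop as a stand-in for real interrupt timestamps.

  import hashlib
  import time

  SAMPLES = 148  # 128 wanted bits plus ~20 extra samples of margin
  # Assumption: each raw timing sample carries at least ~1 bit of entropy.

  def gather_samples(n):
      # Stand-in for interrupt timestamps; a real extractor would log the
      # cycle counter at each interrupt instead of polling in a loop.
      return [time.perf_counter_ns() for _ in range(n)]

  def extract_128_bits():
      # Feed every raw sample into one SHA-256 computation, then truncate.
      h = hashlib.sha256()
      for s in gather_samples(SAMPLES):
          h.update(s.to_bytes(8, "little"))
      return h.digest()[:16]  # 128-bit output

  print(extract_128_bits().hex())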

But this is where Pandora's box opens up: modern systems -- even mobile
phones -- are so complicated that autopseudorandomness can look very
convincingly like a TRNG. For instance, we could have predictable cache
stalls, bus stalls, pipeline stalls, etc., which interact like a decent PRNG
and thereby render the appearance of physical entropy even in the absence
of interrupts. But we could still end up with a painfully narrow set of
possible outputs, one that is nonetheless far too large for us to enumerate,
so the narrowness goes undetected. For instance, our 128-bit random number
might be worth only 70 bits, and we likely wouldn't detect that weakness
until it comes back to bite us in the future.
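
Back-of-envelope: if the outputs really live in a set of only 2^70 values,
the chance of seeing any repeat among k outputs is roughly
1 - e^(-k^2 / 2^71) by the usual birthday approximation, so we'd need on
the order of 2^35 outputs before collisions even start to show up. Nobody
gathers that many in casual testing, which is how the deficiency stays
hidden.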

Massively parallel testing is the way to go. While we can't hope to cover
any decent fraction of a 128-bit space, we _can_ "smell" the quality of a
TRNG with the LMICBR test, using "rootish" sample counts, i.e. on the order
of the square root of the size of the whole space:
http://jytter.blogspot.com/2012/08/characterization.html
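
As a toy illustration of the "rootish" idea (this is just a generic
birthday-collision count, not the LMICBR statistic from the linked post),
shrink the problem to 32-bit outputs: a couple of times sqrt(2^32) samples
already gives an expected collision count around 2 for a full-entropy
source, and a source whose effective state space is much narrower than 32
bits will blow well past that. Here os.urandom stands in for the source
under test.

  import os
  from collections import Counter

  OUTPUT_BITS = 32        # toy width; the real question is about 128 bits
  SAMPLES = 1 << 17       # "rootish": about twice sqrt(2^32)

  def sample_output():
      # Stand-in source under test; swap in your TRNG's 32-bit outputs.
      return os.urandom(4)

  def collision_count(n):
      counts = Counter(sample_output() for _ in range(n))
      return sum(c - 1 for c in counts.values())

  observed = collision_count(SAMPLES)
  # Birthday expectation for a full-entropy source: ~ n*(n-1) / 2^(bits+1).
  expected = SAMPLES * (SAMPLES - 1) / 2 ** (OUTPUT_BITS + 1)
  print("observed %d collisions, expected ~%.1f" % (observed, expected))
  # A large excess over the expectation means the effective state space is
  # much narrower than 2^32, i.e. the source is worth fewer bits than claimed.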

Russell Leidich


On Sat, Nov 22, 2014 at 6:46 PM, Sandy Harris <sandyinch...@gmail.com>
wrote:

> On Sat, Nov 22, 2014 at 11:58 PM, Russell Leidich <pke...@gmail.com>
> wrote:
>
> > 1. Let's do the math. Let's assume that we have a really dumb entropy
> > extractor ... that the timing of each
> > interrupt arrives predictably, but for an error of 1 CPU clock tick, at
> > random. ... 128 interrupts gives us 128 bits of entropy. ...
> > ... let's say we hash this long timestamp stream through a
> > cryptographically wonderful PRNG, yielding 128 bits of noise. Applying
> the
> > reflexive density constant, we expect that (1-1/e) or so of the 2^128
> > _theoretically_ possible hashes will be _actually_ possible. So, roughly
> > speaking, we drop down to 127 bits of entropy. Next, adjust for the fact
> > that maybe our PRNG ain't so wonderful after all because it has unseen
> > biases, and maybe we're down to 120 bits. Whatever. We still have a
> freaking
> > strong random number at the end of the day -- all from a very coldbootish
> > system.
>
> John Denker's Turbid paper treats the math for this in some detail
> with explicit, fairly weak, assumptions about properties of the hash.
> It shows that, given a reliable figure for minimum input entropy per
> sample (in Turbid, proven, but you could use an estimate & get a
> weaker result) you can get within epsilon of full output entropy by
> using slightly more inputs.
>
> in your case, hash 128+N samples to get, say, 127.99 bits of entropy
> per hash output. N is small, under 20 I think.
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
