I may be way off, but it seems to me that a kernel-level RNG can only
pick up entropy from boot, which means hardware noise. Isn't that easy
to beat with an acoustic attack? Maybe user space is a bit better
because there is more entropy to be gathered.
On 5/5/2016 7:58 PM, Russell Leidich wrote:
All else being equal, I would prefer to have my TRNG in the kernel,
for the aforementioned reasons of memory access security.
But in the real world, this distinction is minor. More significantly,
kernel TRNGs differ from userspace ones in their use of hardware
sources of randomness, such as network packet contents and mouse
movements. Conventionally, this is considered to be a good thing
because it provides a diversity of entropy sources which are difficult
to model, and must all be modelled in different ways. By comparison,
my own userspace TRNGs (Jytter and Enranda) rely on only the CPU timer.
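To make that concrete, here is a rough C sketch of the general idea,
not Jytter's or Enranda's actual algorithm, assuming an x86 target with
the GCC/Clang __rdtsc() intrinsic: read the timestamp counter back to
back and keep the low bit of each delta. The function name is my own.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>  /* __rdtsc(); assumes an x86 target with GCC/Clang */

/* Collect one raw byte from the low bits of back-to-back timestamp deltas.
   The jitter comes from pipeline, cache, and interrupt noise; the output is
   raw and biased, so it must be conditioned before use. */
uint8_t timer_jitter_byte(void) {
    uint8_t bits = 0;
    for (int i = 0; i < 8; i++) {
        uint64_t t0 = __rdtsc();
        uint64_t t1 = __rdtsc();
        bits = (uint8_t)((bits << 1) | ((t1 - t0) & 1));
    }
    return bits;
}

int main(void) {
    for (int i = 0; i < 16; i++)
        printf("%02x", timer_jitter_byte());
    putchar('\n');
    return 0;
}
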
I would argue, however, that using hardware randomness is
fundamentally less secure at the only level that actually matters
(overall real world systemic security), even if the ultimate source of
the randomness is perfect in the quantum sense. The reason has nothing
to do with the source itself. Rather, it's the bus between the CPU and
the source which is so horribly exploitable, to say nothing of the
bugs invited by touching so much hardware. It takes little
sophistication or money to insert a probe between the two, or better
yet, to manufacture a motherboard with such a tap built in. Sure, a
CPU manufacturer could record accesses to the timer which resides on
die, but then they would have the problem of needing to conspire with
motherboard vendors to radiate that data back to the cloud, perhaps
via a network chip which "accidentally" contacts a particular IP on
rare occasion. But less conspiratorially speaking, a bus tap could be
installed in an evil maid attack using a screwdriver. For that matter,
it's not too difficult to imagine a drone which could fly into a data
center and deposit a high precision electromagnetic sensor on the
outside of a server rack, sensitive to the frequencies used on the
frontside bus. At least in principle, Fourier analysis could be used
to reverse engineer the signals travelling across the bus from the 2D
slice of radiation incident to the receiving surface of the sensor.
MRI machines have been using similar radio wave decoding math for
decades, with obvious success.
However, said evil maid could not read the inputs to a timer-based
TRNG so easily, because doing so would generally require the root
password or an OS vulnerability or a JTAG connection to the CPU pads,
in which case all encryption is moot anyway. If said TRNG resided
in userspace, then in theory a security hole in an application could
facilitate remote compromise, but the same could be said of
applications which read /dev/random, then store the results in their
userspace memory.
If I were to use any hardware other than the CPU timer, I would want
an encrypted connection between the hardware source and the CPU core,
leaving as little decrypted raw entropy in memory or higher level
caches as possible. For example, CPU debug registers would be
preferable to a line in the level 2 cache. There is also the question
of key exchange spoofing across that leaky bus hierarchy. And where
would we get the entropy to encrypt that connection? D'oh! Ah, but we
could use trusted platform modules! Uhm, no, because it's much easier
to create weak hardware RNGs which look solid than to engineer the CPU
to poison timer-based TRNGs with predictable timestamps, because those
timestamps would stick out like sore thumbs. And also no because TPMs
reside on the same leaky bus, usually LPC which is indirectly
connected to PCIe, affording two attacks for the price of one. I'm
more sanguine about the sort of TRNG registers that DJ mentioned,
which are readable in userspace but reside on-die, than any external
solutions, although I don't trust them completely because weakening
them in an undetectable manner would require much less sophisticated
engineering than weakening the timestamp; they might be combined for
greater security.
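For illustration only, combining the two might look like the sketch
below, assuming an x86 CPU with RDRAND (compile with -mrdrnd for the
_rdrand64_step() intrinsic); the function name and the XOR-with-timer
step are my own assumptions, not anyone's shipping design.

#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc() and, with -mrdrnd, _rdrand64_step() */

/* Draw one 64-bit sample that depends on both the on-die RNG register and
   the timestamp counter. XOR preserves unpredictability if either input is
   unpredictable and the two are independent; a real design would feed both
   through a cryptographic conditioner rather than emit this directly. */
uint64_t combined_sample(void) {
    unsigned long long hw = 0;
    int ok = _rdrand64_step(&hw);  /* on-die RNG; may fail and return 0 */
    uint64_t ts = __rdtsc();       /* CPU timestamp counter */
    return (ok ? (uint64_t)hw : 0) ^ (ts * 0x9E3779B97F4A7C15ULL);
}
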
One criticism against timer-based TRNGs is that when booting very
simple devices disconnected from the network, their outputs will
become more predictable. This is probably true, but part of the
validation and testing of the TRNG would be to run it under such
circumstances (probably in relative cryostasis) and adjust the
entropy lower bound appropriately. It's much easier to perform such
characterization for a timer-based TRNG than a "kitchen sink" TRNG
susceptible to the unknown statistical vagaries of a wide diversity of
hardware.
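As a hedged sketch of that characterization step, the simplest thing
one could do is the most-common-value bound below: run the raw
generator on the quiet, disconnected device, then credit only the
measured min-entropy per byte. It's nowhere near a full statistical
evaluation, but it illustrates the point.

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Most-common-value min-entropy bound: H_min = -log2(p_max), where p_max is
   the frequency of the most common raw byte. Use this as the pessimistic
   per-byte entropy credit for the quiet device. Link with -lm. */
double min_entropy_per_byte(const uint8_t *buf, size_t len) {
    size_t count[256] = {0};
    size_t max = 0;
    for (size_t i = 0; i < len; i++)
        count[buf[i]]++;
    for (int v = 0; v < 256; v++)
        if (count[v] > max)
            max = count[v];
    return -log2((double)max / (double)len);
}
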
In other words, it's better to have weak entropy that you know to be
weak, and can scale to strength, than strong entropy which is
susceptible to unpredictable massive downspikes in quality, especially
insofar as concerns hardware which was never intended to behave as a
TRNG, e.g. a spinning disc. What is hard for the attacker to model is
also hard for the designer to model.
It's obviously appealing, then, to think of hybridizing timer and
device entropy. All else being equal, this would seem to be the most
secure approach. If we disregard the negative implications for
bandwidth (because when you're monitoring that hardware output, you're
missing out on timer entropy), there is the issue of ensuring
homogeneous mixing: we can't substitute audio entropy for keyboard
entropy, etc., because the whole point is that we don't trust any one
source in isolation. So we need to ensure that each source is mixed
into each random output, directly or indirectly, which then further
constrains bandwidth. Otherwise, a burst of predictable behavior, such
as an error storm, might suddenly arise from one particular device,
which was not contemplated in the model developed by the programmer of
the TRNG. To the extent that such error storms might be induced by
attack, we would have a serious problem. And there's the issue of
expanding the OS security surface by sticking our fingers into so many
driver interfaces. And then there's the risk that hardware traffic
radiating across the bus would also give an attacker a hint as to when
you will read the timer. So then you downthrottle the entropy value of
the timer, yet further constraining bandwidth...
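A toy sketch of that mixing constraint is below (three named sources
and a non-cryptographic placeholder mix step, both purely my
assumptions): the pool refuses to emit until every source has
contributed since the last output, which is exactly where the
bandwidth cost comes from.

#include <stdint.h>

#define NUM_SOURCES 3u  /* e.g. timer, audio, keyboard; purely illustrative */

struct pool {
    uint64_t state[4];
    unsigned contributed;  /* bitmask of sources mixed in since last output */
};

/* Fold one raw word from a given source into the pool. The rotate-xor-multiply
   step is a stand-in; a real pool would use a cryptographic hash or sponge. */
void pool_mix(struct pool *p, unsigned source_id, uint64_t raw) {
    uint64_t s = (p->state[source_id & 3] ^ raw) * 0x9E3779B97F4A7C15ULL;
    p->state[source_id & 3] = (s << 13) | (s >> 51);
    p->contributed |= 1u << source_id;
}

/* Emit an output only if every source has contributed since the last emit;
   otherwise refuse, trading bandwidth for homogeneous mixing. */
int pool_extract(struct pool *p, uint64_t *out) {
    if (p->contributed != (1u << NUM_SOURCES) - 1u)
        return 0;
    *out = p->state[0] ^ p->state[1] ^ p->state[2] ^ p->state[3];
    p->contributed = 0;
    return 1;
}
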
I will be the first to admit that in the present crisis of entropy
starvation, which can only get worse with the rise of IoT, the most
successful approach may well end up being the one which is
sufficiently fast and has passed the test of time, rather than the one
which is theoretically the most secure. Starfish are evolutionarily
successful because they're simple and highly adaptable, even though
they're not very smart.
For those who wish to develop hardware TRNGs, I would recommend that
you at least quantify the randomness of your raw entropy stream by
analyzing it with Dyspoissometer or the like. This won't prove that
it's not all pseudorandom, but it will help to catch overly optimistic
assumptions about said stream, especially in rare operating modes in
which it becomes temporarily much more predictable.
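Failing that, even a plain Shannon estimate over the raw byte
histogram, sketched below, will flag the sort of operating mode in
which the stream suddenly collapses toward a few values. It says
nothing about pseudorandomness, only about obvious bias, and it's a
much weaker measure than what Dyspoissometer reports.

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Shannon entropy of the raw stream in bits per byte (8.0 is the ceiling).
   A sudden drop between operating modes is the red flag to look for.
   Link with -lm. */
double shannon_bits_per_byte(const uint8_t *buf, size_t len) {
    size_t count[256] = {0};
    double h = 0.0;
    for (size_t i = 0; i < len; i++)
        count[buf[i]]++;
    for (int v = 0; v < 256; v++) {
        if (count[v] == 0)
            continue;
        double p = (double)count[v] / (double)len;
        h -= p * log2(p);
    }
    return h;
}
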
On Thu, May 5, 2016 at 9:40 AM, shawn wilson <[email protected]> wrote:
Just reflecting on the Linux RNG thread a bit ago, is there any
technical reason to have RNG in kernel space?
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography