Re: [Cryptography] real random numbers

2013-09-16 Thread Joachim Strömbergson

Aloha!

John Denker wrote:
 On 09/15/2013 03:49 AM, Kent Borg wrote:
 
 When Bruce Schneier last put his hand to designing an RNG he 
 concluded that estimating entropy is doomed. I don't think he
 would object to some coarse order-of-magnitude confirmation that
 there is entropy coming in, but I think trying to meter entropy-in
 against entropy-out will either leave you starved or fooled.
 
 That's just completely backwards.  In the world I live in, people get
 fooled because they /didn't/ do the analysis, not because they did.
 
 I very much doubt that Bruce concluded that accounting is doomed. 
 If he did, it would mark a dramatic step backwards from his work on
 the commendable and influential Yarrow PRNG: J. Kelsey, B. Schneier,
 and N. Ferguson (1999) http://www.schneier.com/paper-yarrow.pdf

What Kent is probably referring to is the Fortuna RNG, which is a
successor to Yarrow. One difference between Yarrow and Fortuna is the
lack of an entropy estimator in Fortuna.

As Schneier and Ferguson state in chapter 10.3 of Practical Cryptography
(where Fortuna is described in good detail) [1]:

"Fortuna solves the problem of having to define entropy estimators by
getting rid of them."

[1] https://www.schneier.com/book-practical.html
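For readers who haven't seen the book, the core of Fortuna's no-estimator design is mechanical: events are spread round-robin over 32 pools, and pool i contributes to a reseed only on every 2^i-th reseed, so some pool eventually accumulates enough entropy no matter how wrong any estimator would have been. A minimal sketch in Python (class and method names are mine, not from the book):

```python
import hashlib

NUM_POOLS = 32

class FortunaAccumulator:
    """Toy sketch of Fortuna's entropy accumulator (no estimator needed)."""
    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.event_counter = 0
        self.reseed_counter = 0

    def add_event(self, source_id: int, data: bytes) -> None:
        # Events go into pools round-robin; no attempt to estimate entropy.
        pool = self.pools[self.event_counter % NUM_POOLS]
        pool.update(bytes([source_id, len(data)]) + data)
        self.event_counter += 1

    def reseed_material(self) -> bytes:
        # Pool i is drained only on reseeds divisible by 2**i, so higher
        # pools accumulate for exponentially longer before being used.
        self.reseed_counter += 1
        material = b""
        for i, pool in enumerate(self.pools):
            if self.reseed_counter % (2 ** i) == 0:
                material += pool.digest()
                self.pools[i] = hashlib.sha256()
        return material
```

The point of the schedule is that the defender never has to know which pool got real entropy; eventually one of the slow pools has enough to force the attacker out.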

-- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] real random numbers

2013-09-15 Thread John Denker
Previously I said we need to speak more carefully about these
things.  Let me start by taking my own advice:

Alas on 09/14/2013 12:29 PM, I wrote:
 a) In the linux random device, /any/ user can mix stuff into the
 driver's pool.  This is a non-privileged operation.  The idea is that
 it can't hurt and it might help.  So far so good. b) Contributions of
 the type just mentioned do *not* increase the driver's estimate of
 the entropy in the pool.  If you want to increase the
 entropy-estimate, you need to issue a privileged ioctl.

 ... step (a) cannot get anybody into trouble.  Step (b) gets you into
 trouble if you claim credit for more entropy than was actually
 contributed.

Actually it's one step more complicated than that.  Step (a) 
causes problems if you /underestimate/ the entropy content of
what you contributed.  The problem is that the end-user
application will try to read from the RNG and will stall
due to insufficient entropy available.

Step (b) has the opposite problem: You get into trouble if 
you /overestimate/ the entropy of what you have contributed.
This causes insidious security problems, because your allegedly 
random numbers are not as random as you think.
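The two failure modes can be made concrete with a toy model of the driver's bookkeeping (this is an illustration of the accounting, not the actual Linux driver code):

```python
import hashlib

class ToyEntropyPool:
    """Toy model of mix-without-credit vs. mix-with-credit.

    Step (a): mix() stirs bytes in but adds nothing to the entropy estimate,
    so underestimating leaves readers stalled.
    Step (b): credit() raises the estimate; claiming too much is the
    insidious security failure.
    """
    def __init__(self):
        self.state = b"\x00" * 32
        self.entropy_bits = 0

    def mix(self, data: bytes) -> None:
        # Unprivileged: can't hurt, might help, no credit taken.
        self.state = hashlib.sha256(self.state + data).digest()

    def credit(self, bits: int) -> None:
        # Privileged: only do this with a defensible lower bound.
        self.entropy_bits += bits

    def read(self, nbits: int) -> bytes:
        if nbits > self.entropy_bits:
            raise BlockingIOError("insufficient entropy: reader stalls")
        self.entropy_bits -= nbits
        return hashlib.sha256(self.state + b"out").digest()[: nbits // 8]
```

Mixing without crediting gives the stall of step (a); crediting more bits than were really contributed gives the silent weakness of step (b).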


On 09/14/2013 03:12 PM, John Kelsey wrote:

 Your first two categories are talking about the distribution of 
 entropy--we assume some unpredictability exists, and we want to 
 quantify it in terms of bits of entropy per bit of output.  That's a 
 useful distinction to make, and as you said, if you can get even a 
 little entropy per bit and know how much you're getting, you can get 
 something very close to ideal random bits out.
 
 Your second two categories are talking about different kinds of 
 sources--completely deterministic, or things that can have
 randomness but don't always.  That leaves out sources that always
 have a particular amount of entropy (or at least are always expected
 to!).

That very much depends on what you mean by "expected".
 -- An ill-founded expectation is little more than a wild guess,
  and it is not useful for critical applications.
 ++ OTOH a well-founded statistical expectation value is just
  what we need, and it moves the source firmly out of the
  squish category.

I say again, a squish is not reliably predictable /and/ not
reliably unpredictable.  If you have *any* trustworthy nonzero
lower bound on the entropy content, it's not a squish.

On the other hand, again and again people latch onto something
that is not reliably predictable, call it random, and try
to do something with it without establishing any such lower
bound.  This has led to disaster again and again.

There is an ocean of difference between "not reliably predictable"
and "reliably unpredictable".
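The formalism behind a "trustworthy nonzero lower bound" is min-entropy: H_min = -log2(p_max), where p_max is the attacker's best-guess probability for one sample. A quick sketch (getting a defensible per-sample distribution is the hard part and is assumed here):

```python
import math

def min_entropy_bits(probabilities):
    """Min-entropy of one sample: -log2 of the most likely outcome.

    This is the conservative figure to credit to a pool; Shannon
    entropy is larger and is NOT a safe credit.
    """
    p_max = max(probabilities)
    return -math.log2(p_max)

# A source that is "not reliably predictable" but heavily biased:
# the attacker's best guess is right 90% of the time.
biased = [0.90, 0.05, 0.03, 0.02]
h_min = min_entropy_bits(biased)                    # about 0.15 bits/sample
shannon = -sum(p * math.log2(p) for p in biased)    # about 0.62 bits/sample
```

The gap between the two numbers is exactly where people get fooled: crediting the Shannon figure overstates what an adversary must guess.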

 I'd say even the squish category can be useful in two ways:
 
 a.  If you have sensible mechanisms for collecting entropy, they 
 can't hurt and sometimes help.  For example, if you sample an 
 external clock, most of the time, the answer may be deterministic, 
 but once in a while, you may get some actual entropy, in the sense 
 that the clock drift is sufficient that the sampled value could have 
 one of two values, and an attacker can't know which.

However, alas, the good guys don't know how much either, so they 
don't know how much to take credit for.  An underestimate causes the 
RNG to stall, and an overestimate means the output is not as random
as it should be.  I vehemently recommend against risking either of
these failures.

I emphasize that there are two operations that must be considered
carefully:  
 1) Mixing stuff into the driver's pool, and
 2) taking credit for it ... the right amount of credit.

One without the other is strictly amateur hour.

 b.  If you sample enough squishes, you may accumulate a lot of 
 entropy.

You might, or you might not.  In an adversarial situation, this
is begging for trouble.  I vehemently recommend against this.

 Some ring oscillator designs are built like this, hoping to 
 occasionally sample the transition in value on one of the 
 oscillators.

Hope is not an algorithm.

 The idea is that the rest of the behavior of the oscillators might
 possibly be predicted by an attacker, but what value gets read when
 you sample a value that's transitioning between a 0 and a 1 is really
 random, changed by thermal noise.

So quantify the thermal noise already.  It sounds like you are
using the oscillator as a crude digitizer, digitizing the thermal
noise, which is the first step in the right direction.  The next 
step is to come up with a hard lower bound on the entropy density.
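"Quantify the thermal noise already" is straightforward in principle: the Johnson-Nyquist formula gives V_rms = sqrt(4 k T R B), which pins down, from physics alone, how much the sampled value must wander. A back-of-envelope sketch (the component values are illustrative, not from any particular design):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def johnson_vrms(temp_k: float, resistance_ohm: float,
                 bandwidth_hz: float) -> float:
    """RMS Johnson-Nyquist noise voltage: sqrt(4 k T R B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohm * bandwidth_hz)

# A 100 kOhm resistor at room temperature over a 20 kHz audio bandwidth:
v = johnson_vrms(300.0, 100e3, 20e3)   # roughly 5.8 microvolts RMS
```

Comparing that RMS amplitude against the digitizer's quantization step is what turns "hope" into a calculable lower bound on entropy per sample.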

OTOH when you plug in the actual numbers, you will probably find 
that the oscillator is incredibly inefficient compared to a 
soundcard.

My main point is, there is a perfectly reasonable formalism for 
analyzing these things, so that hope is not required.

Secondarily, there is a huge industry mass-producing soundcards
at a very low price.  Very often, a soundcard is built into the
mainboard, whether you ask for it or not.  So in 

Re: [Cryptography] real random numbers

2013-09-15 Thread Jerry Leichter
On Sep 14, 2013, at 5:38 PM, Kent Borg wrote:
 Things like clock skew are usually nothing but squish ... not reliably 
 predictable, but also not reliably unpredictable. I'm not interested in 
 squish, and I'm not interested in speculation about things that might be 
 random. 
 
 I see the theoretical being the enemy of the good here.
 
 The term squish is entertaining, but be careful that, as you paint away 
 with your broad brush, you don't dismiss engineering realities that 
 matter.

 And once we have built such vaguely secure systems, why reject entropy 
 sources within those systems, merely because you think they look like 
 squish?  If there is a random component, why toss it out?  You seem to 
 respect using hashing to condition and stretch entropy--though why any 
 existing hash shouldn't also fall to your squish generalization, I don't 
 know.
You've completely missed what Denker was getting at with squish.  Squish 
never applies to a fully characterized, deterministic component like a hash.  
Squish is an unknown unknown:  Data that you don't understand, so you think 
it might be random, but you really can't be sure.  Consider the example he 
responded to, that comparing the clocks on the CPU and on the sound card 
should be usable as a source of randomness.  If you dig in to what should be 
usable means here, it comes down to:  Both clocks show some degree of random 
variation, and the random variation is uncorrelated.  That might be true - or 
it might not:  Perhaps there's some path you haven't thought of through the 
power supply that tends to synchronize the two.  Lack of imagination on the 
analyst's part does not equate to lack of correlation (or other failure modes) 
on the system's part!  (In fact, the world is full of unexpected couplings 
between nominally independent events.  I've debugged and fought failures 
in systems built on the unsupported assumption that things will 
smooth out on average.  They are always unexpected, and can be difficult to 
find after the fact.  And ... people don't seem to learn the lesson:  The next 
system makes the same bad assumptions.)

As Denker said:  Adding squish as a source of confusion in a well implemented 
mixer is at worst harmless - if you want to do it, go ahead.  But adding it as 
a source of an entropy estimate is wrong.  Either you have some way of 
estimating the entropy based on real physical modeling, or you're just making 
things up - and just making things up is not the way to build a secure system.
   -- Jerry




Re: [Cryptography] real random numbers

2013-09-15 Thread ianG

On 15/09/13 00:38 AM, Kent Borg wrote:

On 09/14/2013 03:29 PM, John Denker wrote:



And once we have built such vaguely secure systems, why reject entropy
sources within those systems, merely because you think they look
like squish?  If there is a random component, why toss it out?



He's not tossing it out, he's saying that it is no basis for measurement.

Think of the cryptography worldview -- suppliers of black boxes (message 
digests, ciphers, etc.) to the software world are obsessed about the 
properties of the black box, and suppliers want them to be reliable and 
damn near perfect.  No comeback, no liability.


Meanwhile, in the software world, we think very differently.  We want 
stuff that is good enough not perfect.  That's because we know that 
systems are so darn complex that the problems are going to occur 
elsewhere -- either other systems that don't have the cryptographic 
obsession, our own mistakes, or user issues.


E.g., SHA1 is close to perfect for almost all software needs, but for 
the cryptographers, it isn't good enough any more!  We must have SHA2, 
SHA3, etc.  The difference for most real software is pretty much like 
how many bit angels can dance on a pinhead.


As John is on the supplier side, he needs a measurement that is totally 
reliable and totally accurate.  Squish must therefore be dropped from 
that measurement.


...

You dismiss things like clock skew, but when I start to imagine ways
to defeat interrupt timing as an entropy source, your Johnson noise
source also fails: by the time the adversary has enough information
about what is going on inside the GHz-plus box to infer precise clock
phase, precise interrupt timing, and how fast the CPU responds...they
have also tapped into the code that is counting your Johnson.



Once the adversary has done that, all bets are off.  The adversary can 
now probably count the key bits in use, and is probably at the point 
where they can interfere at the bit level.


Typically, we don't build designs to that threat model, that way lies 
TPMs and other madness.  In risk terms, we accept that risk, the user 
loses, and we move on.




There are a lot of installed machines that can get useful entropy from
existing sources, and it seems you would have the man who is dying of
thirst die, because the water isn't pure enough.



It is a problem.  Those on the supplier side of the divide cannot 
deliver the water unless it is pure enough.  Those on the builder side 
don't need pure water when everything else is so much sewage.  But oh 
well, life goes on.




Certainly, if hardware manufacturers want to put dedicated entropy
sources in machines, I approve, and I am even going to use rdrand as
*part* of my random numbers, but in the mean time, give the poor servers
a sip of entropy.  (And bravo to Linux distributions that overruled the
purist Linux maintainer who thought no entropy was better than poorly
audited entropy, we are a lot more secure because of them.)



Right.  The more the merrier.



iang


Re: [Cryptography] real random numbers

2013-09-15 Thread Kent Borg

On 09/15/2013 10:19 AM, John Kelsey wrote:
But those are pretty critical things, especially (a). You need to know 
whether it is yet safe to generate your high-value keypair. For that, 
you don't need super precise entropy estimates, but you do need at 
least a good first cut entropy estimate--does this input string have 
20 bits of entropy or 120 bits? 


Yes: the time I was part of designing a physical RNG product (for use in 
real gambling, for real money), we not only swept up all the entropy 
sources we could and mixed in fixed information such as MAC addresses to 
further make different machines different; our manufacturing procedures 
also included pre-seeding the stored pool with data from a Linux computer 
that had a mouse and keyboard and lots of human input.


We did not try to do entropy accounting, but did worry about having enough.

We also were going way overboard on security thinking, far exceeding 
regulatory requirements for any jurisdiction we looked at.  I don't know 
if it ever shipped to a customer, but we got all the approvals 
necessary so it could have...


I do agree that, though a Linux box might make keys on its first boot, 
it should be used interactively first, and then generate keys.


Again, Ubuntu (at least a desktop install) doesn't include sshd by 
default, you have to decide to install it, and at that point, if there 
is a human setting up things with a keyboard and mouse, there should be 
a lot of entropy.  Ubuntu server installations might be different, and 
I would be very worried about automatic provisioning of server machines 
in bulk.


-kb



Re: [Cryptography] real random numbers

2013-09-15 Thread Kent Borg
John Kelsey wrote:
 I think the big problem with (b) is in quantifying the entropy you get.

Maybe don't.

When Bruce Schneier last put his hand to designing an RNG he concluded that 
estimating entropy is doomed. I don't think he would object to some coarse 
order-of-magnitude confirmation that there is entropy coming in, but I think 
trying to meter entropy-in against entropy-out will either leave you starved or 
fooled.

-kb


Re: [Cryptography] real random numbers

2013-09-14 Thread Kent Borg

On 09/14/2013 03:29 PM, John Denker wrote:
Things like clock skew are usually nothing but squish ... not reliably 
predictable, but also not reliably unpredictable. I'm not interested 
in squish, and I'm not interested in speculation about things that 
might be random. 


I see the theoretical being the enemy of the good here.

The term squish is entertaining, but be careful that, as you paint 
away with your broad brush, you don't dismiss engineering realities 
that matter.


I can see there is an appeal to entropy sources that you can work back 
to some quantum origin, but even they will fail horribly if you don't 
build a larger system that is secure, and secure at some non-trivial 
radius.  (How much Tempest-hardening are you going to do?)


And once we have built such vaguely secure systems, why reject entropy 
sources within those systems, merely because you think they look 
like squish?  If there is a random component, why toss it out?  You 
seem to respect using hashing to condition and stretch entropy--though 
why any existing hash shouldn't also fall to your squish 
generalization, I don't know.  It seems that you would reject using a 
coin toss as a source of entropy because coins are not perfectly fair 
and there are biases in their results.  So?  You respect hashing, why 
not clean the output with a good hash?
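On the coin-toss point specifically: a biased-but-independent coin can be cleaned without even reaching for a hash, using von Neumann's classic trick (take tosses in pairs; 01 yields 0, 10 yields 1, discard 00 and 11). Sketch:

```python
def von_neumann(bits):
    """Debias independent, identically biased coin flips.

    The pairs (0,1) and (1,0) each occur with probability p*(1-p),
    so the surviving outputs are exactly unbiased; equal pairs are
    discarded.  Independence between flips is the assumption doing
    all the work here -- correlated flips break the argument.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

The cost is throughput (a fraction 2p(1-p) of pairs survive), which is exactly the kind of trade a calibrated design can account for.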


You dismiss things like clock skew, but when I start to imagine ways 
to defeat interrupt timing as an entropy source, your Johnson noise 
source also fails: by the time the adversary has enough information 
about what is going on inside the GHz-plus box to infer precise clock 
phase, precise interrupt timing, and how fast the CPU responds...they 
have also tapped into the code that is counting your Johnson.


There are a lot of installed machines that can get useful entropy from 
existing sources, and it seems you would have the man who is dying of 
thirst die, because the water isn't pure enough.


Certainly, if hardware manufacturers want to put dedicated entropy 
sources in machines, I approve, and I am even going to use rdrand as 
*part* of my random numbers, but in the mean time, give the poor servers 
a sip of entropy.  (And bravo to Linux distributions that overruled the 
purist Linux maintainer who thought no entropy was better than poorly 
audited entropy, we are a lot more secure because of them.)



-kb



Re: [Cryptography] real random numbers

2013-09-14 Thread John Kelsey
Your first two categories are talking about the distribution of entropy--we 
assume some unpredictability exists, and we want to quantify it in terms of 
bits of entropy per bit of output.  That's a useful distinction to make, and as 
you said, if you can get even a little entropy per bit and know how much you're 
getting, you can get something very close to ideal random bits out.
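In code terms, "a little entropy per bit, and know how much" means: collect enough raw samples that the accumulated min-entropy exceeds the output size, then compress with a hash. Sketch (the per-byte figure must come from a calibrated lower bound; the numbers here are placeholders):

```python
import hashlib

def condition(raw_samples: bytes, min_entropy_per_byte: float,
              out_bits: int = 256) -> bytes:
    """Hash raw samples down to out_bits, refusing to run on too little input.

    min_entropy_per_byte must be a calibrated lower bound on the source,
    not a guess; the function only enforces the arithmetic.
    """
    budget = len(raw_samples) * min_entropy_per_byte
    if budget < out_bits:
        raise ValueError(
            f"only {budget:.0f} bits of credited entropy, need {out_bits}")
    return hashlib.sha256(raw_samples).digest()

# 8192 raw bytes at a conservative 0.05 bits of min-entropy per byte
# gives a ~409-bit budget, enough margin for a 256-bit output.
```
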

Your second two categories are talking about different kinds of 
sources--completely deterministic, or things that can have randomness but don't 
always.  That leaves out sources that always have a particular amount of 
entropy (or at least are always expected to!).  

I'd say even the squish category can be useful in two ways:

a.  If you have sensible mechanisms for collecting entropy, they can't hurt and 
sometimes help.  For example, if you sample an external clock, most of the 
time, the answer may be deterministic, but once in a while, you may get some 
actual entropy, in the sense that the clock drift is sufficient that the 
sampled value could have one of two values, and an attacker can't know which.  

b.  If you sample enough squishes, you may accumulate a lot of entropy.  Some 
ring oscillator designs are built like this, hoping to occasionally sample the 
transition in value on one of the oscillators.  The idea is that the rest of 
the behavior of the oscillators might possibly be predicted by an attacker, but 
what value gets read when you sample a value that's transitioning between a 0 
and a 1 is really random, changed by thermal noise.  

I think the big problem with (b) is in quantifying the entropy you get.  I also 
think that (b) describes a lot of what commonly gets collected by the OS and 
put into the entropy pool.  

--John


Re: [Cryptography] real random numbers

2013-09-13 Thread John Denker
Executive summary:

The soundcard on one of my machines runs at 192000 Hz.  My beat-up 
old laptop runs at 96000.  An antique server runs at only 48000. 
There are two channels and several bits of entropy per sample.
That's /at least/ a hundred thousand bits per second of real 
industrial-strength entropy -- the kind that cannot be cracked, 
not by the NSA, not by anybody, ever.
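The "hundred thousand bits per second" figure is just arithmetic, even for the slowest card quoted, under a deliberately conservative credit per sample:

```python
def entropy_rate(sample_rate_hz: int, channels: int,
                 bits_per_sample: float) -> float:
    """Credited entropy throughput of a sampled noise source, in bits/second."""
    return sample_rate_hz * channels * bits_per_sample

# Even the antique 48 kHz stereo card clears 100 kbit/s at a very
# conservative ~1.1 credited bits per sample; the 192 kHz card has
# four times the headroom before crediting anything extra.
slow = entropy_rate(48_000, 2, 1.1)
fast = entropy_rate(192_000, 2, 1.1)
```

The per-sample credit is the only non-obvious number in the product, and it is exactly what the calibration procedure described below is for.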

Because of the recent surge in interest, I started working on a 
new version of turbid, the software that manages the soundcard 
and collects the entropy.  Please give me another week or so.

The interesting point is that you really want to rely on the
laws of physics.  Testing the output of an RNG can give an upper 
bound on the amount of entropy, but what we need is a lower bound, 
and only physics can provide that.  The physics only works if 
you /calibrate/ the noise source.  A major selling point of turbid
is the calibration procedure.  I'm working to make that easier for 
non-experts to use.


Concerning radioactive sources:

My friend Simplicio is an armchair cryptographer.  He has a proposal 
to replace triple-DES with quadruple-rot13.  He figures that since it
is more complicated and more esoteric, it must be better.

Simplicio uses physics ideas in the same way.  He thinks radioactivity 
is the One True Source of randomness.  He figures that since it is
more complicated and more esoteric, it must be better.

In fact, anybody who knows the first thing about the physics involved
knows that quantum noise and thermal noise are two parts of the same
elephant.  Specifically, there is only one physical process, as shown
by figure 1 here:
  http://www.av8n.com/physics/oscillator.htm
Quantum noise is the low-temperature asymptote, and thermal noise is
the high-temperature asymptote of the /same/ physical process.
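The "same elephant" claim is just the two limits of one formula: the mean energy of a noisy mode is E(f,T) = (hf/2)·coth(hf/2kT), whose high-temperature limit is kT (thermal noise) and whose low-temperature limit is hf/2 (quantum zero-point noise). Checking both limits numerically:

```python
import math

H_PLANCK = 6.62607015e-34   # J*s
K_BOLTZMANN = 1.380649e-23  # J/K

def mode_energy(freq_hz: float, temp_k: float) -> float:
    """E(f,T) = (hf/2) * coth(hf / 2kT): one formula, both noise regimes."""
    x = H_PLANCK * freq_hz / (2 * K_BOLTZMANN * temp_k)
    return (H_PLANCK * freq_hz / 2) / math.tanh(x)

# High-T / low-f regime: audio frequencies at room temperature -> ~kT
# ("thermal" noise).
thermal = mode_energy(1e4, 300.0)

# Low-T / high-f regime: optical frequencies -> ~hf/2, independent of T
# ("quantum" noise).
quantum = mode_energy(5e14, 0.1)
```

Same function, same physics; only the asymptote changes, which is the point.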

So ... could we please stop talking about radioactive random number
generators and quantum random number generators?  It's embarrassing.

It is true but irrelevant that somebody could attempt a denial-of-service
attack against a thermal-noise generator by pouring liquid nitrogen
over it.  This is irrelevant several times over because:
 a) Any decrease in temperature would be readily detectable, and the 
  RNG could continue to function.  Its productivity would go down by
  a factor of 4, but that's all.
 b) It would be far more effective to pour liquid nitrogen over other
  parts of the computer, leading to complete failure.
 c) It would be even more effective (and more permanent) to pour sulfuric 
  acid over the computer.
 d) Et cetera.
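The "factor of 4" in (a) is straight from the physics: Johnson noise power is proportional to absolute temperature, so dunking a room-temperature (about 300 K) source in liquid nitrogen (77 K) scales it down by 300/77, roughly 3.9:

```python
T_ROOM = 300.0  # K
T_LN2 = 77.0    # K, boiling point of liquid nitrogen

# Johnson noise power scales linearly with T, so this "attack" merely
# reduces the credited entropy rate by a factor any self-test would
# notice, rather than silently breaking the source.
power_ratio = T_ROOM / T_LN2   # about 3.9
```
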

The point is, if the attacker can get that close to your computer, you 
have far more things to worry about than the temperature of your noise 
source.  Mathematical cryptographers should keep in mind the proverb 
that says: If you don't have physical security, you don't have security.

To say the same thing in more positive terms:  If you have any halfway-
reasonable physical security, a thermal noise source is just fine, 
guaranteed by the laws of physics.

In practice, the nonidealities associated with radioactive noise are 
far greater than with thermal noise sources ... not to mention the cost 
and convenience issues.

As I have been saying for more than 10 years, several hundred thousand 
bits per second of industrial-strength entropy is plenty for a wide
range of practical applications.  If anybody needs more than that, we
can discuss it ... but in any case, there are a *lot* of services out 
there that would overnight become much more secure if they started 
using a good source of truly random bits.

The main tricky case is a virtual private server hosted in the cloud.
You can't add a real soundcard to a virtual machine.  My recommendation 
for such a machine is to use a high-quality PRNG and re-seed it at 
frequent intervals.  This is a chicken-and-egg situation:
 a) If you have /enough/ randomness stored onboard the VPS, you can 
  set up a secure pipe to a trusted randomness server somewhere else,
  and get more randomness that way.
 b) OTOH if the VPS gets pwned once, it might be pwned forever, because 
  the bad guys can watch the new random bits coming in, at which point
  the bits are no longer random.
 c) On the third hand, if the bad guys drop even one packet, ever,
  you can recover at that point.
 d) I reckon none of this is worth worrying about too much, because
  at some point the bad guys just strong-arm the hosting provider
  and capture your entire virtual machine.
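The recover-on-one-dropped-packet property in (c) comes from how the reseed is mixed: fresh material is hashed into the old state, so an eavesdropper who misses any single contribution loses track of the state from then on. A sketch of the mixing step only (the secure pipe to the trusted randomness server in (a) is assumed, not shown):

```python
import hashlib

def reseed(state: bytes, fresh: bytes) -> bytes:
    """Fold fresh randomness into PRNG state.

    Predicting the new state requires knowing BOTH the old state and
    the fresh bytes; missing either one breaks the attacker's chain.
    """
    return hashlib.sha256(state + fresh).digest()

state = b"\x00" * 32
for packet in (b"pkt1", b"pkt2", b"pkt3"):
    # An attacker who observed pkt1 and pkt3 but dropped pkt2 cannot
    # reconstruct this chain of states.
    state = reseed(state, packet)
```
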