Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Wed, Nov 13, 2013 at 01:08:07AM -0500, Theodore Ts'o wrote:
> On Tue, Nov 12, 2013 at 11:23:03PM -0500, Greg Price wrote:
> > That's a good idea.  I've worried about the same thing, but hadn't
> > thought of that solution.
> 
> I think the key is that we set a default of requiring 128 bits, or 5
> minutes, with boot-line options to change the defaults.  BTW, with the
> changes that are scheduled for 3.13, this shouldn't be a problem on
> most desktops.  From my T430s laptop: [...]
> 
> So even without adding device attach times (which is on the todo list)
> the /dev/urandom pool is getting an estimated 128 bits of entropy
> almost two seconds *before* the root file system is remounted
> read/write.

Great!


> This is why I've been working on improving the random driver's
> efficiency in getting the urandom pool seeded as soon as possible, as
> a higher priority than adding blocking-on-boot for /dev/urandom.

Makes sense.  Blocking on boot is only sustainable anyway if it rarely
lasts past early boot.

Greg
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Tue, Nov 12, 2013 at 11:23:03PM -0500, Greg Price wrote:
> > The basic idea is that we don't want to break systems, but we do want
> > to gently coerce people to do the right thing.  Otherwise, I'm worried
> > that distros, or embedded/mobile/consumer electronics engineers would
> > just patch out the check.
> 
> That's a good idea.  I've worried about the same thing, but hadn't
> thought of that solution.

I think the key is that we set a default of requiring 128 bits, or 5
minutes, with boot-line options to change the defaults.  BTW, with the
changes that are scheduled for 3.13, this shouldn't be a problem on
most desktops.  From my T430s laptop:

...
[4.446047] random: nonblocking pool is initialized
[4.542119] usb 3-1.6: New USB device found, idVendor=04f2, idProduct=b2da
[4.542124] usb 3-1.6: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[4.542128] usb 3-1.6: Product: Integrated Camera
[4.542131] usb 3-1.6: Manufacturer: Chicony Electronics Co., Ltd.
[4.575753] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[4.653338] udevd[462]: starting version 175
...
[6.253131] EXT4-fs (sdc3): re-mounted. Opts: (null)

So even without adding device attach times (which is on the todo list)
the /dev/urandom pool is getting an estimated 128 bits of entropy
almost two seconds *before* the root file system is remounted
read/write.

(And this is also before fixing the rc80211 minstrel code to stop
wasting about two dozen bits of entropy at startup --- it's using
get_random_bytes even though it doesn't actually need
cryptographically secure random numbers.)

This is why I've been working on improving the random driver's
efficiency in getting the urandom pool seeded as soon as possible, as
a higher priority than adding blocking-on-boot for /dev/urandom.

> And, pray tell, how will you know that you have done that?
> 
> Even the best entropy estimation algorithms are nothing but estimations,
> and min-entropy is the hardest form of entropy to estimate.

Of course it's only an estimate.  Some researchers have looked into
this and their results show that at least for x86 desktop/servers, we
appear to be conservative enough in our entropy estimation.  But
ultimately, yes, that is an issue I am concerned about.  I believe
it's a separable problem we can work on independently of other
/dev/random issues --- and I'm hoping we can get some students to
study it on a variety of different hardware platforms and entropy
sources.

- Ted




Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 08:51:18PM -0800, H. Peter Anvin wrote:
> On 11/12/2013 08:37 PM, Greg Price wrote:
> > I'm thinking only of boot-time blocking.  The idea is that once
> > /dev/urandom is seeded with, say, 128 bits of min-entropy in the
> > absolute, information-theoretic sense, it can produce an infinite
> > supply (or something like 2^128 bits, which amounts to the same thing)
> > of bits that can't be distinguished from random, short of breaking or
> > brute-forcing the crypto.  So once it's seeded, it's good forever.
> 
> And, pray tell, how will you know that you have done that?
> 
> Even the best entropy estimation algorithms are nothing but estimations,

Indeed.  We do our best, but we can't be sure we have as much entropy
as we think.

The status quo here is that /dev/urandom will cheerfully answer
requests even when, by our own estimates, we only have a small amount
of entropy and anything we return will be guessable.  What Ted and I
are discussing in this thread is to have it wait until, as best we can
estimate, it has enough entropy to give an unpredictable answer.  The
status quo has the same effect as an arbitrarily too-optimistic
estimate.

The key point when it comes to the question of going *back* to
blocking is that even if the estimates are bad and in reality the
answer is guessable, it won't get any *more* guessable in the future.
If we think we have 128 bits of input min-entropy but we only have
(say) 32, meaning some states we could be in are as likely as 2^(-32),
then once an attacker sees a handful of bytes of output (*) they can
check a guess at our input and predict all our other output with as
few as 2^32 guesses, depending on the distribution.  If the attacker
sees a gigabyte or a petabyte of output, they have exactly the same
ability.  So there's no good reason to stop.

On the other hand, because our estimates may be wrong, it certainly
makes sense to keep feeding new entropy into the pool.  Maybe a later
seeding will have enough real entropy to make us unpredictable from
then on.  We could also use bigger and bigger reseeds, as a hedge
against our estimates being systematically too low in some
environment.
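(The "bigger and bigger reseeds" idea is essentially an exponential
reseed schedule, as in Fortuna.  A toy sketch in Python, purely
illustrative -- names and numbers are made up, and none of this is
what drivers/char/random.c currently does:)

```python
# Sketch of an exponential reseed schedule ("bigger and bigger
# reseeds") -- illustrative only.
def reseed_schedule(base_bits=64, rounds=8):
    # Each reseed demands twice the estimated entropy of the previous
    # one.  If our per-sample entropy estimates are off by any constant
    # factor, some reseed in the schedule is still large enough to make
    # the pool state unpredictable from then on.
    return [base_bits << i for i in range(rounds)]

print(reseed_schedule())  # [64, 128, 256, 512, 1024, 2048, 4096, 8192]
```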

Does that make sense?  Do you have other ideas for guarding against
the case where our estimates are low?

Greg


(*) Math note: the attacker morally needs only 32 bits.  They actually
need a little more than that, because some of the (at least) 2^32
possible states probably correspond to the same first 32 bits of
output.  By standard probability bounds, for any given set of 2^32
possible input states, if the generator is good then probably no more
than ln(2^32) = 22 or so of them correspond to the same first 32 bits.
About 37 bits of output is enough that, with high probability, no
other candidate state matches the observed output, and with 64 bits =
8 bytes of output a spurious match becomes overwhelmingly unlikely.
If there
are more than 2^32 possible states because the min-entropy is 32 but
some inputs are less likely, then the attacker needs even less output
to be able to confirm the most likely guesses.
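(To put rough numbers on the note above -- a back-of-the-envelope
calculation, assuming 2^32 equally likely input states and a good
generator whose outputs behave like uniform random bits:)

```python
# Rough numbers for the math note above: with 2**32 equally likely
# candidate states and a good generator, how many *other* states are
# expected to produce the same first k bits of output?
N = 2 ** 32                      # candidate input states

for k in (32, 37, 64):           # bits of output the attacker sees
    expected_matches = (N - 1) / 2 ** k
    print(f"{k} bits seen: ~{expected_matches:.3g} other states collide")
```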


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread H. Peter Anvin
On 11/12/2013 08:37 PM, Greg Price wrote:
> 
> I'm thinking only of boot-time blocking.  The idea is that once
> /dev/urandom is seeded with, say, 128 bits of min-entropy in the
> absolute, information-theoretic sense, it can produce an infinite
> supply (or something like 2^128 bits, which amounts to the same thing)
> of bits that can't be distinguished from random, short of breaking or
> brute-forcing the crypto.  So once it's seeded, it's good forever.
> 

And, pray tell, how will you know that you have done that?

Even the best entropy estimation algorithms are nothing but estimations,
and min-entropy is the hardest form of entropy to estimate.

-hpa



Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 08:02:09PM -0800, H. Peter Anvin wrote:
> One thing, too, if we are talking about anything other than
> boot-time-only blocking: going from a nonblocking to a blocking
> condition means being able to accept a short read, and right now *many*
> users of /dev/urandom are not ready to accept a short read.

I'm thinking only of boot-time blocking.  The idea is that once
/dev/urandom is seeded with, say, 128 bits of min-entropy in the
absolute, information-theoretic sense, it can produce an infinite
supply (or something like 2^128 bits, which amounts to the same thing)
of bits that can't be distinguished from random, short of breaking or
brute-forcing the crypto.  So once it's seeded, it's good forever.
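(For concreteness, "min-entropy" here is -log2 of the probability of
the single most likely state -- the measure that matters for a
guessing attacker, who tries the best guess first.  A toy Python
illustration with made-up numbers:)

```python
# Toy comparison of Shannon entropy vs. min-entropy -- illustrative
# numbers only.  Min-entropy is what matters for guessing attacks,
# because the attacker tries the most likely state first.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    return -math.log2(max(probs))

# A skewed source: one state has probability 1/2, and 1023 others
# share the remaining probability equally.
probs = [0.5] + [0.5 / 1023] * 1023

print(f"Shannon entropy: {shannon_entropy(probs):.2f} bits")  # ~6 bits
print(f"min-entropy:     {min_entropy(probs):.2f} bits")      # exactly 1 bit
```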

We don't even strictly need to keep adding more entropy once it's
seeded, but it's good because (a) hey, it's cheap, (b) entropy
estimation is hard, and maybe in some situations we're too optimistic
and think we're well seeded before we really are, (c) some
cryptographers like to imagine having a PRNG recover from an attacker
learning its internal state by using fresh entropy.  Other
cryptographers think (c) is a little silly because an attacker that
can do that can probably keep doing it, or take over the machine
entirely, but it's not inconceivable, and there's (a) and (b).  So we
keep adding entropy when we have it, but if we don't have new entropy
for a long time there's no need to start blocking again.

Cheers,
Greg


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 10:32:05PM -0500, Theodore Ts'o wrote:
> One of the things I've been thinking about with respect to making
> /dev/urandom block is a module parameter (which could be specified
> on the boot command line) that lets us set a limit on how long
> /dev/urandom will block, after which we log a high-priority message
> that there was an attempt to read from /dev/urandom which couldn't
> be satisfied, and then allow the /dev/urandom read to succeed.
> 
> The basic idea is that we don't want to break systems, but we do want
> to gently coerce people to do the right thing.  Otherwise, I'm worried
> that distros, or embedded/mobile/consumer electronics engineers, would
> just patch out the check.

That's a good idea.  I've worried about the same thing, but hadn't
thought of that solution.

Greg


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread H. Peter Anvin
On 11/12/2013 07:32 PM, Theodore Ts'o wrote:
> On Tue, Nov 12, 2013 at 05:40:09PM -0500, Greg Price wrote:
>>
>> Beyond these easy cleanups, I have a couple of patches queued up (just
>> written yesterday, not quite finished) to make /dev/urandom block at
>> boot until it has enough entropy, as the "Mining your P's and Q's"
>> paper recommended and people have occasionally discussed since then.
>> Those patches were definitely for after 3.13 anyway, and I'll send
>> them when they're ready.  I see some notifications and warnings in
>> this direction in the random.git tree, which is great.
> 
> One of the things I've been thinking about with respect to making
> /dev/urandom block is a module parameter (which could be specified
> on the boot command line) that lets us set a limit on how long
> /dev/urandom will block, after which we log a high-priority message
> that there was an attempt to read from /dev/urandom which couldn't
> be satisfied, and then allow the /dev/urandom read to succeed.
> 
> The basic idea is that we don't want to break systems, but we do want
> to gently coerce people to do the right thing.  Otherwise, I'm worried
> that distros, or embedded/mobile/consumer electronics engineers, would
> just patch out the check.  If we make the default be something like
> "block for 5 minutes" and then log a message, we won't completely
> break a user who is trying to log in to a VM, but it will be obvious,
> both from the delay and from the kern.crit log message, that there is
> a potential problem here that a system administrator needs to worry
> about.
> 

One thing, too, if we are talking about anything other than
boot-time-only blocking: going from a nonblocking to a blocking
condition means being able to accept a short read, and right now *many*
users of /dev/urandom are not ready to accept a short read.

-hpa




Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Tue, Nov 12, 2013 at 05:40:09PM -0500, Greg Price wrote:
> 
> Beyond these easy cleanups, I have a couple of patches queued up (just
> written yesterday, not quite finished) to make /dev/urandom block at
> boot until it has enough entropy, as the "Mining your P's and Q's"
> paper recommended and people have occasionally discussed since then.
> Those patches were definitely for after 3.13 anyway, and I'll send
> them when they're ready.  I see some notifications and warnings in
> this direction in the random.git tree, which is great.

One of the things I've been thinking about with respect to making
/dev/urandom block is a module parameter (which could be specified on
the boot command line) that lets us set a limit on how long
/dev/urandom will block, after which we log a high-priority message
that there was an attempt to read from /dev/urandom which couldn't be
satisfied, and then allow the /dev/urandom read to succeed.

The basic idea is that we don't want to break systems, but we do want
to gently coerce people to do the right thing.  Otherwise, I'm worried
that distros, or embedded/mobile/consumer electronics engineers, would
just patch out the check.  If we make the default be something like
"block for 5 minutes" and then log a message, we won't completely
break a user who is trying to log in to a VM, but it will be obvious,
both from the delay and from the kern.crit log message, that there is
a potential problem here that a system administrator needs to worry
about.
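(A userspace sketch of the policy being described -- hypothetical
names throughout, with Python standing in for kernel code that would
live in drivers/char/random.c: block until the pool has its ~128 bits
of credited entropy or the deadline passes, then warn loudly and let
the read succeed anyway:)

```python
# Userspace sketch of the proposed /dev/urandom boot-time policy --
# all names are hypothetical stand-ins.
import os
import time

URANDOM_BLOCK_TIMEOUT = 5 * 60    # default "block for 5 minutes"; a
                                  # boot-line parameter would tune this

class Pool:
    """Stand-in for the nonblocking pool: tracks credited entropy."""
    def __init__(self):
        self.entropy_bits = 0
    def credit(self, bits):
        self.entropy_bits += bits
    def initialized(self):
        return self.entropy_bits >= 128   # proposed seeding threshold
    def extract(self, nbytes):
        return os.urandom(nbytes)         # placeholder extractor

def read_urandom(pool, nbytes, timeout=URANDOM_BLOCK_TIMEOUT, log=None):
    deadline = time.monotonic() + timeout
    while not pool.initialized() and time.monotonic() < deadline:
        time.sleep(0.01)                  # kernel would sleep on a waitqueue
    if not pool.initialized() and log is not None:
        # Don't break userspace: warn at high priority, succeed anyway.
        log.append("urandom read before pool initialized")
    return pool.extract(nbytes)
```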

- Ted




Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Mon, Nov 11, 2013 at 11:24:44PM -0500, Theodore Ts'o wrote:
> My apologies for not being able to get to this patch series before the
> patch window opened --- this week has been crazy.  None of the changes
> seem to be especially critical, and a number of the patches don't
> apply cleanly to the random.git tree (some of the spelling fixes have
> already been fixed, for example), which has patches queued up for the
> upcoming merge window.  So I'm going to defer applying these patches
> for 3.13, and I'd ask you if you'd be willing to rebase these patches
> against the random.git tree, so they can be pulled in after the merge
> window closes.

Ah, I hadn't spotted the random.git tree!  Looks like 'dev' at
v3.11-20-g392a546 is the tip -- is that right?  Rebasing now, and I'll
follow up with the result.

I agree that none of the changes are especially critical.  There is
one bugfix, and the rest are code cleanups.

Beyond these easy cleanups, I have a couple of patches queued up (just
written yesterday, not quite finished) to make /dev/urandom block at
boot until it has enough entropy, as the "Mining your P's and Q's"
paper recommended and people have occasionally discussed since then.
Those patches were definitely for after 3.13 anyway, and I'll send
them when they're ready.  I see some notifications and warnings in
this direction in the random.git tree, which is great.

Cheers,
Greg


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Thu, Nov 07, 2013 at 06:57:25PM -0500, Greg Price wrote:
> 
> I recently read through the random number generator's code.
> This series has fixes for some minor things I spotted.
> 
> Four of the patches touch comments only.  Four simplify code without
> changing its behavior (total diffstat: 35 insertions, 73 deletions),
> and one is a trivial signedness fix.  Two patches change the locking
> and lockless concurrency control in account(), including one bugfix.
> The bug is related to the one Jiri found and fixed in May.

Hi Greg,

My apologies for not being able to get to this patch series before the
patch window opened --- this week has been crazy.  None of the changes
seem to be especially critical, and a number of the patches don't
apply cleanly to the random.git tree (some of the spelling fixes have
already been fixed, for example), which has patches queued up for the
upcoming merge window.  So I'm going to defer applying these patches
for 3.13, and I'd ask you if you'd be willing to rebase these patches
against the random.git tree, so they can be pulled in after the merge
window closes.

Thanks,

- Ted


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Thu, Nov 07, 2013 at 06:57:25PM -0500, Greg Price wrote:
 
 I recently read through the random number generator's code.
 This series has fixes for some minor things I spotted.
 
 Four of the patches touch comments only.  Four simplify code without
 changing its behavior (total diffstat: 35 insertions, 73 deletions),
 and one is a trivial signedness fix.  Two patches change the locking
 and lockless concurrency control in account(), including one bugfix.
 The bug is related to the one Jiri found and fixed in May.

Hi Greg,

My apologies for not being able to get to this patch series before the
patch window opened --- this week has been crazy.  None of the changes
seem to be especially critical, and a number of the patches don't
apply cleanly to the random.git tree (some of the spelling fixes have
already been fixed, for example), which has patches queued up for the
upcoming merge window.  So I'm going to defer applying these patches
for 2.13, and I'd ask you if you'd be willing to rebase these patches
against the random.git tree, so they can be pulled in after the merge
window closes.

Thanks,

- Ted
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Mon, Nov 11, 2013 at 11:24:44PM -0500, Theodore Ts'o wrote:
 My apologies for not being able to get to this patch series before the
 patch window opened --- this week has been crazy.  None of the changes
 seem to be especially critical, and a number of the patches don't
 apply cleanly to the random.git tree (some of the spelling fixes have
 already been fixed, for example), which has patches queued up for the
 upcoming merge window.  So I'm going to defer applying these patches
 for 2.13, and I'd ask you if you'd be willing to rebase these patches
 against the random.git tree, so they can be pulled in after the merge
 window closes.

Ah, I hadn't spotted the random.git tree!  Looks like 'dev' at
v3.11-20-g392a546 is the tip -- is that right?  Rebasing now, and I'll
follow up with the result.

I agree that none of the changes are especially critical.  There is
one bugfix, and the rest are code cleanups.

Beyond these easy cleanups, I have a couple of patches queued up (just
written yesterday, not quite finished) to make /dev/urandom block at
boot until it has enough entropy, as the Mining your P's and Q's
paper recommended and people have occasionally discussed since then.
Those patches were definitely for after 3.13 anyway, and I'll send
them when they're ready.  I see some notifications and warnings in
this direction in the random.git tree, which is great.

Cheers,
Greg
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Tue, Nov 12, 2013 at 05:40:09PM -0500, Greg Price wrote:
 
 Beyond these easy cleanups, I have a couple of patches queued up (just
 written yesterday, not quite finished) to make /dev/urandom block at
 boot until it has enough entropy, as the Mining your P's and Q's
 paper recommended and people have occasionally discussed since then.
 Those patches were definitely for after 3.13 anyway, and I'll send
 them when they're ready.  I see some notifications and warnings in
 this direction in the random.git tree, which is great.

One of the things I've been thinking about with respect to making
/dev/urandom block is being able to configure (via a module parameter
which could be specified on the boot command line) which allows us to
set a limit for how long /dev/urandom will block after which we log a
high priority message that there was an attempt to read from
/dev/urandom which couldn't be satisified, and then allowing the
/dev/urandom read to succed.

The basic idea is that we don't want to break systems, but we do want
to gently coerce people to do the right thing.  Otherwise, I'm worried
that distros, or embedded/mobile/consume electronics engineers would
just patch out the check.  If we make the default be something like
block for 5 minutes, and then log a message, we won't completely
break a user who is trying to login to a VM, but it will be obvious,
both from the delay and from the kern.crit log message, that there is
a potential problem here that a system administrator needs to worry
about.

- Ted


--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread H. Peter Anvin
On 11/12/2013 07:32 PM, Theodore Ts'o wrote:
 On Tue, Nov 12, 2013 at 05:40:09PM -0500, Greg Price wrote:

 Beyond these easy cleanups, I have a couple of patches queued up (just
 written yesterday, not quite finished) to make /dev/urandom block at
 boot until it has enough entropy, as the Mining your P's and Q's
 paper recommended and people have occasionally discussed since then.
 Those patches were definitely for after 3.13 anyway, and I'll send
 them when they're ready.  I see some notifications and warnings in
 this direction in the random.git tree, which is great.
 
 One of the things I've been thinking about with respect to making
 /dev/urandom block is being able to configure (via a module parameter
 which could be specified on the boot command line) which allows us to
 set a limit for how long /dev/urandom will block after which we log a
 high priority message that there was an attempt to read from
 /dev/urandom which couldn't be satisified, and then allowing the
 /dev/urandom read to succed.
 
 The basic idea is that we don't want to break systems, but we do want
 to gently coerce people to do the right thing.  Otherwise, I'm worried
 that distros, or embedded/mobile/consume electronics engineers would
 just patch out the check.  If we make the default be something like
 block for 5 minutes, and then log a message, we won't completely
 break a user who is trying to login to a VM, but it will be obvious,
 both from the delay and from the kern.crit log message, that there is
 a potential problem here that a system administrator needs to worry
 about.
 

One thing, too, if we are talking about anything other than
boot-time-only blocking: going from a nonblocking to a blocking
condition means being able to accept a short read, and right now *many*
users of /dev/urandom are not ready to accept a short read.

-hpa


--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 10:32:05PM -0500, Theodore Ts'o wrote:
 One of the things I've been thinking about with respect to making
 /dev/urandom block is being able to configure (via a module parameter
 which could be specified on the boot command line) which allows us to
 set a limit for how long /dev/urandom will block after which we log a
 high priority message that there was an attempt to read from
 /dev/urandom which couldn't be satisified, and then allowing the
 /dev/urandom read to succed.
 
 The basic idea is that we don't want to break systems, but we do want
 to gently coerce people to do the right thing.  Otherwise, I'm worried
 that distros, or embedded/mobile/consume electronics engineers would
 just patch out the check.

That's a good idea.  I've worried about the same thing, but hadn't
thought of that solution.

Greg
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 08:02:09PM -0800, H. Peter Anvin wrote:
 One thing, too, if we are talking about anything other than
 boot-time-only blocking: going from a nonblocking to a blocking
 condition means being able to accept a short read, and right now *many*
 users of /dev/urandom are not ready to accept a short read.

I'm thinking only of boot-time blocking.  The idea is that once
/dev/urandom is seeded with, say, 128 bits of min-entropy in the
absolute, information-theoretic sense, it can produce an infinite
supply (or something like 2^128 bits, which amounts to the same thing)
of bits that can't be distinguished from random, short of breaking or
brute-forcing the crypto.  So once it's seeded, it's good forever.

We don't even strictly need to keep adding more entropy once it's
seeded, but it's good because (a) hey, it's cheap, (b) entropy
estimation is hard, and maybe in some situations we're too optimistic
and think we're well seeded before we really are, (c) some
cryptographers like to imagine having a PRNG recover from an attacker
learning its internal state by using fresh entropy.  Other
cryptographers think (c) is a little silly because an attacker that
can do that can probably keep doing it, or take over the machine
entirely, but it's not inconceivable, and there's (a) and (b).  So we
keep adding entropy when we have it, but if we don't have new entropy
for a long time there's no need to start blocking again.

Cheers,
Greg
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread H. Peter Anvin
On 11/12/2013 08:37 PM, Greg Price wrote:
 
 I'm thinking only of boot-time blocking.  The idea is that once
 /dev/urandom is seeded with, say, 128 bits of min-entropy in the
 absolute, information-theoretic sense, it can produce an infinite
 supply (or something like 2^128 bits, which amounts to the same thing)
 of bits that can't be distinguished from random, short of breaking or
 brute-forcing the crypto.  So once it's seeded, it's good forever.
 

And, pray tell, how will you know that you have done that?

Even the best entropy estimation algorithms are nothing but estimations,
and min-entropy is the hardest form of entropy to estimate.

-hpa

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Tue, Nov 12, 2013 at 08:51:18PM -0800, H. Peter Anvin wrote:
 On 11/12/2013 08:37 PM, Greg Price wrote:
  I'm thinking only of boot-time blocking.  The idea is that once
  /dev/urandom is seeded with, say, 128 bits of min-entropy in the
  absolute, information-theoretic sense, it can produce an infinite
  supply (or something like 2^128 bits, which amounts to the same thing)
  of bits that can't be distinguished from random, short of breaking or
  brute-forcing the crypto.  So once it's seeded, it's good forever.
 
 And, pray tell, how will you know that you have done that?
 
 Even the best entropy estimation algorithms are nothing but estimations,

Indeed.  We do our best, but we can't be sure we have as much entropy
as we think.

The status quo here is that /dev/urandom will cheerfully answer
requests even when, by our own estimates, we only have a small amount
of entropy and anything we return will be guessable.  What Ted and I
are discussing in this thread is to have it wait until, as best we can
estimate, it has enough entropy to give an unpredictable answer.  The
status quo has the same effect as an arbitrarily too-optimistic
estimate.

The key point when it comes to the question of going *back* to
blocking is that even if the estimates are bad and in reality the
answer is guessable, it won't get any *more* guessable in the future.
If we think we have 128 bits of input min-entropy but we only have
(say) 32, meaning some states we could be in are as likely as 2^(-32),
then once an attacker sees a handful of bytes of output (*) they can
check a guess at our input and predict all our other output with as
few as 2^32 guesses, depending on the distribution.  If the attacker
sees a gigabyte or a petabyte of output, they have exactly the same
ability.  So there's no good reason to stop.

On the other hand, because our estimates may be wrong it certainly
make sense to keep feeding new entropy into the pool.  Maybe a later
seeding will have enough real entropy to make us unpredictable from
then on.  We could also use bigger and bigger reseeds, as a hedge
against our estimates being systematically too low in some
environment.
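(Schematically, the keep-feeding idea is just repeated mixing.  This
Python sketch uses a plain hash in place of the kernel's actual mixing
function, which it is not.)

```python
import hashlib

def reseed(pool: bytes, new_entropy: bytes) -> bytes:
    """Fold fresh entropy into the pool state (illustrative, not the
    kernel's mixing function)."""
    return hashlib.sha256(pool + new_entropy).digest()

pool = b"\x00" * 32   # assume the initial seed turned out to be weak
pool = reseed(pool, b"interrupt timings ...")
pool = reseed(pool, b"device attach times ...")
# Once any single reseed carries enough real entropy, the pool state is
# unpredictable from then on, even if every earlier seed was guessable.
assert pool != b"\x00" * 32
```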

Does that make sense?  Do you have other ideas for guarding against
the case where our estimates are low?

Greg


(*) Math note: the attacker morally needs only 32 bits.  They actually
need a little more than that, because some of the (at least) 2^32
possible states probably correspond to the same first 32 bits of
output.  By standard balls-in-bins bounds, if the generator is good
then with high probability no 32-bit output prefix is shared by more
than about ln(2^32) = 22 of the states, so 32 bits of output already
narrows the attacker's search to a couple dozen candidates.  Making
all 2^32 outputs pairwise distinct takes longer: by the birthday
bound, a collision in the first 64 bits of output is still roughly a
coin flip, and only around 80 bits = 10 bytes of output make all the
outputs distinct with overwhelming probability.  The attacker doesn't
need full distinctness, though: each extra byte of output discards
surviving wrong candidates, so a dozen or so bytes is ample.  If there
are more than 2^32 possible states because the min-entropy is 32 but
some inputs are less likely, then the attacker needs even less output
to be able to confirm the most likely guesses.
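(The ln(2^32) = 22 figure follows from a union bound over the 2^32
possible 32-bit prefixes combined with the binomial tail estimate
P(Bin(n, 1/n) >= k) <= (e/k)^k.  A few lines of Python show how small
the resulting bound is.)

```python
import math

def max_load_bound(n: int, k: int) -> float:
    # Union bound over n bins, times the binomial tail estimate
    # P(Bin(n, 1/n) >= k) <= (e/k)**k, valid for k > e.
    return n * (math.e / k) ** k

n = 2 ** 32   # possible internal states = bins and balls
k = 22        # about ln(2**32) = 22.18
bound = max_load_bound(n, k)
print(f"P(some 32-bit prefix shared by >= {k} states) <= {bound:.2e}")
```

The bound comes out around 10^-11, so "no more than 22 or so" holds
except with negligible probability.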
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Theodore Ts'o
On Tue, Nov 12, 2013 at 11:23:03PM -0500, Greg Price wrote:
  The basic idea is that we don't want to break systems, but we do want
  to gently coerce people to do the right thing.  Otherwise, I'm worried
  that distros, or embedded/mobile/consumer electronics engineers would
  just patch out the check.
 
 That's a good idea.  I've worried about the same thing, but hadn't
 thought of that solution.

I think the key is that we set a default of requiring 128 bits, or 5
minutes, with boot-line options to change the defaults.  BTW, with the
changes that are scheduled for 3.13, this shouldn't be a problem on
most desktops.  From my T430s laptop:

...
[    4.446047] random: nonblocking pool is initialized
[    4.542119] usb 3-1.6: New USB device found, idVendor=04f2, idProduct=b2da
[    4.542124] usb 3-1.6: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[    4.542128] usb 3-1.6: Product: Integrated Camera
[    4.542131] usb 3-1.6: Manufacturer: Chicony Electronics Co., Ltd.
[    4.575753] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[    4.653338] udevd[462]: starting version 175
...
[    6.253131] EXT4-fs (sdc3): re-mounted. Opts: (null)

So even without adding device attach times (which is on the todo list)
the /dev/urandom pool is getting an estimated 128 bits of entropy
almost two seconds *before* the root file system is remounted
read/write.

(And this is also before fixing the rc80211 minstrel code to stop
wasting about two dozen bits of entropy at startup --- it's using
get_random_bytes even though it doesn't actually need
cryptographically secure random numbers.)

This is why I've been working on improving the random driver's
efficiency in getting the urandom pool initialized as soon as possible,
as a higher priority
than adding blocking-on-boot for /dev/urandom.

 And, pray tell, how will you know that you have done that?
 
 Even the best entropy estimation algorithms are nothing but estimations,
 and min-entropy is the hardest form of entropy to estimate.

Of course it's only an estimate.  Some researchers have looked into
this and their results show that at least for x86 desktop/servers, we
appear to be conservative enough in our entropy estimation.  But
ultimately, yes, that is an issue I am concerned about.  I believe,
though, that it's a separable problem we can work on independently of
other /dev/random issues --- and I'm hoping we can get some students
to study this problem on a variety of different hardware platforms and
entropy sources.

- Ted




Re: [PATCH 00/11] random: code cleanups

2013-11-12 Thread Greg Price
On Wed, Nov 13, 2013 at 01:08:07AM -0500, Theodore Ts'o wrote:
 On Tue, Nov 12, 2013 at 11:23:03PM -0500, Greg Price wrote:
  That's a good idea.  I've worried about the same thing, but hadn't
  thought of that solution.
 
 I think the key is that we set a default of requiring 128 bits, or 5
 minutes, with boot-line options to change the defaults.  BTW, with the
 changes that are scheduled for 3.13, this shouldn't be a problem on
 most desktops.  From my T430s laptop: [...]
 
 So even without adding device attach times (which is on the todo list)
 the /dev/urandom pool is getting an estimated 128 bits of entropy
 almost two seconds *before* the root file system is remounted
 read/write.

Great!


 This is why I've been working on improving the random driver's
 efficiency in getting the urandom pool initialized as soon as possible,
 as a higher priority than adding blocking-on-boot for /dev/urandom.

Makes sense.  Blocking on boot is only sustainable anyway if it rarely
lasts past early boot.

Greg