Bug#587665: [Pkg-sysvinit-devel] Bug#587665: Safety of early boot init of /dev/random seed

2010-07-16 Thread Henrique de Moraes Holschuh
On Thu, 15 Jul 2010, Matt Mackall wrote:
   Don't bother fiddling with the pool size.
  
  We don't, but local admins often do, probably in an attempt to better handle
  bursts of entropy drainage.  So, we do want to properly support non-standard
  pool sizes in Debian if we can.
 
 Unless they're manually patching their kernel, they probably aren't
 succeeding. The pool resize ioctl was disabled ages ago. But there's
 really nothing to support here: even the largest polynomial in the
 source is only 2048 bits, or 256 bytes.

Well,

cat /proc/sys/kernel/random/poolsize 
4096

And that is stock mainline 2.6.32.16 on amd64, AFAIK...
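
(Note that on 2.6 kernels that sysctl is expressed in bits, whereas 2.4 reported bytes, so the 4096 above corresponds to 512 bytes of pool. A minimal sketch, with purely illustrative variable names, of how an initscript might derive a seed size in bytes from it:)

    # Illustrative sketch: derive a seed size in bytes from the kernel's
    # reported pool size (bits on 2.6 kernels; a 2.4 kernel would report
    # bytes instead and is not handled here).
    POOLSIZE_FILE=/proc/sys/kernel/random/poolsize
    if [ -r "$POOLSIZE_FILE" ]; then
        POOLBITS=$(cat "$POOLSIZE_FILE")
    else
        POOLBITS=4096                 # conservative fallback
    fi
    SEED_BYTES=$((POOLBITS / 8))      # 4096 bits -> 512 bytes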

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh



Bug#587665: [Pkg-sysvinit-devel] Bug#587665: Safety of early boot init of /dev/random seed

2010-07-15 Thread Henrique de Moraes Holschuh
On Mon, 05 Jul 2010, Matt Mackall wrote:
   Here are our questions:
   
   1. How much data of unknown quality can we feed the random pool at boot,
      before it causes damage (i.e. what is the threshold where we violate
      the "you are not going to be any worse than you were before" rule)?
 
 There is no limit. The mixing operations are computationally reversible,
 which guarantees that no unknown degrees of freedom are clobbered when
 mixing known data.

Good.  So, whatever we do, we are never worse off than we were before we did
it, at least by design.

   2. How dangerous is it to feed the pool with stale seed data on the next
      boot (i.e. in a failure mode where we do not regenerate the seed file)?
 
 Not at all.
  
   3. What is the optimal size of the seed data based on the pool size ?
 
 1:1.

We shall try to keep it at 1:1, then.

   4. How dangerous is it to have functions that need randomness (like
      encrypted network and partitions, possibly encrypted swap with an
      ephemeral key), BEFORE initializing the random seed?
 
 Depends on the platform. For instance, if you've got an unattended boot
 off a Live CD on a machine with a predictable clock, you may get
 duplicate outputs.

I.e. it is somewhat dangerous, and we should try to avoid it by design, so
we should try to init it as early as possible.  Very well.
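
For the record, a rough sketch of what "as early as possible" could look like in
LSB initscript terms, assuming the seed file can be read from a locally mounted
filesystem; the script name and dependencies here are hypothetical, not the
actual Debian initscripts:

    ### BEGIN INIT INFO
    # Provides:          urandom-early
    # Required-Start:    $local_fs
    # Required-Stop:
    # Default-Start:     S
    # Default-Stop:      0 6
    # Short-Description: Seed the kernel random pool as early as possible
    ### END INIT INFO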

   5. Is there an optimal size for the pool?  Does the quality of the
      randomness one extracts from the pool increase or decrease with pool
      size?
 
 Don't bother fiddling with the pool size.

We don't, but local admins often do, probably in an attempt to better handle
bursts of entropy drainage.  So, we do want to properly support non-standard
pool sizes in Debian if we can.

   Basically, we need these answers to find our way regarding the following
   decisions:
   
   a) Is it better to seed the pool as early as possible and risk a larger
      time window for problem (2) above, instead of the current behaviour
      where we have a large time window where (4) above happens?
 
 Earlier is better.
 
   b) Is it worth the effort to base the seed file on the size of the pool,
  instead of just using a constant size?  If a constant size is better,
  which size would that be? 512 bytes? 4096 bytes? 16384 bytes?
 
 512 bytes is plenty.

   c) What is the maximum seed file size we can allow (maybe based on size of
  the pool) to try to avoid problem (1) above ?
 
 Anything larger than a sector is simply wasting CPU time, but is
 otherwise harmless.

Well, a filesystem block is usually 1024 bytes, and a sector is 4096 bytes
nowadays... :-)
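
To make the sizes concrete, here is a hedged sketch of the restore-and-regenerate
step being discussed, keeping the seed at 1:1 with the pool; the path and variable
names are illustrative only, not the actual Debian initscripts:

    SEEDFILE=/var/lib/urandom/random-seed    # illustrative location
    POOLBITS=$(cat /proc/sys/kernel/random/poolsize 2>/dev/null || echo 4096)
    SEED_BYTES=$((POOLBITS / 8))             # seed file sized 1:1 with the pool

    # Restore: writing to /dev/random only mixes the data in, so per answer (1)
    # this can never leave us worse off than before.
    [ -r "$SEEDFILE" ] && cat "$SEEDFILE" > /dev/random

    # Regenerate immediately, so a failure later in the boot does not lead to
    # reusing the same seed on the next boot.
    dd if=/dev/urandom of="$SEEDFILE" bs="$SEED_BYTES" count=1 2>/dev/null
    chmod 600 "$SEEDFILE"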

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh



Bug#587665: [Pkg-sysvinit-devel] Bug#587665: Safety of early boot init of /dev/random seed

2010-07-15 Thread Matt Mackall
On Thu, 2010-07-15 at 20:33 -0300, Henrique de Moraes Holschuh wrote:
 On Mon, 05 Jul 2010, Matt Mackall wrote:
Here are our questions:

1. How much data of unknown quality can we feed the random pool at boot,
   before it causes damage (i.e. what is the threshold where we violate
   the "you are not going to be any worse than you were before" rule)?
  
  There is no limit. The mixing operations are computationally reversible,
  which guarantees that no unknown degrees of freedom are clobbered when
  mixing known data.
 
 Good.  So, whatever we do, we are never worse off than we were before we did
 it, at least by design.
 
2. How dangerous is it to feed the pool with stale seed data on the next
   boot (i.e. in a failure mode where we do not regenerate the seed file)?
  
  Not at all.
   
3. What is the optimal size of the seed data based on the pool size ?
  
  1:1.
 
 We shall try to keep it at 1:1, then.
 
4. How dangerous is it to have functions that need randomness (like
   encrypted network and partitions, possibly encrypted swap with an
   ephemeral key), BEFORE initializing the random seed?
  
  Depends on the platform. For instance, if you've got an unattended boot
  off a Live CD on a machine with a predictable clock, you may get
  duplicate outputs.
 
 I.e. it is somewhat dangerous, and we should try to avoid it by design, so
 we should try to init it as early as possible.  Very well.
 
5. Is there an optimal size for the pool?  Does the quality of the
   randomness one extracts from the pool increase or decrease with pool size?
  
  Don't bother fiddling with the pool size.
 
 We don't, but local admins often do, probably in an attempt to better handle
 bursts of entropy drainage.  So, we do want to properly support non-standard
 pool sizes in Debian if we can.

Unless they're manually patching their kernel, they probably aren't
succeeding. The pool resize ioctl was disabled ages ago. But there's
really nothing to support here: even the largest polynomial in the
source is only 2048 bits, or 256 bytes.

Basically, we need these answers to find our way regarding the following
decisions:

a) Is it better to seed the pool as early as possible and risk a larger
   time window for problem (2) above, instead of the current behaviour where
   we have a large time window where (4) above happens?
  
  Earlier is better.
  
b) Is it worth the effort to base the seed file on the size of the pool,
   instead of just using a constant size?  If a constant size is better,
   which size would that be? 512 bytes? 4096 bytes? 16384 bytes?
  
  512 bytes is plenty.
 
c) What is the maximum seed file size we can allow (maybe based on size of
   the pool) to try to avoid problem (1) above?
  
  Anything larger than a sector is simply wasting CPU time, but is
  otherwise harmless.
 
 Well, a filesystem block is usually 1024 bytes, and a sector is 4096 bytes
 nowadays... :-)
 


-- 
Mathematics is the supreme nostalgia of our time.





Bug#587665: [Pkg-sysvinit-devel] Bug#587665: Safety of early boot init of /dev/random seed

2010-07-05 Thread Matt Mackall
On Sat, 2010-07-03 at 13:08 -0300, Henrique de Moraes Holschuh wrote:
 (adding Petter Reinholdtsen to CC, stupid MUA...)
 
 On Sat, 03 Jul 2010, Henrique de Moraes Holschuh wrote:
  Hello,
  
  We are trying to enhance the Debian support for /dev/random seeding at early
  boot, and we need some expert help to do it right.  Maybe some of you could
  give us some enlightenment on a few issues?
  
  Apologies in advance if I got the list of Linux kernel maintainers wrong.  I
  have also copied LKML just in case.
  
  A bit of context:  Debian tries to initialize /dev/random, by restoring the
  pool size and giving it some seed material (through a write to /dev/random)
  from saved state stored in /var.
  
  Since we store the seed data in /var, that means we only feed it to
  /dev/random relatively late in the boot sequence, after remote filesystems
  are available.  Thus, anything that needs random numbers earlier than that
  point will run with whatever the kernel managed to harness without any sort
  of userspace help (which is probably not much, especially on platforms that
  clear RAM contents at reboot, or after a cold boot).
  
  We take care of regenerating the stored seed data as soon as we use it, in
  order to avoid as much as possible the possibility of reuse of seed data.
  This means that we write the old seed data to /dev/random, and immediately
  copy poolsize bytes from /dev/urandom to the seed data file.
  
  The seed data file is also regenerated prior to shutdown.
  
  We would like to clarify some points, so as to know how safe they are on
  face of certain error modes, and also whether some of what we do is
  necessary at all.  Unfortunately, real answers require more intimate
  knowledge of the theory behind Linux' random pools than we have in the
  Debian initscripts team.
  
  Here are our questions:
  
  1. How much data of unknown quality can we feed the random pool at boot,
     before it causes damage (i.e. what is the threshold where we violate
     the "you are not going to be any worse than you were before" rule)?

There is no limit. The mixing operations are computationally reversible,
which guarantees that no unknown degrees of freedom are clobbered when
mixing known data.

  2. How dangerous is it to feed the pool with stale seed data on the next
     boot (i.e. in a failure mode where we do not regenerate the seed file)?

Not at all.
 
  3. What is the optimal size of the seed data based on the pool size ?

1:1.

  4. How dangerous is it to have functions that need randomness (like
     encrypted network and partitions, possibly encrypted swap with an
     ephemeral key), BEFORE initializing the random seed?

Depends on the platform. For instance, if you've got an unattended boot
off a Live CD on a machine with a predictable clock, you may get
duplicate outputs.

  5. Is there an optimal size for the pool?  Does the quality of the
     randomness one extracts from the pool increase or decrease with pool size?

Don't bother fiddling with the pool size.

  Basically, we need these answers to find our way regarding the following
  decisions:
  
  a) Is it better to seed the pool as early as possible and risk a larger time
 window for problem (2) above, instead of the current behaviour where we
 have a large time window where (4) above happens?

Earlier is better.

  b) Is it worth the effort to base the seed file on the size of the pool,
 instead of just using a constant size?  If a constant size is better,
 which size would that be? 512 bytes? 4096 bytes? 16384 bytes?

512 bytes is plenty.
 
  c) What is the maximum seed file size we can allow (maybe based on size of
 the pool) to try to avoid problem (1) above ?

Anything larger than a sector is simply wasting CPU time, but is
otherwise harmless.

-- 
Mathematics is the supreme nostalgia of our time.





Bug#587665: [Pkg-sysvinit-devel] Bug#587665: Safety of early boot init of /dev/random seed

2010-07-03 Thread Henrique de Moraes Holschuh
(adding Petter Reinholdtsen to CC, stupid MUA...)

On Sat, 03 Jul 2010, Henrique de Moraes Holschuh wrote:
 Hello,
 
 We are trying to enhance the Debian support for /dev/random seeding at early
 boot, and we need some expert help to do it right.  Maybe some of you could
 give us some enlightenment on a few issues?
 
 Apologies in advance if I got the list of Linux kernel maintainers wrong.  I
 have also copied LKML just in case.
 
 A bit of context:  Debian tries to initialize /dev/random, by restoring the
 pool size and giving it some seed material (through a write to /dev/random)
 from saved state stored in /var.
 
 Since we store the seed data in /var, that means we only feed it to
 /dev/random relatively late in the boot sequence, after remote filesystems
 are available.  Thus, anything that needs random numbers earlier than that
 point will run with whatever the kernel managed to harness without any sort
 of userspace help (which is probably not much, especially on platforms that
 clear RAM contents at reboot, or after a cold boot).
 
 We take care of regenerating the stored seed data as soon as we use it, in
 order to avoid as much as possible the possibility of reuse of seed data.
 This means that we write the old seed data to /dev/random, and immediately
 copy poolsize bytes from /dev/urandom to the seed data file.
 
 The seed data file is also regenerated prior to shutdown.
 
 We would like to clarify some points, so as to know how safe they are on
 face of certain error modes, and also whether some of what we do is
 necessary at all.  Unfortunately, real answers require more intimate
 knowledge of the theory behind Linux' random pools than we have in the
 Debian initscripts team.
 
 Here are our questions:
 
 1. How much data of unknown quality can we feed the random pool at boot,
    before it causes damage (i.e. what is the threshold where we violate
    the "you are not going to be any worse than you were before" rule)?
 
 2. How dangerous is it to feed the pool with stale seed data on the next
    boot (i.e. in a failure mode where we do not regenerate the seed file)?
 
 3. What is the optimal size of the seed data based on the pool size ?
 
 4. How dangerous is it to have functions that need randomness (like
    encrypted network and partitions, possibly encrypted swap with an
    ephemeral key), BEFORE initializing the random seed?
 
 5. Is there an optimal size for the pool?  Does the quality of the randomness
one extracts from the pool increase or decrease with pool size?
 
 Basically, we need these answers to find our way regarding the following
 decisions:
 
 a) Is it better to seed the pool as early as possible and risk a larger time
window for problem (2) above, instead of the current behaviour where we
have a large time window where (4) above happens?
 
 b) Is it worth the effort to base the seed file on the size of the pool,
instead of just using a constant size?  If a constant size is better,
which size would that be? 512 bytes? 4096 bytes? 16384 bytes?
 
 c) What is the maximum seed file size we can allow (maybe based on size of
the pool) to try to avoid problem (1) above ?
 
 We would be very grateful if you could help us find good answers to the
 questions above.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh


