Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-11 Thread Simon Josefsson
[EMAIL PROTECTED] writes:

 Simon Josefsson [EMAIL PROTECTED] wrote:
 The Linux RNG has some issues, see http://eprint.iacr.org/2006/086.
 Libgcrypt's estimates of the quality of the /dev/*random data are
 pessimistic.

 That paper deserves a longer reply, but even granting every claim it
 makes, the only things it complains about are forward secrecy (is it
 feasible to reproduce earlier /dev/*random outputs after capturing the
 internal state of the pool from kernel memory?) and entropy estimation
 (is there really as much seed entropy as /dev/random estimates?).

 The latter is only relevant to /dev/random.

Why's that?  If the entropy estimation is wrong, you may have too
little or no entropy.  /dev/urandom can't give you more entropy than
/dev/random in that case.

 But more importantly, neither complaint (nor any other claim anyone
 has ever made) questions the ability of /dev/*random's output
 compression function to produce N totally random bits if it truly has
 N bits of seed entropy.

 That may be a bit subtle, so let me state it clearly: If there are
 actually N bits of seed entropy in the /dev/urandom pool (even if
 the kernel entropy estimator has computed a different incorrect
 value), then a M-bit read from /dev/urandom will produce a uniformly
 distributed min(N,M)-bit value.  (Minus some negligible epsilon due to
 hash compression.)

 In either case, if you want K bits worth of seed entropy for your PRNG,
 it is completely and utterly pointless to read more than K bits from
 /dev/urandom.  If N >= K, you will get K bits of entropy with a K-bit
 read.  If N < K, you will not get more than N bits of entropy no
 matter how much you read.

 (If you're reading from /dev/random, then you are using the kernel's
 entropy estimator and you can apply a derating factor to it.  But
 that doesn't apply to reads from /dev/urandom.)

Right, although this is irrelevant if the seed doesn't contain
sufficient entropy.  If the attacker can guess (or exhaustively try) all
the N bits of seed entropy, using it as a seed for a PRNG won't improve
things.

However, my main concern with Linux's /dev/urandom is that it is too
slow, not that the entropy estimate may be wrong.  I don't see why it
couldn't be a fast PRNG with good properties (forward secrecy) seeded by
a continuously refreshed strong seed, and that reading GB's of data from
/dev/urandom would not deplete the /dev/random entropy pool.  This would
help 'dd if=/dev/urandom of=/dev/hda' as well.

 However, I think libgcrypt's design here doesn't fit GnuTLS's needs
 well.  The problem is that libgcrypt needs to seed its internal
 randomness pool before it begins.  This is why it is reading 3 kB of
 data.  One solution is for the application to have a seed file, but I would
 prefer if the crypto library that GnuTLS uses would take care of that.
 Changing each and every application that uses GnuTLS to use a seeds file
 is not a good design, IMHO.

 The problem is that asking /dev/urandom for more seed bits than the
 security rating of your PRNG is, to use Linus's phrase, "ugly and stupid."
 And assuming that your PRNG is more than 256 bits secure is also stupid,
 because computational complexity theory is not sufficiently developed to
 make such assertions.  (And most cryptographers will tell you that even
 256 bits is really pushing it.  We understand 64-bit security because
 we've actually done 2^64-step computations.  We're pretty comfortable
 extrapolating to 128 bits; 192 bits involves increasing amounts of
 handwaving, and 256 bits is frankly wishful thinking.)

 So libgcrypt's seeding is ugly and stupid and is in desperate need of
 fixing.  Reading more bits and distilling them down only works on physical
 entropy sources, and /dev/urandom has already done that.  Doing it again
 is a complete and total waste of time.  If there are only 64 bits of
 entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
 8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.

 Like openssl, it should read 256 bits from /dev/urandom and stop.
 There is zero benefit to asking for more.

I'm concerned that the approach could be weak -- the quality of data
from /dev/urandom can be low if your system was just rebooted, and no
entropy has been gathered yet.  This is especially true for embedded
systems.  As it happens, GnuTLS could be involved in sending email early
in the boot process, so this is a practical scenario.

A seeds file would help here, and this has been Werner's suggestion.
If certtool used a libgcrypt seeds file, I believe it would have solved
your problem as well.
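
For reference, libgcrypt's documented seed-file mechanism looks roughly
like the sketch below (the file path is only an example and error
handling is kept minimal); an application registers the file before the
random pool is first used and writes it back before exiting:

#include <stdio.h>
#include <gcrypt.h>

int
main (void)
{
  if (!gcry_check_version (GCRYPT_VERSION))
    {
      fprintf (stderr, "libgcrypt version mismatch\n");
      return 1;
    }

  /* Register the seed file before the random pool is used for the
     first time; the path here is purely illustrative. */
  gcry_control (GCRYCTL_SET_RANDOM_SEED_FILE, "/var/lib/example/random_seed");
  gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);

  /* ... gcry_randomize (), key generation, etc. ... */

  /* Save the pool state so the next start is seeded from disk instead
     of draining /dev/*random again. */
  gcry_control (GCRYCTL_UPDATE_RANDOM_SEED_FILE);
  return 0;
}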

 (If you want confirmation, please ask someone you trust.  David Wagner
 at Berkeley and Ian Goldberg at Waterloo are both pretty approachable.)

I've been in a few discussions with David about /dev/random, for example
http://thread.gmane.org/gmane.comp.encryption.general/11397/focus=11456,
and I haven't noticed that we had any different opinions about this.

I don't think 

Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-11 Thread sacrificial-spam-address
Simon Josefsson [EMAIL PROTECTED] wrote:
 The Linux RNG has some issues, see http://eprint.iacr.org/2006/086.
 Libgcrypt's estimates of the quality of the /dev/*random data are
 pessimistic.

That paper deserves a longer reply, but even granting every claim it
makes, the only things it complains about are forward secrecy (is it
feasible to reproduce earlier /dev/*random outputs after capturing the
internal state of the pool from kernel memory?) and entropy estimation
(is there really as much seed entropy as /dev/random estimates?).

The latter is only relevant to /dev/random.  But more importantly,
neither complaint (nor any other claim anyone has ever made) questions
the ability of /dev/*random's output compression function to produce N
totally random bits if it truly has N bits of seed entropy.

That may be a bit subtle, so let me state it clearly: If there are
actually N bits of seed entropy in the /dev/urandom pool (even if
the kernel entropy estimator has computed a different incorrect
value), then a M-bit read from /dev/urandom will produce a uniformly
distributed min(N,M)-bit value.  (Minus some negligible epsilon due to
hash compression.)

In either case, if you want K bits worth of seed entropy for your PRNG,
it is completely and utterly pointless to read more than K bits from
/dev/urandom.  If N >= K, you will get K bits of entropy with a K-bit
read.  If N < K, you will not get more than N bits of entropy no
matter how much you read.

(If you're reading from /dev/random, then you are using the kernel's
entropy estimator and you can apply a derating factor to it.  But
that doesn't apply to reads from /dev/urandom.)

 However, I think libgcrypt's design here doesn't fit GnuTLS's needs
 well.  The problem is that libgcrypt needs to seed its internal
 randomness pool before it begins.  This is why it is reading 3 kB of
 data.  One solution is for the application to have a seed file, but I would
 prefer if the crypto library that GnuTLS uses would take care of that.
 Changing each and every application that uses GnuTLS to use a seeds file
 is not a good design, IMHO.

The problem is that asking /dev/urandom for more seed bits than the
security rating of your PRNG is, to use Linus's phrase, "ugly and stupid."
And assuming that your PRNG is more than 256 bits secure is also stupid,
because computational complexity theory is not sufficiently developed to
make such assertions.  (And most cryptographers will tell you that even
256 bits is really pushing it.  We understand 64-bit security because
we've actually done 2^64-step computations.  We're pretty comfortable
extrapolating to 128 bits; 192 bits involves increasing amounts of
handwaving, and 256 bits is frankly wishful thinking.)

So libgcrypt's seeding is ugly and stupid and is in desperate need of
fixing.  Reading more bits and distilling them down only works on physical
entropy sources, and /dev/urandom has already done that.  Doing it again
is a complete and total waste of time.  If there are only 64 bits of
entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.

Like openssl, it should read 256 bits from /dev/urandom and stop.
There is zero benefit to asking for more.
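
To make that concrete, here is a minimal C sketch (names and error
handling are illustrative only) of gathering a 256-bit seed with a
single bounded read from /dev/urandom and then stopping:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Fill buf with len bytes from /dev/urandom, looping over short reads. */
static int
get_seed (unsigned char *buf, size_t len)
{
  int fd = open ("/dev/urandom", O_RDONLY);
  size_t got = 0;

  if (fd < 0)
    return -1;
  while (got < len)
    {
      ssize_t n = read (fd, buf + got, len - got);
      if (n <= 0)          /* treat EOF/errors as failure */
        {
          close (fd);
          return -1;
        }
      got += (size_t) n;
    }
  close (fd);
  return 0;
}

int
main (void)
{
  unsigned char seed[32];  /* 256 bits: enough for any practical PRNG */

  if (get_seed (seed, sizeof seed) != 0)
    {
      perror ("/dev/urandom");
      return 1;
    }
  /* hand `seed` to the cryptographic PRNG here; no further reads needed */
  return 0;
}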

(If you want confirmation, please ask someone you trust.  David Wagner
at Berkeley and Ian Goldberg at Waterloo are both pretty approachable.)

 Possibly gnutls should implement an internal PRNG, since neither
 libgcrypt nor the kernel's RNG APIs appear to be sufficient.  Patches are
 always welcome (if you transfer the copyright on it to the FSF).

Fair enough.  Are you saying that you prefer a patch to gnutls rather than
one to libgcrypt?






Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-11 Thread sacrificial-spam-address
Simon Josefsson [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] writes:
 That paper deserves a longer reply, but even granting every claim it
 makes, the only things it complains about are forward secrecy (is it
 feasible to reproduce earlier /dev/*random outputs after capturing the
 internal state of the pool from kernel memory?) and entropy estimation
 (is there really as much seed entropy as /dev/random estimates?).

 The latter is only relevant to /dev/random.

 Why's that?  If the entropy estimation is wrong, you may have too
 little or no entropy.  /dev/urandom can't give you more entropy than
 /dev/random in that case.

The quality or lack thereof of the kernel's entropy estimation is relevant
only to /dev/random because /dev/urandom's operation doesn't depend on
the entropy estimates.  If you're using /dev/urandom, it doesn't matter
if the kernel's entropy estimation is right, wrong, or commented out of
the source code.

 In either case, if you want K bits worth of seed entropy for your PRNG,
 it is completely and utterly pointless to read more than K bits from
 /dev/urandom.  If N >= K, you will get K bits of entropy with a K-bit
 read.  If N < K, you will not get more than N bits of entropy no
 matter how much you read.

 Right, although this is irrelevant if the seed doesn't contain
 sufficient entropy.  If the attacker can guess (or exhaustively try) all
 the N bits of seed entropy, using it as a seed for a PRNG won't improve
 things.

Um... we appear to be talking past each other.  This is not irrelevant
if the seed doesn't contain sufficient entropy; this is ABOUT how much
entropy the seed contains.  "If the seed doesn't contain sufficient
entropy" is the N < K case I wrote about.

Can you clarify what you mean?  How can a description of what happens in
a particular situation be irrelevant if that situation happens?


 However, my main concern with Linux's /dev/urandom is that it is too
 slow, not that the entropy estimate may be wrong.  I don't see why it
 couldn't be a fast PRNG with good properties (forward secrecy) seeded by
 a continuously refreshed strong seed, so that reading GBs of data from
 /dev/urandom would not deplete the /dev/random entropy pool.  This would
 help 'dd if=/dev/urandom of=/dev/hda' as well.

Being slow is/was one of Ted Ts'o's original design goals.  He wanted
to do in kernel space ONLY what is not feasible to do in user space.
The fast PRNG you propose is trivial to do in user space, where it
can be seeded from /dev/urandom.
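
Roughly, the user-space pattern would look like the sketch below: one
small read from /dev/urandom for the seed, then all bulk output
generated locally.  The xorshift128+ step is only a stand-in to show
the structure; a real implementation would key a cryptographic stream
cipher (e.g. ChaCha20 or AES in CTR mode) with the seed instead:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

static uint64_t s[2];   /* generator state, seeded once from the kernel */

static int
seed_from_urandom (void)
{
  int fd = open ("/dev/urandom", O_RDONLY);
  ssize_t n;

  if (fd < 0)
    return -1;
  n = read (fd, s, sizeof s);   /* one small read is all that is needed */
  close (fd);
  /* also reject the all-zero state, which xorshift cannot leave */
  return (n == (ssize_t) sizeof s && (s[0] | s[1])) ? 0 : -1;
}

static uint64_t
next (void)             /* xorshift128+ step; NOT cryptographic */
{
  uint64_t x = s[0], y = s[1];
  s[0] = y;
  x ^= x << 23;
  s[1] = x ^ y ^ (x >> 17) ^ (y >> 26);
  return s[1] + y;
}

int
main (void)
{
  int i;

  if (seed_from_urandom () != 0)
    return 1;
  /* bulk output now comes from user space, not the kernel pool */
  for (i = 0; i < 4; i++)
    printf ("%016llx\n", (unsigned long long) next ());
  return 0;
}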

An important design goal of /dev/random is that it is always available;
note there is no CONFIG_RANDOM option, even with CONFIG_EMBEDDED.
This requires that the code be kept small.  Additional features
conflict with this goal.

Some people have suggested a /dev/frandom (fast random) in the kernel
for the application you're talking about, but AFAIK the question "why
should it be in the kernel?" has never been adequately answered.

 So libgcrypt's seeding is ugly and stupid and is in desperate need of
 fixing.  Reading more bits and distilling them down only works on physical
 entropy sources, and /dev/urandom has already done that.  Doing it again
 is a complete and total waste of time.  If there are only 64 bits of
 entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
 8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.

 Like openssl, it should read 256 bits from /dev/urandom and stop.
 There is zero benefit to asking for more.


 I'm concerned that the approach could be weak -- the quality of data
 from /dev/urandom can be low if your system was just rebooted, and no
 entropy has been gathered yet.  This is especially true for embedded
 systems.  As it happens, GnuTLS could be involved in sending email early
 in the boot process, so this is a practical scenario.

Again, we appear to be talking past each other.  What part of this is
weak?  libgcrypt already seeds itself from /dev/urandom.  At any given
time (such as at boot time), one of two things is true:
1) The kernel contains sufficient entropy (again, I'm not talking about
   its fallible estimates, but the unknowable truth) to satisfy the
   desired K-bit security level, or
2) It does not. 

As long as you are not willing to wait, and thus are using /dev/urandom,
reading more than K bits is pointless.  In case 1, you will get the K bits
you want.  In case 2, you will get as much entropy as there is to be had.
Reading more bytes won't get you the tiniest shred of additional entropy.

If you're going to open and read from /dev/urandom, you should stop after
reading 32 bytes.  There is NEVER a good reason to read more when seeding
a cryptographic PRNG.

Reading more bytes from /dev/urandom is just loudly advertising one's
cluelessness; it is exactly as stupid as attaching a huge spoiler and
racing stripes to a Honda Civic.

 A seeds file would help here, and has been the suggestion from Werner.
 If certtool used a libgcrypt seeds file, I believe it would have 

Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-11 Thread Simon Josefsson
[EMAIL PROTECTED] writes:

 Simon Josefsson [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] writes:
 That paper deserves a longer reply, but even granting every claim it
 makes, the only things it complains about are forward secrecy (is it
 feasible to reproduce earlier /dev/*random outputs after capturing the
 internal state of the pool from kernel memory?) and entropy estimation
 (is there really as much seed entropy as /dev/random estimates?).

 The latter is only relevant to /dev/random.

 Why's that?  If the entropy estimation is wrong, you may have too
 little or no entropy.  /dev/urandom can't give you more entropy than
 /dev/random in that case.

 The quality or lack thereof of the kernel's entropy estimation is relevant
 only to /dev/random because /dev/urandom's operation doesn't depend on
 the entropy estimates.  If you're using /dev/urandom, it doesn't matter
 if the kernel's entropy estimation is right, wrong, or commented out of
 the source code.

My point is that I believe that the quality of /dev/urandom depends on
the quality of /dev/random.  If you have found a problem in /dev/random,
I believe it would affect /dev/urandom as well.

 However, my main concern with Linux's /dev/urandom is that it is too
 slow, not that the entropy estimate may be wrong.  I don't see why it
 couldn't be a fast PRNG with good properties (forward secrecy) seeded by
 a continuously refreshed strong seed, so that reading GBs of data from
 /dev/urandom would not deplete the /dev/random entropy pool.  This would
 help 'dd if=/dev/urandom of=/dev/hda' as well.

 Being slow is/was one of Ted Ts'o's original design goals.  He wanted
 to do in kernel space ONLY what is not feasible to do in user space.
 The fast PRNG you propose is trivial to do in user space, where it
 can be seeded from /dev/urandom.

I don't think a PRNG seeded from /dev/urandom would be a good idea.  You
should seed a PRNG from /dev/random to make sure you get sufficient
entropy into the PRNG seed.

 An important design goal of /dev/random is that it is always available;
 note there is no CONFIG_RANDOM option, even with CONFIG_EMBEDDED.
 This requires that the code be kept small.  Additional features
 conflict with this goal.

 Some people have suggested a /dev/frandom (fast random) in the kernel
 for the application you're talking about, but AFAIK the question "why
 should it be in the kernel?" has never been adequately answered.

So why does the kernel implement /dev/urandom?  Removing that code would
simplify the kernel further.  Obviously whatever came out of the
original design goals no longer meets today's needs.

 So libgcrypt's seeding is ugly and stupid and is in desperate need of
 fixing.  Reading more bits and distilling them down only works on physical
 entropy sources, and /dev/urandom has already done that.  Doing it again
 is a complete and total waste of time.  If there are only 64 bits of
 entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
 8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.

 Like openssl, it should read 256 bits from /dev/urandom and stop.
 There is zero benefit to asking for more.


 I'm concerned that the approach could be weak -- the quality of data
 from /dev/urandom can be low if your system was just rebooted, and no
 entropy has been gathered yet.  This is especially true for embedded
 systems.  As it happens, GnuTLS could be involved in sending email early
 in the boot process, so this is a practical scenario.

 Again, we appear to be talking past each other.  What part of this is
 weak?  libgcrypt already seeds itself from /dev/urandom.

Actually, Libgcrypt reads data from both /dev/random and /dev/urandom.
The former is used when GCRY_VERY_STRONG_RANDOM is requested.

 At any given
 time (such as at boot time), one of two things is true:
 1) The kernel contains sufficient entropy (again, I'm not talking about
its fallible estimates, but the unknowable truth) to satisfy the
desired K-bit security level, or
 2) It does not. 

 As long as you are not willing to wait, and thus are using /dev/urandom,
 reading more than K bits is pointless.  In case 1, you will get the K bits
 you want.  In case 2, you will get as much entropy as there is to be had.
 Reading more bytes won't get you the tiniest shred of additional entropy.

 If you're going to open and read from /dev/urandom, you should stop after
 reading 32 bytes.  There is NEVER a good reason to read more when seeding
 a cryptographic PRNG.

 Reading more bytes from /dev/urandom is just loudly advertising one's
 cluelessness; it is exactly as stupid as attaching a huge spoiler and
 racing stripes to a Honda Civic.

I think you should discuss this with the libgcrypt maintainer; I can't
change the libgcrypt code even if you convince me.

If you read the libgcrypt code (cipher/rnd*.c, cipher/random.c), you
will probably understand the motivation for reading more data (even if
you may not agree with it).

 A seeds file 

Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-11 Thread Matt Mackall

On Fri, 2008-04-11 at 16:03 +0200, Simon Josefsson wrote:
 [EMAIL PROTECTED] writes:
 
  Simon Josefsson [EMAIL PROTECTED] wrote:
  [EMAIL PROTECTED] writes:
  That paper deserves a longer reply, but even granting every claim it
  makes, the only things it complains about are forward secrecy (is it
  feasible to reproduce earlier /dev/*random outputs after capturing the
  internal state of the pool from kernel memory?) and entropy estimation
  (is there really as much seed entropy as /dev/random estimates?).
 
  The latter is only relevant to /dev/random.
 
  Why's that?  If the entropy estimation is wrong, you may have too
  little or no entropy.  /dev/urandom can't give you more entropy than
  /dev/random in that case.
 
  The quality or lack thereof of the kernel's entropy estimation is relevant
  only to /dev/random because /dev/urandom's operation doesn't depend on
  the entropy estimates.  If you're using /dev/urandom, it doesn't matter
  if the kernel's entropy estimation is right, wrong, or commented out of
  the source code.
 
 My point is that I believe that the quality of /dev/urandom depends on
 the quality of /dev/random.  If you have found a problem in /dev/random,
 I believe it would affect /dev/urandom as well.

Again, the /dev/random entropy estimate is irrelevant to /dev/urandom
because, when the pool runs low, /dev/urandom simply degrades to a
conventional PRNG.

  Well, he calls /dev/random "blindingly fast" in that thread, which appears
  to differ from your opinion. :-)
 
 It was /dev/urandom, but well, you are right.  On my machine, I get
 about 3.9MB/s from /dev/urandom sustained.  That is slow.  /dev/zero
 yields around 1.4GB/s.  I'm not sure David understood this.  The
 real problem is that reading a lot of data from /dev/urandom makes
 /dev/random unusable, so any process that reads a lot of data from
 /dev/urandom will draw complaints from applications that read data
 from /dev/random.  I would instead consider this a design problem in the
 kernel.

...one that's long since been fixed. Reading from /dev/urandom always
leaves enough entropy for /dev/random to reseed. If you have a steady
input of environmental entropy, /dev/random will not be starved.

-- 
Mathematics is the supreme nostalgia of our time.







Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-10 Thread Daniel Kahn Gillmor
I've packaged gnutls 2.3.4 (upstream's current development version)
for my own testing, and i see the same behavior described in this
ticket using 2.3.4 on a lenny/sid i386 system (see strace and package
versions below).  So the problem isn't unique to the version in lenny.

I'm afraid I don't know enough about crypto to know why reading from
/dev/urandom (a PRNG itself, aiui) would be cryptographically worse
than implementing your own internal PRNG and seeding it from
/dev/urandom, which seems to be what this bug is suggesting would be
better.  I'd be happy to learn, though.

By comparison, openssl dhparam only reads 32 bytes from /dev/urandom
for the same task (and uses its own PRNG according to dhparam(1ssl)).

Regards,

--dkg

Here's the openssl run:

[0 [EMAIL PROTECTED] ~]$ strace -eread,open openssl dhparam 384
open("/etc/ld.so.cache", O_RDONLY)  = 3
open("/usr/lib/i686/cmov/libssl.so.0.9.8", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\220\330"..., 512) = 512
open("/usr/lib/i686/cmov/libcrypto.so.0.9.8", O_RDONLY) = 3
read(3, [EMAIL PROTECTED]..., 512) = 512
open("/lib/i686/cmov/libdl.so.2", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0p\n\0\000"..., 512) = 512
open("/usr/lib/libz.so.1", O_RDONLY)= 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\30"..., 512) = 512
open("/lib/i686/cmov/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\260e\1"..., 512) = 512
open("/usr/lib/ssl/openssl.cnf", O_RDONLY|O_LARGEFILE) = 3
read(3, "#\n# OpenSSL example configuratio"..., 4096) = 4096
read(3, "_name ]\ncountryName\t\t\t= Country "..., 4096) = 4096
read(3, " an SSL server.\n# nsCertType\t\t\t="..., 4096) = 1182
read(3, "", 4096)   = 0
open("/proc/meminfo", O_RDONLY) = 3
read(3, "MemTotal:   507980 kB\nMemFre"..., 1024) = 728
open("/home/dkg/.rnd", O_RDONLY)= 3
read(3, "\211\223\35+\244_\343\335v\225\365\340\377=\236\t\\21"..., 4096) = 1024
read(3, "", 4096)   = 0
Generating DH parameters, 384 bit long safe prime, generator 2
This is going to take a long time
open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
read(3, "\251\240*\3307\270\212\255\240\305Z\257D_\326go\24\275"..., 32) = 32
++...+...++..++.++..+++..+..+.+.+.+..++*++*++*++*++*++*++*++*
open("/home/dkg/.rnd", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
-----BEGIN DH PARAMETERS-----
MDYCMQClD/cztoER1Yur0rvM0VwnWH1LNjndViK73lB15gZ0JPUqUIEzYqIxwfPx
0fAs+GMCAQI=
-----END DH PARAMETERS-----
Process 12428 detached
[0 [EMAIL PROTECTED] ~]$ dpkg -l $(dlocate $(ldd $(which openssl) | awk '{ 
print $3 }' | grep ^/) | cut -f1 -d: | sort -u)
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name   VersionDescription
+++-==-==-
ii  libc6-i686 2.7-10 GNU C Library: Shared libraries [i686 optimi
ii  libssl0.9.80.9.8g-8   SSL shared libraries
ii  zlib1g 1:1.2.3.3.dfsg compression library - runtime
[0 [EMAIL PROTECTED] ~]$ 


And here's the certtool run:

[0 [EMAIL PROTECTED] ~]$ strace -eread,open -s12 certtool --generate-dh-params 
--bits 384
open("/etc/ld.so.cache", O_RDONLY)  = 3
open("/usr/lib/libgnutls.so.26", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/usr/lib/libz.so.1", O_RDONLY)= 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/usr/lib/libtasn1.so.3", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/usr/lib/libgcrypt.so.11", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/usr/lib/libgpg-error.so.0", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/lib/libreadline.so.5", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/lib/i686/cmov/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/lib/libncurses.so.5", O_RDONLY)  = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/lib/i686/cmov/libdl.so.2", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0"..., 512)  = 512
open("/dev/urandom", O_RDONLY)  = 3
read(3, "y\251Az\t*\254"..., 120)   = 120
read(3, "}\265=\345\363"..., 120)  = 120
read(3, "\223\325\21(\334"..., 120) = 120
read(3, "\250\316\350V\305"..., 120)= 120
read(3, "#y4\377\306\247"..., 120)  = 120
read(3, "\313\337\363C\213"..., 120)= 120
read(3, "\17\324\25\35\344"..., 120)= 120
read(3, "\264N\177f\263"..., 120)   = 120
read(3, "WV-\206\241%\246"..., 120) = 120
read(3, 

Bug#475168: [Pkg-gnutls-maint] Bug#475168: certtool --generate-dh-params is ridiculously wasteful of entropy

2008-04-10 Thread sacrificial-spam-address
Daniel Kahn Gillmor [EMAIL PROTECTED] wrote:
 I've packaged gnutls 2.3.4 (upstream's current development version)
 for my own testing, and i see the same behavior described in this
 ticket using 2.3.4 on a lenny/sid i386 system (see strace and package
 versions below).  So the problem isn't unique to the version in lenny.

 I'm afraid I don't know enough about crypto to know why reading from
 /dev/urandom (a PRNG itself, aiui) would be cryptographically worse
 than implementing your own internal PRNG and seeding it from
 /dev/urandom, which seems to be what this bug is suggesting would be
 better.  I'd be happy to learn, though.

 By comparison, openssl dhparam only reads 32 bytes from /dev/urandom
 for the same task (and uses its own PRNG according to dhparam(1ssl)).

 Regards,

 --dkg

Then let me try to enlighten you.  Yes, /dev/urandom is itself a PRNG,
so if your code doesn't include a PRNG, then you can just use its output
directly.

It is a bit wasteful of the entropy that the kernel driver keeps track
of, and the kernel PRNG is particularly slow, but it's not too horrible.

But if you have implemented a cryptographic PRNG for whatever reason,
then there is no reason at all to use more seed material than reasonably
necessary.

Using stupid amounts of seed material is like putting huge spoilers on
a rice-car: pointless and makes you look stupid.  And annoys everyone
who has to share the road (entropy pool) with you.

In truth, even using 32 bytes is unnecessary in the case of generating
a D-H prime, but that's a reasonable default for all cryptographic
operations, and openssl didn't bother implementing a special case.


