Re: The perils of security tools

2008-06-03 Thread Philipp Gühring
Hi,

 It is not an implementation issue but a requirement of the C standard.
 To avoid buffering use

setvbuf (fp, NULL, _IONBF, 0);

 right after the fopen.

Ah! Thanks a lot!

Ok, I think that should be written into the man-pages of /dev/random and 
fgetc/fread and other related howtos.

Best regards,
Philipp Gühring

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: The perils of security tools

2008-05-31 Thread Nate Lawson

Bodo Moeller wrote:
> On Sun, May 18, 2008 at 4:55 PM, Hal Finney [EMAIL PROTECTED] wrote:
>
>> A simple trick can be used to help immunize DSA signatures against
>> these kinds of failures. I first learned of this idea many years ago
>> from Phil Zimmermann, and a variant has been used for a long time in
>> PGP and probably other code, but apparently not OpenSSL. The idea is
>> to base the random k not just on the output of your RNG, but also on
>> the private key x. Something like:
>>
>> k = hash (x, rng()).
>>
>> Of course it is still necessary that k be uniformly distributed mod q
>> (the DSA subgroup prime order), so this can't be just a straight hash.
>> It might be a separate PRNG instance which gets seeded with the data
>> values shown.  But the idea is to mix in the secret key value, x, in
>> addition to data from the RNG.
>
> I've used this idea before, although in the form of using the private
> key as part of the PRNG seed -- which isn't of much use if the PRNG
> ignores its seeding as in this case.  However, even the form
>
> k = hash (x, rng())
>
> isn't good enough if the PRNG is sufficiently broken.  The Debian code
> generated an output that was not merely predictable, but also prone to
> repetition if you run a binary multiple times.  With typically just
> 2^15 different byte streams from the PRNG, by the birthday paradox
> you'd have to expect to have been reusing some k after around 2^8
> iterations or so.  So your DSA key would still be at risk!


While mixing in more entropy is a good idea in general, I'd like to 
caution against just throwing things in without knowing the full design 
end-to-end.  For example, if the environment is an embedded device and 
hash() introduces visible power or timing side channels, you may not 
want to do this exact construction.  Most of the time it is fine, though.


DSA is especially vulnerable to all kinds of subtleties with k.  As you 
point out, it is fatal to replay k for a given private key x.  But even 
worse, it is fatal if some small number of bits of k are *predictable*.
This means that even if the output wasn't completely predictable, but had
merely become somewhat predictable, it would still be exploitable.


http://crypto.stanford.edu/~dabo/abstracts/dhmsb.html
http://cat.inist.fr/?aModele=afficheN&cpsidt=13872268

Mark Marson at Cryptography Research has done some great work 
implementing these attacks.  They're quite practical.  I hope he'll give 
a public talk about it some day.



> You could also make k message-dependent -- i.e., feed the message m
> into the hash function as well:
>
> k = hash (x, rng(), m)
>
> This avoids that problem, and is likely to remain unbreakable even if
> rng() returns just some constant.  However, then you lose one
> advantage of DSA, namely being able to do most of the computation in
> advance, before you've even seen the message to be signed: If you've
> obtained k and done the DSA exponentiation beforehand, you can create
> signatures almost instantaneously; but this won't work if k depends on
> the message.


This assumes the message always changes.  Isn't this just getting back 
to padding schemes, where you build something like PSS under your DSA to 
protect against signing identical messages?


Since it appears some OpenSSL people are on this list, I'd like to ask 
for more openness in the PRNG design and seeding.  The current code is 
crufty and arbitrary.  Some minor but careful additions could have 
helped reveal this bug earlier.


The code should generate warnings in the case of PURIFY being defined. 
A comment should explain the security relevance of the seeding.  For 
example:


#ifndef PURIFY
/* SECURITY: add entropy to our pool.  This is essential. (more) */
seed_PRNG(buf);
#else
#warning PRNG seeding disabled for Purify, do NOT use PRNG output!
printf("WARNING: PRNG seeding disabled for Purify, do NOT use PRNG output!\n");
#endif

Also, there should be a TEST_MODE_INSECURE flag that outputs a debug
print each time the PRNG is seeded, including the seed data itself.  This should
be run on a regular basis as part of automated tests.  For example:


init()
{
#ifdef TEST_MODE_INSECURE
#warning PRNG seeding debug prints enabled, do NOT use PRNG output!
printf("WARNING: PRNG seeding debug prints enabled, do NOT use PRNG output!\n");
#endif
}

seed_PRNG(src_name, buf)
{
#ifdef TEST_MODE_INSECURE
printf("PRNG seeding from %s: %s\n", src_name, hex_dump(buf));
#endif

... do seeding ...
}

Anyway, I hope this incident helps us all add more openness and paranoia 
to our designs.


--
Nate



Re: The perils of security tools

2008-05-30 Thread Werner Koch
On Wed, 28 May 2008 10:34, [EMAIL PROTECTED] said:

 Yes. Still, some people are using fopen/fread to access /dev/random, which 
 does pre-fetching on most implementations I saw, so using open/read is 
 preferred for using /dev/random.

It is not an implementation issue but a requirement of the C standard.
To avoid buffering use

   setvbuf (fp, NULL, _IONBF, 0);

right after the fopen.


Shalom-Salam,

   Werner

-- 
Die Gedanken sind frei.  Ausnahme regelt ein Bundesgesetz.  [Thoughts are free; exceptions are regulated by a federal law.]



Re: The perils of security tools

2008-05-28 Thread Philipp Gühring
Hi,

 (it doesn't just slow down a lot). Since /dev/random use depletes
 the pool directly, it is imperative that wasteful reads of this
 pseudo-device be avoided at all costs. 

Yes. Still, some people are using fopen/fread to access /dev/random, which 
does pre-fetching on most implementations I saw, so using open/read is 
preferred for using /dev/random.

Implementations can be rather easily checked with strace.

Best regards,
Philipp Gühring



Re: The perils of security tools

2008-05-28 Thread The Fungi
On Wed, May 28, 2008 at 10:34:53AM +0200, Philipp Gühring wrote:
  it is imperative that wasteful reads of this pseudo-device be
  avoided at all costs. 
 
 Yes. Still, some people are using fopen/fread to access
 /dev/random, which does pre-fetching on most implementations I
 saw, so using open/read is preferred for using /dev/random.
 
 Implementations can be rather easily checked with strace.

Oh, agreed wholeheartedly. I simply meant that *wasteful*
(gratuitous) reads of /dev/random should be avoided. Justifiable,
conservative reads of /dev/random are, of course, why it exists in
the first place!

And fopen/fread is definitely a bad idea in this case for the
reasons you point out. In general, anything which prefetches
potentially excess data in a read from /dev/random is destructive to
the entropy pool.
-- 
{ IRL(Jeremy_Stanley); PGP(9E8DFF2E4F5995F8FEADDC5829ABF7441FB84657);
SMTP([EMAIL PROTECTED]); IRC([EMAIL PROTECTED]); ICQ(114362511);
AIM(dreadazathoth); YAHOO(crawlingchaoslabs); FINGER([EMAIL PROTECTED]);
MUD([EMAIL PROTECTED]:6669); WWW(http://fungi.yuggoth.org/); }



Re: The perils of security tools

2008-05-27 Thread Taral
On 5/26/08, Simon Josefsson [EMAIL PROTECTED] wrote:
  For example, reading a lot of data from Linux's /dev/urandom will
  deplete the entropy pool in the kernel, which effectively makes reads
  from /dev/random stall.  The two devices use the same entropy pool.

That's a bug in the way the kernel hands out entropy to multiple
concurrent consumers. I don't think it's a semantic issue.

-- 
Taral [EMAIL PROTECTED]
Please let me know if there's any further trouble I can give you.
-- Unknown



Re: The perils of security tools

2008-05-27 Thread Bodo Moeller
On Sun, May 18, 2008 at 4:55 PM, Hal Finney [EMAIL PROTECTED] wrote:

 A simple trick can be used to help immunize DSA signatures against
 these kinds of failures. I first learned of this idea many years ago
 from Phil Zimmermann, and a variant has been used for a long time in
 PGP and probably other code, but apparently not OpenSSL. The idea is
 to base the random k not just on the output of your RNG, but also on
 the private key x. Something like:

 k = hash (x, rng()).

 Of course it is still necessary that k be uniformly distributed mod q
 (the DSA subgroup prime order), so this can't be just a straight hash.
 It might be a separate PRNG instance which gets seeded with the data
 values shown.  But the idea is to mix in the secret key value, x, in
 addition to data from the RNG.


I've used this idea before, although in the form of using the private
key as part of the PRNG seed -- which isn't of much use if the PRNG
ignores its seeding as in this case.  However, even the form

k = hash (x, rng())

isn't good enough if the PRNG is sufficiently broken.  The Debian code
generated an output that was not merely predictable, but also prone to
repetition if you run a binary multiple times.  With typically just
2^15 different byte streams from the PRNG, by the birthday paradox
you'd have to expect to have been reusing some k after around 2^8
iterations or so.  So your DSA key would still be at risk!

You could also make k message-dependent -- i.e., feed the message m
into the hash function as well:

k = hash (x, rng(), m)

This avoids that problem, and is likely to remain unbreakable even if
rng() returns just some constant.  However, then you lose one
advantage of DSA, namely being able to do most of the computation in
advance, before you've even seen the message to be signed: If you've
obtained k and done the DSA exponentiation beforehand, you can create
signatures almost instantaneously; but this won't work if k depends on
the message.

Bodo



Re: The perils of security tools

2008-05-27 Thread Simon Josefsson
Taral [EMAIL PROTECTED] writes:

 On 5/26/08, Simon Josefsson [EMAIL PROTECTED] wrote:
  For example, reading a lot of data from Linux's /dev/urandom will
  deplete the entropy pool in the kernel, which effectively makes reads
  from /dev/random stall.  The two devices use the same entropy pool.

 That's a bug in the way the kernel hands out entropy to multiple
 concurrent consumers. I don't think it's a semantic issue.

Do you have any references?  Several people have brought this up before
and have been told that the design with depleting the entropy pool is
intentional.

Still, the semantics of /dev/*random are not standardized anywhere, and
the current implementation is sub-optimal from a practical point of
view, so I think we are far from even an OK situation.

/Simon



Re: The perils of security tools

2008-05-27 Thread Chad Perrin
On Mon, May 26, 2008 at 11:22:18AM +0200, Simon Josefsson wrote:
 
 For example, reading a lot of data from Linux's /dev/urandom will
 deplete the entropy pool in the kernel, which effectively makes reads
 from /dev/random stall.  The two devices use the same entropy pool.
 
 I believe a much better approach would be if /dev/urandom was a fast and
 secure PRNG, with perfect-forward-secrecy properties, and /dev/random
 was a slow device with real entropy (whatever that means..) gathered
 from the hardware.  The two devices would share little or no code.  The
 /dev/urandom PRNG seed could be fed data from /dev/random from time to
 time, or from other sources (like kernel task switching timings).  I
 believe designs like this have been proposed from time to time, but
 there hasn't been any uptake.

My understanding of the situation is that the way you get secure use of a
PRNG is by feeding it real entropy, and the way you get fast use of a
PRNG is by feeding it whatever seeds you have on-hand, regardless of
real randomness -- or just don't feed it any seeds at all, if you don't
have any on-hand.  Thus, the reason /dev/urandom is fast is that it
doesn't actually *require* real entropy, and the reason /dev/random is
cryptographically secure is that it *does* require real entropy, which
of course means that it slows down a lot when you run out of real
entropy in the pool.

Assuming I am not mistaken in my understanding of the operation of the
two randomness devices, you could probably get reasonable security and
speed overall for /dev/urandom by limiting how quickly and often it
accesses the entropy pool, hitting it once in a while at (pseudo)random
intervals within a reasonable range to seed the PRNG.  This would make it
fast unless you're taxing the entropy pool so badly with multiple
processes using /dev/urandom or some /dev/random use that there literally
is no entropy left in the pool for /dev/urandom to use at all when it
tries to hit the pool.  It would not provide perfect forward secrecy,
however, because there would be brief intervals (between hits to the
entropy pool) during which knowing the PRNG algorithm and its current
state would allow someone to predict further PRNG output until the end of
the current entropy interval.  The length of the interval, however, could
conceivably be (effectively) unknowable.

Ultimately, I think the reason nobody has implemented a /dev/urandom that
allows for fast, secure PRNG operation with perfect forward secrecy is
that it's kind of a "pick n-1" situation, as with the old saw, "Fast,
good, cheap; pick two."  To get cryptographically strong randomness, you
need entropy, which taxes the entropy pool.  An additional entropy pool
would need more places to *get* entropy, of course.  Essentially, giving
the characteristics of cryptographically useful randomness and perfect
forward secrecy to /dev/urandom would ultimately mean you turned it into
a duplicate of /dev/random.

It looks like you're suggesting just changing the way /dev/urandom
receives its entropy so that it happens periodically, similarly to how I
described limiting it from exhausting the entropy pool above -- but that
won't solve the problem of giving /dev/urandom strong security and
perfect forward secrecy characteristics.

. . . or is there something I missed?

-- 
Chad Perrin [ content licensed PDL: http://pdl.apotheon.org ]
Baltasar Gracian: "A wise man gets more from his enemies than a fool from
his friends."




Re: The perils of security tools

2008-05-26 Thread Simon Josefsson
Ben Laurie [EMAIL PROTECTED] writes:

 Steven M. Bellovin wrote:
 On Sat, 24 May 2008 20:29:51 +0100
 Ben Laurie [EMAIL PROTECTED] wrote:

 Of course, we have now persuaded even the most stubborn OS that
 randomness matters, and most of them make it available, so perhaps
 this concern is moot.

 Though I would be interested to know how well they do it! I did
 have some input into the design for FreeBSD's, so I know it isn't
 completely awful, but how do other OSes stack up?

 I believe that all open source Unix-like systems have /dev/random
 and /dev/urandom; Solaris does as well.

 I meant: how good are the PRNGs underneath them?

For the linux kernel, there is a paper:

http://eprint.iacr.org/2006/086

Another important aspect is the semantics of the devices: None of the
/dev/*random devices are standardized anywhere (as far as I know).
Their semantics can and do differ.  This is a larger practical problem.

For example, reading a lot of data from Linux's /dev/urandom will
deplete the entropy pool in the kernel, which effectively makes reads
from /dev/random stall.  The two devices use the same entropy pool.

I believe a much better approach would be if /dev/urandom was a fast and
secure PRNG, with perfect-forward-secrecy properties, and /dev/random
was a slow device with real entropy (whatever that means..) gathered
from the hardware.  The two devices would share little or no code.  The
/dev/urandom PRNG seed could be fed data from /dev/random from time to
time, or from other sources (like kernel task switching timings).  I
believe designs like this have been proposed from time to time, but
there hasn't been any uptake.

/Simon



Re: The perils of security tools

2008-05-26 Thread IanG

Steven M. Bellovin wrote:

On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

Of course, we have now persuaded even the most stubborn OS that
randomness matters, and most of them make it available, so perhaps
this concern is moot.

Though I would be interested to know how well they do it! I did have
some input into the design for FreeBSD's, so I know it isn't
completely awful, but how do other OSes stack up?


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.



Yes, but with different semantics:

 /dev/urandom is a compatibility nod
 to Linux. On Linux, /dev/urandom will
 produce lower quality output if the
 entropy pool drains, while
 /dev/random will prefer to block and
 wait for additional entropy to be
 collected.  With Yarrow, this choice
 and distinction is not necessary,
 and the two devices behave
 identically. You may use either.

(random(4) from Mac OSX.)

Depending on where you are in the security paranoia 
equation, the differences matter little or a lot.  If doing 
medium level security, it's fine to outsource the critical 
components to the OS, and accept any failings.  If doing 
paranoid-level stuff, then best to implement one's own mix 
and just stir in the OS level offering.  That way we reduce 
the surface area for lower-layer config attacks like the 
Debian adventure.


iang



Re: The perils of security tools

2008-05-26 Thread zooko

On May 24, 2008, at 9:18 PM, Steven M. Bellovin wrote:


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


By the way, Solaris is an open source Unix-like system nowadays.  ;-)

Regards,

Zooko



Re: The perils of security tools

2008-05-26 Thread Ivan Krstić

On May 25, 2008, at 6:02 AM, Ben Laurie wrote:

I meant: how good are the PRNGs underneath them?



Not a direct answer to your question, but somewhat relevant as context  
is Michal Zalewski's analysis of TCP/IP sequence number predictability  
across operating systems:


http://lcamtuf.coredump.cx/newtcp/

It's several years out of date, however.

--
Ivan Krstić [EMAIL PROTECTED] | http://radian.org



Re: The perils of security tools

2008-05-25 Thread Steven M. Bellovin
On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

 Of course, we have now persuaded even the most stubborn OS that 
 randomness matters, and most of them make it available, so perhaps
 this concern is moot.
 
 Though I would be interested to know how well they do it! I did have 
 some input into the design for FreeBSD's, so I know it isn't
 completely awful, but how do other OSes stack up?
 
I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: The perils of security tools

2008-05-25 Thread Ben Laurie

Steven M. Bellovin wrote:

On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

Of course, we have now persuaded even the most stubborn OS that
randomness matters, and most of them make it available, so perhaps
this concern is moot.

Though I would be interested to know how well they do it! I did have
some input into the design for FreeBSD's, so I know it isn't
completely awful, but how do other OSes stack up?


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


I meant: how good are the PRNGs underneath them?

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: The perils of security tools

2008-05-24 Thread Eric Young

  #ifndef PURIFY
        MD_Update(m,buf,j); /* purify complains */
  #endif

I just re-checked, this code was from SSLeay, so it pre-dates OpenSSL
taking over from me
(about 10 years ago, after I was assimilated by RSA Security).

So in some ways I'm the one at fault for not being clear enough about
why 'purify complains' and why it was not relevant.
Purify also complained about a construct in the digest-gathering code
which functioned correctly; purify was technically right (a byte in a
read word was uninitialised), but that byte was later overwritten by a
shifted byte, so the code behaved as intended.

One of the more insidious things about Purify is that once its
complaints are investigated, and deemed irrelevant (but left in the
library),
anyone who subsequently runs purify on an application linking in the
library will get the same purify warning.
This leads to rather distressed application developers, especially if
their company has a policy of 'no purify warnings'.

One really needs to ship the 'warning ignore' file for purify (does
valgrind have one?).

I personally do wonder why, given that the original author had left
purify-related comments in the code -- which means he was aware of the
issues but had deliberately kept the code in place -- the reviewer did
not consider that the code did something important enough to justify
ignoring purify's complaints.

eric



Re: The perils of security tools

2008-05-24 Thread Ben Laurie

Eric Young wrote:

  #ifndef PURIFY
        MD_Update(m,buf,j); /* purify complains */
  #endif


I just re-checked, this code was from SSLeay, so it pre-dates OpenSSL
taking over from me
(about 10 years ago, after I was assimilated by RSA Security).

So in some ways I'm the one at fault for not being clear enough about
why 'purify complains' and why it was not relevant.
Purify also complained about a construct in the digest-gathering code
which functioned correctly; purify was technically right (a byte in a
read word was uninitialised), but that byte was later overwritten by a
shifted byte, so the code behaved as intended.

One of the more insidious things about Purify is that once its
complaints are investigated, and deemed irrelevant (but left in the
library),
anyone who subsequently runs purify on an application linking in the
library will get the same purify warning.
This leads to rather distressed application developers, especially if
their company has a policy of 'no purify warnings'.

One really needs to ship the 'warning ignore' file for purify (does
valgrind have one?).

I personally do wonder why, given that the original author had left
purify-related comments in the code -- which means he was aware of the
issues but had deliberately kept the code in place -- the reviewer did
not consider that the code did something important enough to justify
ignoring purify's complaints.


I think the core point is that 10+ years ago, when this code was 
written, randomness was actually quite hard to come by. Daemons like EGD 
had to be installed and fed and cared for. So, even a little "entropy" 
from uninitialised memory (I use the quotes because I do appreciate 
that the memory probably has somewhat predictable content) was worth having.


Of course, we have now persuaded even the most stubborn OS that 
randomness matters, and most of them make it available, so perhaps this 
concern is moot.


Though I would be interested to know how well they do it! I did have 
some input into the design for FreeBSD's, so I know it isn't completely 
awful, but how do other OSes stack up?


Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: The perils of security tools

2008-05-22 Thread Victor Duchovni
On Tue, May 13, 2008 at 02:10:45PM +0100, Ben Laurie wrote:

 [Moderator's note: A quick reminder: please use ASCII except if you
 need Unicode to spell your name right. Microsoft's proprietary quote
 marks are not a standard and don't look right on non-Microsoft
 displays. I edited them out of this by hand. --Perry]
 
 Debian have a stunning example of how blindly fixing problems pointed 
 out by security tools can be disastrous.

Upstream authors can take defensive measures against ill-advised
patches of this sort. For a while, distributions were in the habit
of patching the code that Postfix uses to learn its own hostname.
Invariably, they botched it. The code now reads:

  /* get_hostname - look up my host name */

  const char *get_hostname(void)
  {
    char    namebuf[MAXHOSTNAMELEN + 1];

    /*
     * The gethostname() call is not (or not yet) in ANSI or POSIX, but it is
     * part of the socket interface library. We avoid the more politically-
     * correct uname() routine because that has no portable way of dealing
     * with long (FQDN) hostnames.
     *
     * DO NOT CALL GETHOSTBYNAME FROM THIS FUNCTION. IT BREAKS MAILDIR DELIVERY
     * AND OTHER THINGS WHEN THE MACHINE NAME IS NOT FOUND IN /ETC/HOSTS OR
     * CAUSES PROCESSES TO HANG WHEN THE NETWORK IS DISCONNECTED.
     *
     * POSTFIX NO LONGER NEEDS A FULLY QUALIFIED HOSTNAME. INSTEAD POSTFIX WILL
     * USE A DEFAULT DOMAIN NAME "LOCALDOMAIN".
     */
    if (my_host_name == 0) {
      /* DO NOT CALL GETHOSTBYNAME FROM THIS FUNCTION */
      if (gethostname(namebuf, sizeof(namebuf)) < 0)
        msg_fatal("gethostname: %m");
      namebuf[MAXHOSTNAMELEN] = 0;
      /* DO NOT CALL GETHOSTBYNAME FROM THIS FUNCTION */
      if (valid_hostname(namebuf, DO_GRIPE) == 0)
        msg_fatal("unable to use my own hostname");
      /* DO NOT CALL GETHOSTBYNAME FROM THIS FUNCTION */
      my_host_name = mystrdup(namebuf);
    }
    return (my_host_name);
  }

The addition of /* DO NOT CALL GETHOSTBYNAME FROM THIS FUNCTION */
every couple of lines appears to have solved the problem: it deliberately
breaks all prior patches (context diff overlaps), and strongly signals
that the code must not be messed with.

-- 
Viktor.



Re: The perils of security tools

2008-05-22 Thread Alexander Klimov
On Tue, 13 May 2008, Ben Laurie wrote:
 Had Debian done this in this case, we (the OpenSSL Team) would have
 fallen about laughing

I think we all should not miss this ROTFL experience:

Original code (see ssleay_rand_add)

 
http://svn.debian.org/viewsvn/pkg-openssl/openssl/trunk/rand/md_rand.c?rev=140&view=markup,

the patch

 
http://svn.debian.org/viewsvn/pkg-openssl/openssl/trunk/rand/md_rand.c?rev=141&view=diff&r1=141&r2=140&p1=openssl/trunk/rand/md_rand.c&p2=/openssl/trunk/rand/md_rand.c

and the end result

 
http://svn.debian.org/viewsvn/pkg-openssl/openssl/trunk/rand/md_rand.c?rev=141&view=markup

-- 
Regards,
ASK



Re: The perils of security tools

2008-05-22 Thread Ben Laurie

Paul Hoffman wrote:

I'm confused about two statements here:

At 2:10 PM +0100 5/13/08, Ben Laurie wrote:
The result of this is that for the last two years (from Debian's 
Edgy release until now), anyone doing pretty much any crypto on 
Debian (and hence Ubuntu) has been using easily guessable keys. This 
includes SSH keys, SSL keys and OpenVPN keys.


. . .

[2] Valgrind tracks the use of uninitialised memory. Usually it is bad 
to have any kind of dependency on uninitialised memory, but OpenSSL 
happens to include a rare case when its OK, or even a good idea: its 
randomness pool. Adding uninitialised memory to it can do no harm and 
might do some good, which is why we do it. It does cause irritating 
errors from some kinds of debugging tools, though, including valgrind 
and Purify. For that reason, we do have a flag (PURIFY) that removes 
the offending code. However, the Debian maintainers, instead of 
tracking down the source of the uninitialised memory, chose to 
remove any possibility of adding memory to the pool at all. Clearly 
they had not understood the bug before fixing it.


The second bit makes it sound like the stuff that the Debian folks 
blindly removed was one, possibly-useful addition to the entropy pool. 
The first bit makes it sound like the stuff was absolutely critical to 
the entropy of produced keys. Which one is correct?


They removed _all_ entropy addition to the pool, with the exception of 
the PID, which is mixed in at a lower level.


--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: The perils of security tools

2008-05-22 Thread Paul Hoffman

At 10:25 AM +0100 5/15/08, Ben Laurie wrote:

Paul Hoffman wrote:

I'm confused about two statements here:

At 2:10 PM +0100 5/13/08, Ben Laurie wrote:
The result of this is that for the last two years (from Debian's 
Edgy release until now), anyone doing pretty much any crypto on 
Debian (and hence Ubuntu) has been using easily guessable keys. 
This includes SSH keys, SSL keys and OpenVPN keys.


. . .

[2] Valgrind tracks the use of uninitialised memory. Usually it is 
bad to have any kind of dependency on uninitialised memory, but 
OpenSSL happens to include a rare case when it's OK, or even a good 
idea: its randomness pool. Adding uninitialised memory to it can 
do no harm and might do some good, which is why we do it. It does 
cause irritating errors from some kinds of debugging tools, 
though, including valgrind and Purify. For that reason, we do have 
a flag (PURIFY) that removes the offending code. However, the 
Debian maintainers, instead of tracking down the source of the 
uninitialised memory, chose to remove any possibility of 
adding memory to the pool at all. Clearly they had not understood 
the bug before fixing it.


The second bit makes it sound like the stuff that the Debian folks 
blindly removed was one, possibly-useful addition to the entropy 
pool. The first bit makes it sound like the stuff was absolutely 
critical to the entropy of produced keys. Which one is correct?


They removed _all_ entropy addition to the pool, with the exception 
of the PID, which is mixed in at a lower level.


I take it that these are not 128-bit, non-monotonic PIDs. :-)

The bigger picture is that distributions who are doing local mods 
should really have an ongoing conversation with the software's 
developers. Even if the developers don't want to talk to you, a 
one-way conversation of "we're doing this, we're doing that" could be 
useful.


--Paul Hoffman, Director
--VPN Consortium



Re: The perils of security tools

2008-05-22 Thread Ben Laurie

Paul Hoffman wrote:

At 10:25 AM +0100 5/15/08, Ben Laurie wrote:

Paul Hoffman wrote:

I'm confused about two statements here:

At 2:10 PM +0100 5/13/08, Ben Laurie wrote:
The result of this is that for the last two years (from Debian's 
Edgy release until now), anyone doing pretty much any crypto on 
Debian (and hence Ubuntu) has been using easily guessable keys. This 
includes SSH keys, SSL keys and OpenVPN keys.


. . .

[2] Valgrind tracks the use of uninitialised memory. Usually it is 
bad to have any kind of dependency on uninitialised memory, but 
OpenSSL happens to include a rare case when it's OK, or even a good 
idea: its randomness pool. Adding uninitialised memory to it can do 
no harm and might do some good, which is why we do it. It does cause 
irritating errors from some kinds of debugging tools, though, 
including valgrind and Purify. For that reason, we do have a flag 
(PURIFY) that removes the offending code. However, the Debian 
maintainers, instead of tracking down the source of the 
uninitialised memory, chose to remove any possibility of adding 
memory to the pool at all. Clearly they had not understood the bug 
before fixing it.


The second bit makes it sound like the stuff that the Debian folks 
blindly removed was one, possibly-useful addition to the entropy 
pool. The first bit makes it sound like the stuff was absolutely 
critical to the entropy of produced keys. Which one is correct?


They removed _all_ entropy addition to the pool, with the exception of 
the PID, which is mixed in at a lower level.


I take it that these are not 128-bit, non-monotonic PIDs. :-)

The bigger picture is that distributions that are doing local mods should 
really have an ongoing conversation with the software's developers. Even 
if the developers don't want to talk to you, a one-way conversation of 
"we're doing this, we're doing that" could be useful.


That doesn't scale very well, though - which is why my position is that 
they should avoid local mods.


--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The perils of security tools

2008-05-22 Thread Hal Finney
Ben Laurie alerts us to the recent bug in Debian distributions of
OpenSSL which caused the RNG to have almost no entropy. The distribution
mistakenly commented out the call that added seeding and most other
sources of entropy to the RNG state. This is requiring many keys to
be re-issued.

One of the more unfortunate aspects of this bug is that it affects not
only keys generated on systems with the weak RNG, but also even securely
generated DSA keys, if the keys were used for signing on systems with
the bug. DSA keys are vulnerable to weak RNGs not only at keygen time
but at any later time that signatures are created. This causes those
keys to be far more vulnerable to problems in RNGs.

The reason is the DSA signature equation sk - xr = h, where h is the
message hash, r and s are signature components, x is the private key,
and k is a random value chosen uniquely per message. If k is guessable,
as potentially was the case with this recent bug, then x can be deduced
since the other values are typically sent in the clear.
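The key-recovery step can be made concrete with deliberately tiny, hypothetical parameters (p = 607, q = 101, g = 64 are toy values chosen only so the arithmetic is visible; real DSA uses a 2048-bit p and a 224- or 256-bit q):

```python
# Toy DSA group for illustration only: q divides p - 1 and g has
# order q mod p.  These are hypothetical values, not real DSA sizes.
p, q, g = 607, 101, 64

x = 57                    # private key
y = pow(g, x, p)          # public key (published)

def sign(h, k):
    """Textbook DSA: r = (g^k mod p) mod q, s = k^-1 * (h + x*r) mod q."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

h = 10                    # message hash, already reduced mod q
k = 33                    # suppose the broken RNG makes k guessable
r, s = sign(h, k)

# From s*k - x*r = h, anyone who guesses k solves for the private key:
x_recovered = (s * k - h) * pow(r, -1, q) % q
assert x_recovered == x
```

One signature plus one correct guess of k is all it takes; r, s, and h are public, so x falls out of a single modular equation.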

A simple trick can be used to help immunize DSA signatures against
these kinds of failures. I first learned of this idea many years ago
from Phil Zimmermann, and a variant has been used for a long time in
PGP and probably other code, but apparently not OpenSSL. The idea is
to base the random k not just on the output of your RNG, but also on
to base the random k not just on the output of your RNG, but also on
the private key x. Something like:

k = hash (x, rng()).

Of course it is still necessary that k be uniformly distributed mod q
(the DSA subgroup prime order), so this can't be just a straight hash.
It might be a separate PRNG instance which gets seeded with the data
values shown.  But the idea is to mix in the secret key value, x, in
addition to data from the RNG.

In this way, if the RNG data is predictable but the secret key is unknown,
k should still be unguessable. And if your mixing function is good then
this should not leak any information about x, especially in the usual
case where the RNG is of good quality.

A variant on this idea protects against a separate problem, where k
is unguessable but somehow the same k value is used for two separate
signatures. This again lets the attacker deduce x because he will observe
two instances of the DSA signature equation above, with all values known
except k and x, and since k is the same, this is two equations with two
unknowns and allows recovering both values.
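The two-equation attack can be demonstrated with the same kind of toy group (again, hypothetical tiny parameters, nothing like real DSA sizes): subtracting the two signature equations eliminates x and exposes k, after which x follows as before.

```python
# Same toy DSA group as above, for illustration only: q | p - 1,
# g of order q mod p.  Hypothetical tiny values, not real DSA sizes.
p, q, g = 607, 101, 64
x = 57                    # victim's private key

def sign(h, k):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

k = 33                    # the SAME k mistakenly used twice
h1, h2 = 10, 25           # two different message hashes
r1, s1 = sign(h1, k)
r2, s2 = sign(h2, k)
assert r1 == r2           # identical k gives identical r: a visible tell

# s1*k - x*r = h1 and s2*k - x*r = h2; subtracting eliminates x:
#   k = (h1 - h2) / (s1 - s2)  (mod q)
k_recovered = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_recovered = (s1 * k_recovered - h1) * pow(r1, -1, q) % q
assert (k_recovered, x_recovered) == (k, x)
```

Note that the repeated r value is itself a detectable symptom, which is how reused nonces have been spotted in the wild.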

To immunize against this failure, include the message hash h in the mixing
function that generates k:

k = hash (x, h, rng()).

Now, if the RNG does produce identical output, h will typically differ
among signatures, again producing unique and unguessable k values. And
if h is the same for two messages in this form, k will be the same, but
then r and s will be the same as well, and the second signature will be
an exact match of the first and not leak new information.
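A minimal sketch of this derivation, assuming SHA-256 as the mixing function and rejection sampling to keep k uniform in [1, q-1] (the function name, counter construction, and parameters here are illustrative, not any library's actual API):

```python
import hashlib

def derive_k(x: int, h: int, q: int, rng_bytes: bytes) -> int:
    """k = hash(x, h, rng()): mix the private key, the message hash,
    and the RNG output, then rejection-sample so k is uniform mod q."""
    seed = b"|".join([str(x).encode(), str(h).encode(), rng_bytes])
    counter = 0
    while True:
        k = int.from_bytes(
            hashlib.sha256(seed + counter.to_bytes(4, "big")).digest(),
            "big",
        ) >> (256 - q.bit_length())   # keep only q.bit_length() bits
        if 1 <= k < q:
            return k
        counter += 1

q = (1 << 160) - 47          # example 160-bit modulus, not a real DSA q
stuck_rng = b"\x00" * 20     # a broken RNG that always returns zeros

k1 = derive_k(57, 1111, q, stuck_rng)
k2 = derive_k(57, 2222, q, stuck_rng)
assert k1 != k2                                # distinct h => distinct k
assert k1 == derive_k(57, 1111, q, stuck_rng)  # same h => same k, same sig
```

Even with the RNG stuck at all zeros, k stays unpredictable to anyone without x, and a repeated message merely reproduces the identical signature rather than leaking anything new.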

I think these techniques are widely known among implementors but I did
not see them in HAC so I thought it was worth reminding the community
here. OpenSSL is such a widely used crypto library that it would be
especially valuable for it to consider incorporating these mechanisms. It
would have saved some considerable pain as administrators who use OpenSSH
(which depends on OpenSSL) DSA keys now are forced to consider whether
they may be vulnerable to the bug even if their primary servers were not
exposed to it, since any client out there may have generated insecure
signatures and inadvertently revealed secret keys.

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The perils of security tools

2008-05-22 Thread Paul Hoffman
More interesting threadage about the issue here: 
http://taint.org/2008/05/13/153959a.html, particularly in the 
comments.


--Paul Hoffman, Director
--VPN Consortium

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]