Re: valgrind and openssl
It doesn't matter. If you only have one bit of real entropy you are screwed, no matter whether 0 or 10^15 bits of known data are introduced; and if it's 10^15 bits of data the attacker can't reliably guess, you are definitely better off. And, to put this in perspective, given that the uninitialized memory contents are likely unknowable off the machine: if Debian had screwed up just a little more, and left the uninitialized memory as a source but taken out the real entropy source (instead of taking out both), would Debian users be in better or worse shape now?

Peter

From: Thor Lancelot Simon [EMAIL PROTECTED]
To: openssl-dev@openssl.org
Date: 05/19/2008 05:24 PM
Subject: Re: valgrind and openssl

On Fri, May 16, 2008 at 11:24:45AM -0400, Geoff Thorpe wrote:
> On Friday 16 May 2008 00:47:52 Thor Lancelot Simon wrote:
> > On Thu, May 15, 2008 at 11:45:14PM +0200, Bodo Moeller wrote:
> > > It may be zero, but it may be more, depending on what happened earlier
> > > in the program if the same memory locations have been in use before.
> > > This may very well include data that would be unpredictable to
> > > adversaries -- i.e., entropy; that's the point here.
> >
> > Unfortunately, it may also very well include data that would be highly
> > predictable to adversaries.
>
> If feeding predictable data into a PRNG that was already well seeded with
> unpredictable data produced a weaker PRNG, then you have found a security
> bug in the PRNG and I suggest you publish.

Yeah, I've heard that a few times. However, consider the pathological case, in which an adversary manages to introduce N-1 bits of known state into your PRNG, which has N bits of internal state. Are you comfortable with that? For what value M are you comfortable with N - M bits of the state having been introduced by the adversary? Why?

It seems to me that best practice is to not introduce such state if one can avoid it, whether one counts it into an entropy estimate or not.
Thor
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
miss -hmac option in the documents about dgst
Hi experts,

According to the code in apps/dgst.c, the dgst command should have an option -hmac, which means "use the HMAC algorithm". For openssl-0.9.8g, the code is at lines 193 to 198. However, I cannot find the usage of the -hmac option either in the man page or in the usage prompt printed when the user types an unknown option. I tested the -hmac option and found that it really works. The following is the log:

# cat testabc
abc
abc
# openssl sha1 -hmac 123 testabc
HMAC-SHA1(testabc)= 3af0523a2c75b5d26cc2edbf850672e9160c15b0
#

I think this option is missing from the man page and the usage prompt. Would you please check it?

Best Regards
Yiqun Ren (Luke)
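For reference, the -hmac option computes a standard RFC 2104 HMAC, so its output can be cross-checked against any other implementation. A quick sketch in Python (using a widely published test vector rather than the key and file from the log above):

```python
import hashlib
import hmac

def hmac_sha1_hex(key: bytes, message: bytes) -> str:
    """Compute HMAC-SHA1 (RFC 2104), matching the digest printed by
    `openssl dgst -sha1 -hmac <key> <file>`."""
    return hmac.new(key, message, hashlib.sha1).hexdigest()

# Widely published HMAC-SHA1 test vector:
digest = hmac_sha1_hex(b"key", b"The quick brown fox jumps over the lazy dog")
print(digest)  # → de7c9b85b8b78aa6bc8a7a36f70a90701c9db4d9
```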
Re: [openssl.org #1672] Resolved: QA bug - unreachable code ./apps/s_server.c with -crl_check
On Sun, 18 May 2008 11:53:35 pm Stephen Henson via RT wrote:
> According to our records, your request has been resolved. If you have
> any further questions or concerns, please respond to this message.

May as well do the documentation too - guess attached.

Looking for other missing undocumented features highlights the following as missing:
-no_comp -hack -status_verbose -status -verify_return_error
from:

grep '\-[^]*) == 0)' s_server.c | cut -f 2 -d '' | xargs -n1 -I {} -t fgrep 'BIO_printf(bio_err, '{}\ s_server.c

On the good side there are no documented unimplemented features:

fgrep 'BIO_printf(bio_err, -' s_server.c | cut -f 2 -d ' ' | xargs -n1 -t -I{} fgrep -- {}\ s_server.c

s_client is missing doco for: -crl_check -crl_check_all -verify_return_error -prexit -timeout -no_comp. Other apps untested.

-- 
Daniel Black -- Proudly a Gentoo Linux User.
Gnu-PG/PGP signed and encrypted email preferred
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x76677097
GPG Signature D934 5397 A84A 6366 9687 9EB2 861A 4ABA 7667 7097

Index: apps/s_server.c
===================================================================
RCS file: /v/openssl/cvs/openssl/apps/s_server.c,v
retrieving revision 1.127
diff -u -b -B -r1.127 s_server.c
--- apps/s_server.c	19 May 2008 06:21:05 -0000	1.127
+++ apps/s_server.c	19 May 2008 07:23:49 -0000
@@ -408,6 +408,11 @@
 	BIO_printf(bio_err," -Verify arg   - turn on peer certificate verification, must have a cert.\n");
 	BIO_printf(bio_err," -cert arg     - certificate file to use\n");
 	BIO_printf(bio_err,"                 (default is %s)\n",TEST_CERT);
+	BIO_printf(bio_err," -crl_check    - check the peer certificate has not been revoked by its CA.\n"
+	           "                 The CRL(s) are appended to the certificate file\n");
+	BIO_printf(bio_err," -crl_check_all - check the peer certificate has not been revoked by its CA\n"
+	           "                 or any other CRL in the CA chain. CRL(s) are appended to the\n"
+	           "                 certificate file.\n");
 	BIO_printf(bio_err," -certform arg - certificate format (PEM or DER) PEM default\n");
 	BIO_printf(bio_err," -key arg      - Private Key file to use, in cert file if\n");
 	BIO_printf(bio_err,"                 not specified (default is %s)\n",TEST_CERT);
Re: valgrind and openssl
On Mon, May 19, 2008 at 6:00 AM, Michael Sierchio [EMAIL PROTECTED] wrote:
> Theodore Tso wrote:
> > ... I'd be comfortable with an adversary knowing the first megabyte of
> > data fed through SHA1, as long as it was followed up by at least 256
> > bits which the adversary *didn't* know.
>
> I'd be comfortable with an adversary knowing the first zettabyte of data
> fed through SHA1, as long as it was followed up by at least 256 bits
> which the adversary didn't know. ;-)

You are being a few orders of magnitude too optimistic here, though ... ;-) A zettabyte would be 2^73 bits (less if you use the standard decimal version of zetta), but SHA-1 will only handle inputs up to 2^64 - 1 bits.

Bodo
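A quick check of the arithmetic, taking the binary interpretation of 1 zettabyte = 2^70 bytes:

```python
# Binary zettabyte: 2^70 bytes = 2^73 bits.
zettabyte_bits = (2 ** 70) * 8
assert zettabyte_bits == 2 ** 73

# The decimal zettabyte (10^21 bytes) is slightly smaller.
assert 10 ** 21 * 8 < zettabyte_bits

# SHA-1's 64-bit length field caps the input at 2^64 - 1 bits,
# far below a zettabyte.
sha1_max_bits = 2 ** 64 - 1
print(zettabyte_bits // sha1_max_bits)  # → 512, i.e. ~2^9 times too big
```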
Re: valgrind and openssl
On Mon, May 19, 2008 at 11:47:07AM +0200, Bodo Moeller wrote:
> You are being a few orders of magnitude too optimistic here, though ...
> ;-) A zettabyte would be 2^73 bits (less if you use the standard decimal
> version of zetta), but SHA-1 will only handle inputs up to 2^64 - 1 bits.

That's true only because the size of the message in bits is appended to the end of the message to prevent the obvious extension attacks. (For people who are clueless about how SHA-1 is implemented: the message is padded out with a 1 followed by enough 0 bits so the message is congruent to 448 mod 512 bits, and then a 64-bit message size count, in big-endian format, is appended to the message, and the message is processed in 64-byte --- 512-bit --- chunks. Yes, that means that if the message is an exact multiple of 64 bytes, and you know the size of the message, the last 64 bytes run through the compression algorithm are known to the adversary; they will be a 1, followed by 447 zero bits, followed by the message size as a 64-bit big-endian integer. Whoop! Whoop! Danger Will Robinson! Time to run in circles and scream about something you know nothing about!!! It must be an NSA conspiracy!)

Seriously, it would be easy to change the padding scheme to accommodate bigger messages, and in fact for a PRNG you don't need to worry about extension attacks, since what you are really depending upon is the crypto hash function's compression function. With Linux's /dev/random, we dispense with message padding completely, since it's not necessary given how we are using the hash's compression function. So it wouldn't be SHA-1, but with some very minor modifications to accommodate a bigger message size count, you could run a zettabyte of known data, followed by 256 bits of unknown data, and the adversary would still be screwed. (And if they aren't, it's time to replace SHA-1.)

- Ted
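The padding rule Ted describes can be written out directly; a minimal sketch (the function name is ours, not OpenSSL's or the kernel's):

```python
def sha1_padding(message_len_bytes: int) -> bytes:
    """Build the SHA-1/MD-style padding for a message of the given byte
    length: a 1 bit (0x80 byte), zero bits until the total length is
    congruent to 448 mod 512 bits, then the message's bit length as a
    64-bit big-endian integer."""
    bit_len = message_len_bytes * 8
    # One 0x80 byte, then k zero bytes so that (len + 1 + k) % 64 == 56.
    k = (56 - (message_len_bytes + 1)) % 64
    return b"\x80" + b"\x00" * k + bit_len.to_bytes(8, "big")

# A message that is an exact multiple of 64 bytes gets a whole extra
# 64-byte block of fully known padding -- exactly the case Ted mentions.
pad = sha1_padding(64)
assert len(pad) == 64
assert pad[0] == 0x80
```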
Re: valgrind and openssl
On Sun, May 18, 2008 at 10:07:03PM -0400, Theodore Tso wrote:
> On Sun, May 18, 2008 at 05:24:51PM -0400, Thor Lancelot Simon wrote:
> > So you're comfortable with the adversary knowing, let's say, 511 of the
> > first 512 bits fed through SHA1?
>
> *Sigh*. Thor, you clearly have no idea how SHA-1 works. In fact, I'd be
> comfortable with an adversary knowing the first megabyte of data fed
> through SHA1, as long as it was followed up by at least 256 bits which
> the adversary *didn't* know.

Thanks for the gratuitous insult. I'd be perfectly happy with the case you'd be happy with, too, but you took my one bit and turned it into 256.

What I _wouldn't_ be happy with is a PRNG which has been fed only known data, but enough of it at startup that it agrees to provide output to the user. There are a terrible lot of these around, and pretending that stack contents are random is a great way to accidentally build them. Not feeding in data which you have a pretty darned good idea will be predictable -- potentially as the first bits in at RNG startup -- is, to my mind, one thing one should do to avoid the problem.

Thor
Re: valgrind and openssl
On Mon, May 19, 2008 at 6:30 PM, Thor Lancelot Simon [EMAIL PROTECTED] wrote:
> On Sun, May 18, 2008 at 10:07:03PM -0400, Theodore Tso wrote:
> > On Sun, May 18, 2008 at 05:24:51PM -0400, Thor Lancelot Simon wrote:
> > > So you're comfortable with the adversary knowing, let's say, 511 of
> > > the first 512 bits fed through SHA1?
> >
> > *Sigh*. Thor, you clearly have no idea how SHA-1 works. In fact, I'd
> > be comfortable with an adversary knowing the first megabyte of data
> > fed through SHA1, as long as it was followed up by at least 256 bits
> > which the adversary *didn't* know.
>
> Thanks for the gratuitous insult. I'd be perfectly happy with the case
> you'd be happy with, too, but you took my one bit and turned it into 256.
>
> What I _wouldn't_ be happy with is a PRNG which has been fed only known
> data, but enough of it at startup that it agrees to provide output to
> the user. There are a terrible lot of these around, and pretending that
> stack contents are random is a great way to accidentally build them.

No-one pretends that stack contents are random. The OpenSSL PRNG tries to keep a tally of how much entropy has been added from external sources. It won't generate any output for key generation and such until it is happy about this amount of entropy. Those stack contents are taken into account with an entropy estimate of 0.0, i.e., not at all. Thus, after feeding those 511 known bits to the OpenSSL PRNG [*], it would still expect just as much additional seeding as before. Your failure scenario has nothing to do with the way this PRNG operates.

[*] Actually the PRNG won't take fractions of bytes, so make that 512 bits, or 504.
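The behaviour Bodo describes -- mixing data in while crediting it with zero entropy, and refusing output until the entropy tally crosses a threshold -- can be sketched like this (a simplified toy model, not the actual OpenSSL code; all names are ours):

```python
import hashlib

class SketchPRNG:
    """Toy entropy-counting PRNG: any data may be mixed into the state,
    but only the caller's entropy estimate counts toward the threshold
    that gates output."""

    THRESHOLD_BITS = 256

    def __init__(self):
        self._state = b""
        self._entropy_bits = 0.0

    def add(self, data: bytes, entropy_estimate_bits: float) -> None:
        # Mixing is always allowed; uninitialized-buffer contents would
        # be added with entropy_estimate_bits = 0.0 (credited not at all).
        self._state = hashlib.sha256(self._state + data).digest()
        self._entropy_bits += entropy_estimate_bits

    def bytes(self, n: int) -> bytes:
        if self._entropy_bits < self.THRESHOLD_BITS:
            raise RuntimeError("PRNG not seeded")
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self._state + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

prng = SketchPRNG()
prng.add(b"attacker-known stack garbage", 0.0)  # credited with nothing
try:
    prng.bytes(16)
except RuntimeError:
    print("still unseeded")                     # known data didn't help
prng.add(b"bytes from a real entropy source", 256.0)
print(len(prng.bytes(16)))                      # → 16, now produces output
```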
Re: valgrind and openssl
On Mon, May 19, 2008 at 12:30:42PM -0400, Thor Lancelot Simon wrote:
> Thanks for the gratuitous insult. I'd be perfectly happy with the case
> you'd be happy with, too, but you took my one bit and turned it into 256.

But your example is NOT what openssl does. I recently had a similar issue with Linux's /dev/random, where folks were similarly confused. You see, the issue is that SHA-1 takes its input in 512-bit chunks of data. If you are only mixing in 256 bits of randomness, the question is what to do with the other 256 bits. You could waste CPU time zeroing out those bits, or you can just shrug your shoulders and say, "I don't care." The problem is people who don't understand say "OMG!!! Ur using unitializated data!", not getting the fact that the whole point was to mix in the 256 bits of entropy. The other uninitialized bits might add more information unknown to the adversary (in which case it helps), or they might not (in which case it doesn't matter, because what you're really depending on is the 256 bits of entropy from a good entropy source).

Of course, you could use the argument that it's a bad idea because a clueless Debian developer might comment out the call entirely due to some misguided notion that using uninitialized data was somehow a security problem. Until last week, I would have thought that was a silly argument, but apparently there are people that clueless/stupid/careless out there.

- Ted
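Ted's point -- that adversary-known filler mixed in alongside real entropy cannot hurt -- can be illustrated with an ordinary hash (a toy demonstration, not the OpenSSL mixing code; the values are made up):

```python
import hashlib

# A 512-bit input block: 256 bits of real entropy plus 256 bits of
# filler the adversary may know completely.
secret_a = bytes(range(32))       # stands in for 256 unknown bits
secret_b = bytes(range(1, 33))    # a different unknown value
known_filler = b"\x00" * 32       # fully adversary-known filler

digest_a = hashlib.sha1(secret_a + known_filler).digest()
digest_b = hashlib.sha1(secret_b + known_filler).digest()

# Knowing the filler tells the adversary nothing useful: the outputs
# still depend entirely on the unknown half of the block.
assert digest_a != digest_b
```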
Re: valgrind and openssl
On Thu, 15 May 2008, Geoff Thorpe wrote:
> I forgot to mention something;
>
> On Thursday 15 May 2008 12:38:24 John Parker wrote:
> > > It is already possible to use openssl and valgrind - just build
> > > OpenSSL with -DPURIFY, and it is quite clean.
> >
> > Actually on my system, just -DPURIFY doesn't satisfy valgrind. What I'm
> > asking for is something that both satisfies valgrind and doesn't reduce
> > the keyspace.
>
> If you're using an up-to-date version of openssl when you see this (ie. a
> recent CVS snapshot from our website, even if it's from a stable branch
> for compatibility reasons), then please post details. -DPURIFY exists to
> facilitate debuggers that don't like reading uninitialised data, so if
> that's not the case then please provide details. Note however that there
> are a variety of gotchas that allow you to create little leaks if you're
> not careful, and valgrind could well be complaining about those instead.
>
> Note that you should always build with no-asm if you're doing this kind
> of debug analysis. The assembly optimisations are likely to operate at
> granularities and in ways that valgrind could easily complain about. I
> don't know that this is the case, but it would certainly make sense to
> compare before posting a bug report.

you know, this is sheer stupidity. you're suggesting that testing the no-asm code is a valid way of testing the assembly code?

additionally, the suggestion of -DPURIFY as a way of testing the code is also completely broken software engineering practice. any special-case changes for testing mean you're not testing the REAL CODE. for example, if you build -DPURIFY then you also won't get notified of problems with other PRNG seeds which are supposed to be providing random *initialized* data. not to mention that a system compiled that way is insecure -- so you either have to link your binaries static (to avoid the danger of an insecure shared lib), or set up a chroot for testing. in any event YOU'RE NOT TESTING THE REAL CODE. which is to say you're wasting your time if you test under any of these conditions.

openssl should not be relying on uninitialized data for anything. even if it doesn't matter from the point of view of the PRNG, it should be pretty damn clear it's horrible software engineering practice.

-dean
Re: valgrind and openssl
dean gaudet wrote:
> On Thu, 15 May 2008, Geoff Thorpe wrote:
> > I forgot to mention something;
> >
> > If you're using an up-to-date version of openssl when you see this (ie.
> > a recent CVS snapshot from our website, even if it's from a stable
> > branch for compatibility reasons), then please post details. -DPURIFY
> > exists to facilitate debuggers that don't like reading uninitialised
> > data, so if that's not the case then please provide details. Note
> > however that there are a variety of gotchas that allow you to create
> > little leaks if you're not careful, and valgrind could well be
> > complaining about those instead.
> >
> > Note that you should always build with no-asm if you're doing this
> > kind of debug analysis. The assembly optimisations are likely to
> > operate at granularities and in ways that valgrind could easily
> > complain about. I don't know that this is the case, but it would
> > certainly make sense to compare before posting a bug report.
>
> you know, this is sheer stupidity. you're suggesting that testing the
> no-asm code is a valid way of testing the assembly code?
>
> additionally the suggestion of -DPURIFY as a way of testing the code is
> also completely broken software engineering practice. any special case
> changes for testing means you're not testing the REAL CODE.

Where is the problem with it? When you add debugging print statements to find an error, you are also not testing the real code; but when you have found the error, you can remove the print statements and are back to the real code. With -DPURIFY you can do the same.

> for example if you build -DPURIFY then you also won't get notified of
> problems with other PRNG seeds which are supposed to be providing random
> *initialized* data. not to mention that a system compiled that way is
> insecure -- so you either have to link your binaries static (to avoid
> the danger of an insecure shared lib), or set up a chroot for testing.

Why is a system built with -DPURIFY insecure? I presume you don't understand what -DPURIFY does.

> in any event YOU'RE NOT TESTING THE REAL CODE. which is to say you're
> wasting your time if you test under any of these conditions. openssl
> should not be relying on uninitialized data for anything.

OpenSSL doesn't rely on uninitialized data!

> even if it doesn't matter from the point of view of the PRNG, it should
> be pretty damn clear it's horrible software engineering practice.

It's not at all clear that putting uninitialized data into a PRNG is bad software engineering practice; there may be circumstances where it is the last resort (imagine e.g. that the Debian developer had killed /dev/random instead).

Ciao,
Richard
--
Dr. Richard W. Könning
Fujitsu Siemens Computers GmbH
Make ssleay_rand_bytes more deterministic
Hi,

This is not a joke. Please clean up ssleay_rand_bytes:

- do not mix the PID into the internal entropy pool, and
- do not mix bits of the given output buffer into the internal entropy pool.

This will help detect weaknesses in the RNG itself as well as in software that depends on this RNG. It will further help in writing test cases to improve the quality of client software.

Note that the second improvement may totally break already-broken client software. So please do note this in the changelog.

Regards
R.
Re: Make ssleay_rand_bytes more deterministic
On Mon, May 19, 2008 at 11:57 PM, Richard Stoughton [EMAIL PROTECTED] wrote:
> - do not mix the PID into the internal entropy pool, and

The OpenSSL PRNG uses the PID twice. Once, it is used as part of the initial seeding on Unix machines, to get some data that might provide a little actual entropy. This part wasn't functional in the Debian version, because the content of each and every seed byte was ignored.

But then the PRNG also mixes the PID into the output (via a hash). This is why the PID did influence the output bytes on Debian. The point in using the PID here is *not* to collect entropy. Rather, it is to ensure that after a fork() both processes will see different random numbers. Without this feature, many typical Unix-style server programs would be utterly broken.

> - do not mix bits of the given output buffer into the internal entropy
>   pool.
>
> Note that the second improvement may totally break already broken client
> software.

Why would it?
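The fork() issue can be made concrete: if each output block is hashed together with the PID, parent and child diverge even though they inherit identical PRNG state (a toy model, not the OpenSSL code; names are ours):

```python
import hashlib

def prng_output(state: bytes, pid: int, counter: int) -> bytes:
    """Toy model of mixing the PID into each output block so that two
    processes sharing post-fork state still produce different streams."""
    block = state + pid.to_bytes(4, "big") + counter.to_bytes(4, "big")
    return hashlib.sha1(block).digest()

shared_state = b"state inherited across fork()"

# Hypothetical parent and child PIDs after a fork():
parent = prng_output(shared_state, pid=1000, counter=0)
child = prng_output(shared_state, pid=1001, counter=0)

# Same state, different PIDs, different output. Without the PID in the
# mix, both processes would emit identical bytes.
assert parent != child
```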
RE: valgrind and openssl
> any special case changes for testing means you're not testing the REAL
> CODE.

You mean you're not testing *all* of the real code. That's fine; you can't debug everything at once.

> for example if you build -DPURIFY then you also won't get notified of
> problems with other PRNG seeds which are supposed to be providing random
> *initialized* data. not to mention that a system compiled that way is
> insecure -- so you either have to link your binaries static (to avoid
> the danger of an insecure shared lib), or set up a chroot for testing.

Right, but you know that. So you don't build with -DPURIFY if you care about things that it affects. But sometimes you care about other things.

> in any event YOU'RE NOT TESTING THE REAL CODE. which is to say you're
> wasting your time if you test under any of these conditions.

You seem to think that code is one monolithic thing that doesn't consist of component parts. In fact, code does consist of component parts, and the code you're actually testing may be a different component from the one you change the compilation flags on.

> openssl should not be relying on uninitialized data for anything. even
> if it doesn't matter from the point of view of the PRNG, it should be
> pretty damn clear it's horrible software engineering practice.

No, it's not pretty damn clear. The only reason it might be horrible is that it makes the code less predictable. But in this case, predictability is explicitly undesired. Perhaps you can make a coherent argument why it's bad in this particular case, but I doubt it. This is the opposite of the typical case.

> -dean

Good luck finding people who agree with you. I've been a professional software developer for about 18 years and I've worked on debugging with hundreds of other developers. I have *never* met anyone who shared your view. In fact, it strikes me as sheer craziness. It is akin to saying that debuggers should not exist. After all, the release program won't run with a debugger, so how can you debug with one?

Clearly, every difference between the test environment and the use environment is a trade-off. But being a competent engineer is about making rational trade-offs. I could go into more detail with real-world examples of how following your advice above would have turned very simple efforts into Herculean ones, but what you're saying is so obviously absurd, I can't see how it could possibly be worth the effort.

DS
RE: valgrind and openssl
> What I _wouldn't_ be happy with is a PRNG which has been fed only known
> data, but enough of it at startup that it agrees to provide output to
> the user. There are a terrible lot of these around, and pretending that
> stack contents are random is a great way to accidentally build them.

Fortunately, OpenSSL does not pretend the stack contents are random.

People sometimes accidentally shoot themselves because they handle a gun as if it were unloaded when it's loaded. But it doesn't make sense to ensure the gun is loaded just to ensure you handle it as if it were loaded. However, this is essentially your argument -- don't do something that can only help, because if you relied on just that you might be in trouble. Sorry, I reject that argument.

> Not feeding in data which you have a pretty darned good idea will be
> predictable -- potentially as the first bits in at RNG startup -- is to
> my mind one thing one should do to avoid the problem.
>
> Thor

You are honestly arguing that the best way to avoid handling a gun as if it were unloaded is to ensure it is loaded before handling it. Isn't that clearly crazy? Why avoid something that can only help you? That you might get in trouble if you rely on it is an argument not to rely on it, not an argument not to do it.

DS
Re: valgrind and openssl
On Thu, 15 May 2008, Bodo Moeller wrote:
> On Thu, May 15, 2008 at 11:41 PM, Erik de Castro Lopo [EMAIL PROTECTED] wrote:
> > Goetz Babin-Ebell wrote:
> > > But here the use of this uninitialized data is intentional and the
> > > programmers are very well aware of what they did.
> >
> > The use of uninitialized data in this case is stupid because the
> > entropy of this random data is close to zero.
>
> It may be zero, but it may be more, depending on what happened earlier in
> the program if the same memory locations have been in use before. This
> may very well include data that would be unpredictable to adversaries --
> i.e., entropy; that's the point here.

on the other hand it may be a known plaintext attack.

what are you guys smoking?

-dean
RE: valgrind and openssl
On Mon, 19 May 2008, David Schwartz wrote:
> > any special case changes for testing means you're not testing the REAL
> > CODE.
>
> You mean you're not testing *all* of the real code. That's fine, you
> can't debug everything at once.

if you haven't tested your final production binary then you haven't tested anything at all.

> Good luck finding people who agree with you. I've been a professional
> software developer for about 18 years and I've worked on debugging with

i've been a professional for longer than you. big whoop.

-dean
Re: valgrind and openssl
> > The problems occur on Red Hat 5.1 server x86_64. For what it's worth,
> > I don't get errors on (updated :) Ubuntu 7.10. I do get errors even
> > with Bodo's addition to randfile.c. I'd be happy to post the valgrind
> > output if that would be helpful.
>
> If this is environment/OS-specific, then it's probably indicative of a
> libc (or valgrind, or gcc) issue with the fixed-in-the-past versions of
> those packages on RH5.1-64bit. If compiling and running the same openssl
> source on a different system doesn't give you problems, that would seem
> to imply that the problem appears and disappears based on elements that
> vary rather than elements that are invariant. :-)

It's environment dependent. I'm not able to reproduce the valgrind errors even on other 64-bit RH5.1 machines. As far as I'm concerned, openssl valgrinds clean with ./config no-asm -DPURIFY: my goals are achieved.

-JP
Re: valgrind and openssl
On Mon, May 19, 2008 at 10:48 PM, dean gaudet [EMAIL PROTECTED] wrote:
> On Thu, 15 May 2008, Bodo Moeller wrote:
> > On Thu, May 15, 2008 at 11:41 PM, Erik de Castro Lopo [EMAIL PROTECTED] wrote:
> > > Goetz Babin-Ebell wrote:
> > > > But here the use of this uninitialized data is intentional and the
> > > > programmers are very well aware of what they did.
> > >
> > > The use of uninitialized data in this case is stupid because the
> > > entropy of this random data is close to zero.
> >
> > It may be zero, but it may be more, depending on what happened earlier
> > in the program if the same memory locations have been in use before.
> > This may very well include data that would be unpredictable to
> > adversaries -- i.e., entropy; that's the point here.
>
> on the other hand it may be a known plaintext attack.
>
> what are you guys smoking?

OK, so I'll seed my random number generator with a bunch of bits you don't know, then you give me however much known plaintext you want, and I'll update the state with that too. Then, I'll start generating random numbers. If you can guess them, you win! Right?

Essentially what you're claiming is that you can predict the output of SHA-1 when you know part of the input, but not all of the input. Please explain how!

-JP