Re: Couldn't obtain random bytes in sshd - problem in RAND_poll?

2008-08-10 Thread Theodore Tso
On Sun, Aug 10, 2008 at 07:28:30PM -0700, David Schwartz wrote:
>   I didn't say you are vulnerable to a MITM attack that compromises the
> endpoint. I said that if the endpoint is compromised, you are vulnerable to
> MITM attacks. The attacker need not compromise the endpoint himself. He may
> discover that a poorly-designed endpoint (even though it implement SSL
> perfectly) is in fact compromised.

At this point, you've just spent reams and reams of electrons stating
the obvious.  If the endpoint is compromised, no protocol is going to
help you.  This is true regardless of whether you are talking about
SSLv3, or Kerberos (if I have a copy of your server's keytab file, I
can forge arbitrary tickets), or IPSec (for any public key system, if
I can insert an untrustworthy CA certificate, it's all over), and so
on.

This is about as much of a tautology as shouting from the rooftops
that "the sky is blue" or "2+2=4".  If you find this to be an insight
worthy of note, it says much more about *you* than about the protocol
or anyone on this list...

As the old saying goes, "better to remain silent and be thought a
fool than to speak and remove all doubt."

- Ted
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: Couldn't obtain random bytes in sshd - problem in RAND_poll?

2008-08-07 Thread Theodore Tso
On Thu, Aug 07, 2008 at 02:13:27AM -0700, David Schwartz wrote:
> If so, this doesn't say that /dev/urandom never blocks. It just says that it
> will not block waiting for more entropy. In fact, this paragraph is horribly
> misleading, because it suggests that the worst thing /dev/urandom can do is
> return random values with a theoretical vulnerability. Alas, this is not
> true. The /dev/urandom interface will happily return entirely predictable
> values if the pool was never seeded.

In practice, most Linux distributions (and certainly all of the major
ones) extract content from /dev/urandom and save it across boot
sessions.  So once the system becomes seeded, it stays seeded.
Yes, if you assume that an attacker can remove the hard drive and
access the urandom seed file across reboots, you're in trouble -- but
the attacker can also replace the kernel with one that contains a
keyboard logger, so you'd be even more f*cked.

The question, then, is how to make sure the system is correctly
seeded initially.  This fundamentally depends on the distribution
installer.  If there is sufficient keyboard/mouse activity while the
system is booted, and the distribution saves the result to the
urandom seed file, then the system can be seeded from its first boot.
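
The save/restore cycle described above is normally a pair of init-script
steps.  Here is a toy Python sketch of the idea (the seed path and byte
count are illustrative, and a real init script would write the stashed
bytes back into /dev/urandom rather than merely reading them):

```python
import os
import tempfile

# Illustrative seed-file location; real distros use paths such as
# /var/lib/urandom/random-seed (it varies by distribution).
seed_path = os.path.join(tempfile.mkdtemp(), "random-seed")

def save_seed(path, nbytes=512):
    # At shutdown: extract bytes from the pool and stash them on disk,
    # so the next boot does not start from a predictable state.
    with open(path, "wb") as f:
        f.write(os.urandom(nbytes))

def restore_seed(path):
    # At boot: feed the stashed bytes back in.  A real init script
    # writes them to /dev/urandom; for this sketch we just read them.
    with open(path, "rb") as f:
        return f.read()

save_seed(seed_path)
seed = restore_seed(seed_path)
assert len(seed) == 512
```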

However, if you are worried about systems where you're booting off of
a CD, and/or where the system is getting installed without any human
intervention, then yes, there is need to be concerned.  The problem
gets worse if you are booting off of flash, where there is no entropy
to be gained from timing any kind of spinning storage.  At some level,
security systems really need to be designed from a system
perspective, and so there is a question of how much of this can or
should be solved inside the kernel or inside the openssl library.

The argument for doing something about the initial seeding problem is
really about making the system more robust against clueless
application programmers/system engineers trying to put together
security solutions.  I can think of some potential solutions; for
example, we could set a flag once root has written at least 256 bytes
to /dev/random (or /dev/urandom; they share the same input pool), OR
once the input pool has gained at least 64 bytes of entropy credits.
And then we could have an ioctl which blocks until that flag is set.
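
As a toy model of that proposal (the class and method names here are
made up for illustration; this is not a real kernel interface), the
flag logic might look like:

```python
class InputPool:
    """Toy model of the proposed readiness flag: it is raised once root
    has written at least 256 bytes into the pool, OR once at least 64
    bytes' worth of entropy credits have accumulated."""

    def __init__(self):
        self.root_bytes = 0      # bytes written by root
        self.entropy_bytes = 0   # credited entropy, in bytes

    @property
    def seeded(self):
        return self.root_bytes >= 256 or self.entropy_bytes >= 64

    def write_from_root(self, data):
        self.root_bytes += len(data)

    def credit_entropy(self, nbytes):
        self.entropy_bytes += nbytes

pool = InputPool()
assert not pool.seeded
pool.credit_entropy(64)      # 512 bits of credited entropy
assert pool.seeded           # the hypothetical ioctl would now unblock
```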

- Ted


Re: Please help: very urgent: Query on patented algorithms

2008-06-18 Thread Theodore Tso
On Wed, Jun 18, 2008 at 01:44:42PM +0530, bagavathy raj wrote:
> 
> I have openssl dlls(i.e.libeay32.dll, ssleay32.dll). I need to know if these
> libraries are using any of the patented algorithms like IDEA, RC4, RC5,MDC2
> etc.Dependency walker helped me but I want to know if there is any other
> way.
> 
> Is there any binary distribution where I can find SSL dlls without patented
> algorithms like IDEA,MCD2,RC4,RC5 etc. I tried compiling
> without them. I could exclude other algos but not RC4. Some linking issues
> are there.
> So I need to know if there is any ssl release without the patented
> algorithms.

Please note, for MDC-2 and RC4:

   http://en.wikipedia.org/wiki/MDC-2#Patent_issues

   http://www.mail-archive.com/[EMAIL PROTECTED]/msg53265.html

Note also that usually it's not enough just to say "no patents"; do
you care about technology that is patented but where the patent has
been made freely available (as was the case with DES), or code
licensed under a specific license (as with the RCU patent in the
kernel), or patents that only apply when the code is used to
implement a certain protocol (as with various patent pledges out
there)?  It's rare that someone really means "no patents
whatsoever."

- Ted


Re: valgrind and openssl

2008-05-21 Thread Theodore Tso
On Tue, May 20, 2008 at 10:43:27PM -0700, dean gaudet wrote:
> the so-called "uninitialized" data is actually from the stack right?  an 
> attacker generally controls that (i.e. earlier use of the stack probably 
> includes char buf[] which is controllable).  i don't know what ordering 
> the entropy is added to the PRNG, but if all the useful entropy goes in 
> first then an attacker might get to control the last 1KiB passed through 
> the SHA1.
> 
> yes it's unlikely given what we know today that an attacker could 
> manipulate the state down to a sufficiently small number of outputs, but i 
> really don't see the point of letting an attacker have that sort of 
> control.

If this were true, then all digital signatures and certificates that
use SHA-1 would have to be discarded.  The PRNG would be the least of
your problems.  Consider that if I were to digitally sign this reply,
I would be including in this message text I didn't write (namely, the
text to which you are replying).  Or consider an attacker who gets to
"control" network packets which are sent out via integrity-protected
IPSEC connections.  Crypto checksums have to be able to deal with
this sort of thing, and no, they're not affected.

Controlling the last megabyte of data passed through SHA-1 wouldn't
matter; if you could meaningfully manipulate the output that way, you
could induce hash collisions, and SHA-1 would be totally broken as a
crypto checksum.

- Ted


Re: valgrind and openssl

2008-05-19 Thread Theodore Tso
On Mon, May 19, 2008 at 12:30:42PM -0400, Thor Lancelot Simon wrote:
> Thanks for the gratuitous insult.  I'd be perfectly happy with the case
> you'd be happy with, too, but you took my one bit and turned it into 256.

But your example is NOT what openssl does.

I recently had a similar issue with Linux's /dev/random, where folks
were similarly confused.  You see, the issue is that SHA-1 takes its
input in 512-bit chunks.  If you are only mixing in 256 bits of
randomness, the question is what to do with the other 256 bits.  You
could waste CPU time zeroing out those bits, or you can just shrug
your shoulders and say, "I don't care".  The problem is people who
don't understand say "OMG!!! Ur using unitializated data!", not
getting the fact that the whole point was to mix in the 256 bits of
entropy.  The other uninitialized bits might add more information
unknown to the adversary (in which case it helps), or it might not
(in which case it doesn't matter, because what you're really
depending on is the 256 bits of entropy from a good entropy source).
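
To see why the filler half of the block is harmless, here is a toy
mixing step in Python (plain SHA-1 over state plus input block; this is
an illustration, not the actual kernel or OpenSSL mixing code):

```python
import hashlib

def mix(pool_state, block):
    # One toy mixing step: hash the current pool state together with a
    # fresh 512-bit input block, half entropy and half "don't care".
    return hashlib.sha1(pool_state + block).digest()

entropy = bytes(range(32))    # stand-in for 256 bits from a real source
filler = b"\xcc" * 32         # stand-in for the uninitialized half

state1 = mix(b"\x00" * 20, entropy + filler)

# Knowing the filler exactly buys the adversary nothing: flip a single
# bit of the entropy and the mixed state changes completely.
entropy2 = bytes([entropy[0] ^ 1]) + entropy[1:]
state2 = mix(b"\x00" * 20, entropy2 + filler)
assert state1 != state2
```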

Of course, you could make the argument that it's a bad idea because a
clueless Debian developer might comment out the call entirely due to
some misguided notion that using uninitialized data was somehow a
security problem.  Until last week, I would have thought that was a
silly argument, but apparently there are people that
clueless/stupid/careless out there...

- Ted



Re: valgrind and openssl

2008-05-19 Thread Theodore Tso
On Mon, May 19, 2008 at 11:47:07AM +0200, Bodo Moeller wrote:
> You are being a few orders of magnitude too optimistic here, though
> ... ;-)  A zettabyte would be 2^78 bits (less if you use the standard
> decimal version of "zetta"), but SHA-1 will only handle inputs up to
> 2^64 -1 bits.

That's true only because the size of the message in bits is appended
to the end of the message, to prevent the obvious extension attacks.

(For people who are clueless about how SHA-1 is implemented: the
message is padded with a 1 bit, followed by enough 0 bits so the
length is congruent to 448 mod 512 bits, and then a 64-bit message
size count, in big-endian format, is appended to the message, and the
message is processed in 64-byte --- 512-bit --- chunks.  Yes, that
means that if the message is an exact multiple of 64 bytes, and you
know the size of the message, the last 64-byte block run through the
compression function is known to the adversary; it will be a 1,
followed by 447 zero bits, followed by the message size as a 64-bit
big-endian integer.  Whoop!  Whoop!  Danger Will Robinson!  Time to
run in circles and scream about something you know nothing about!!!
"It must be a NSA conspiracy!")

Seriously, it would be easy to change the padding scheme to
accommodate bigger messages, and in fact for a PRNG you don't need to
worry about extension attacks, since what you are really depending
upon is the crypto hash function's compression function.  With
Linux's /dev/random, we dispense with message padding completely,
since it's not necessary given how we are using the hash's
compression function.

So it wouldn't be SHA-1, but with some very minor modifications to
accommodate a bigger message size count, you could run a zettabyte of
known data, followed by 256 bits of *unknown* data, and the adversary
would still be screwed.  (And if they aren't, it's time to replace
SHA-1.)

- Ted


Re: valgrind and openssl

2008-05-18 Thread Theodore Tso
On Sun, May 18, 2008 at 05:24:51PM -0400, Thor Lancelot Simon wrote:
> So you're comfortable with the adversary knowing, let's say, 511 of
> the first 512 bits fed through SHA1?

*Sigh*.  

Thor, you clearly have no idea how SHA-1 works.  In fact, I'd be
comfortable with an adversary knowing the first megabyte of data fed
through SHA1, as long as it was followed up by at least 256 bits which
the adversary *didn't* know.

Look, SHA-1 works by having a Very Complicated Mixing function that
takes an internal state and mixes in the input in a one-way fashion,
in chunks of 64 bytes at a time.  The initial state looks like this:

67452301 efcdab89 98badcfe 10325476 c3d2e1f0

It doesn't look very random, but that's OK.  You have to start
*somewhere* --- and it's a public value.  If you mix in a megabyte of
known data, it is the equivalent of changing the initial state to
something else.  Effectively, it's another public starting value.  

As long as you follow up the megabyte of known data with 256 bits of
unknown data, you could feed in another megabyte of known data, and
the adversary would have no idea what the internal state of the SHA-1
hash function looks like.  If this were not true, SHA-1's mixing
function would be so thoroughly broken that all use of SHA-1 for
digital signatures, certificates, etc., would be totally broken.

So if you don't trust SHA-1 for use in PRNG's, then you shouldn't
trust SHA-1 for *anything*.
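
That claim is easy to demonstrate with Python's hashlib (the values
here are illustrative; any known prefix and any pair of unknown
256-bit suffixes will do):

```python
import hashlib

known = b"\x00" * (1024 * 1024)   # a megabyte the adversary knows completely

h = hashlib.sha1()
h.update(known)
# The internal state at this point is a fixed, public value --
# effectively just another public starting IV.  Security rests
# entirely on the unknown bits that come next.
checkpoint = h.copy()

secret_a = b"A" * 32              # two candidate 256-bit unknowns
secret_b = b"B" * 32

ha = checkpoint.copy(); ha.update(secret_a)
hb = checkpoint.copy(); hb.update(secret_b)
assert ha.hexdigest() != hb.hexdigest()
```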

- Ted


Re: valgrind and openssl

2008-05-15 Thread Theodore Tso
On Thu, May 15, 2008 at 11:09:46AM -0500, John Parker wrote:
> > change -DPURIFY to -DNO_UNINIT_DATA or something else which has a clearer
> > intention, so that debug packages (or even base packages that want to be
> > valgrind-friendly) have a straightforward mechanism to apply. Well, a
> > straightforward mechanism that doesn't kill the PRNG outright, I mean
> > (otherwise there is already a highly-publicised patch we could apply...)
> 
> What I was hoping for was a -DNO_UNINIT_DATA that wouldn't be the
> default, but wouldn't reduce the keyspace either.

-DPURIFY *does* do what you want.  It doesn't reduce the keyspace.

The problem was that what Debian did went far beyond -DPURIFY.  The
Debian developer in question disabled one call that used uninitialized
memory, but then later on, removed another similar call that looked
the same, but in fact *was* using initialized data --- said
initialized data being real randomness critically necessary for
security.

- Ted


Re: OpenSSL and LSB

2008-03-17 Thread Theodore Tso
On Sun, Mar 16, 2008 at 09:21:05PM -0700, Michael Sierchio wrote:
> It is *so* difficult to critique something without seeming to
> criticize the work of others, so the following disclaimer applies.
> MUCH is owed to the developers and maintainers of OpenSSL --
> Mark, Ralf, Stephen, Ben, Lutz, Nils, Richard, Bodo, Ulf, Andy,
> Geoff -- and a host of others.  OpenSSL is ubiquitous, thanks in
> large part to them.  108 bows to each of you.

Indeed; for myself, I was trying very hard *not* to critique, or at
least voice any value judgements, but instead to describe the current
status of the codebase.  Having once been a steward of such a
codebase (MIT Kerberos V5, from circa 1993 to 1999), I know how much
work it is to maintain backwards compatibility, even at an API level,
while trying to make the code more maintainable going forward.  We
were also under-resourced for most of my time there --- sound
familiar?  :-)

> If one were really serious, it calls for a rewrite -- one that replaces
> the dreadful BIO-stuff, develops a strictly modular separation of crypto
> libraries (which are used so many places other than for SSL/TLS), etc.
> and is written in C++.

Well, there *are* other SSL libraries out there, and in fact some
people have suggested that the LSB standardize one of the alternate
libraries.  But licensing issues aside, the fact of the matter is
that, for better or for worse, the vast majority of the applications
out there are using OpenSSL; it *is* the de facto standard.  A
rewrite wouldn't help all of the existing applications which are
using the current API.  And where are we going to find the resources
to do a rewrite?

I have talked to a developer from one ISV (which I won't name pending
his permission) who said the ABI situation has gotten painful enough
that he was thinking about creating a wrapper library that would
provide a stable ABI, and pushing the distros to ship it so that his
(fairly widely downloaded and used) binary application could link
against a stable library that he knew was guaranteed to work across
multiple Linux distros.  So this is not just a theoretical problem,
or something which is LSB-specific; it is something that is affecting
OpenSSL client applications.

So the question is what's the lowest cost method, which is still
maintainable in the long run, for providing this compatibility?

I would suggest that the best way to do this is to *add* new mutator
functions (and accessor functions, where necessary) which
applications that care about ABI stability can use, and then document
a set of interfaces for which ABI stability is guaranteed.  That
could be a relatively small set initially --- enough for applications
that aren't doing anything strange --- and the list could gradually
be expanded, with new ABI-stable functions getting added as
necessary.

Does that make sense?

- Ted