Re: a record?

2005-11-18 Thread Eric Rescorla

Matthew Sullivan <[EMAIL PROTECTED]> writes:

> John Levine wrote:
>
>>>>Moving sshd from port 22 to port 137, 138 or 139. Nasty eh?

>>>don't do that! Lots of (access) isps around the world (esp here in
>>>Europe) block those ports
>>>
>>
>>If you're going to move sshd somewhere else, port 443 is a fine
>>choice.  Rarely blocked, rarely probed by ssh kiddies.  It's probed
>>all the time by malicious web spiders, but since you're not a web
>>server, you don't care.
>>
>
> Except if you're running a version of OpenSSL that has a
> vulnerability, you could be inviting trouble - particularly with
> kiddies scanning for Apache with vulnerable versions of OpenSSL
> attached by way of mod_ssl etc...

It's worth noting that while OpenSSH uses OpenSSL for crypto, most of
the recent vulnerabilities in OpenSSL do not extend to OpenSSH,
because they're in the SSL state machine, not the crypto.

-Ekr


Re: Cisco IOS Exploit Cover Up

2005-07-28 Thread Eric Rescorla

James Baldwin <[EMAIL PROTECTED]> writes:

> On Jul 28, 2005, at 3:29 AM, Neil J. McRae wrote:
>
>
>> I couldn't disagree more. Cisco are trying to control the
>> situation as best they can so that they can deploy the needed
>> fixes before the $scriptkiddies start having their fun. It's
>> no different to how any other vendor handles an exploit, and
>> I'm surprised to see network operators having such an attitude.
>>
>
> That's part of the issue: this wasn't an exploit in the sense of
> something a $scriptkiddie could exploit. The sheer technical
> requirements of the exploit itself ensure that it will only be
> reproduced by a small number of people across the globe. There was no
> source or proof of concept code released and duplicating the
> information would only provide you a method to increase the severity
> of other potential exploits. It does not create any new exploits.
> Moreover, the fix for this was already released, and you have not been
> able to download a vulnerable version of the software for months;
> however, there was no indication from Cisco regarding the severity of
> the required upgrade. That is to say, they knew in April that
> arbitrary code execution was possible on routers, they had it fixed
> by May, and we're only hearing about it now; if Cisco had its way, we
> might still not be hearing about it.

Can you or someone else who was there or has some details describe
what the actual result is and what the fix was? Based on what I've
been reading, it sounds like Lynn's result was a method for exploiting
arbitrary new vulnerabilities. Are you saying that this method can't
be used in future IOS revs? 

Thanks,
-Ekr

[Eric Rescorla  RTFM, Inc.]


Re: OMB: IPv6 by June 2008

2005-07-07 Thread Eric Rescorla

I don't want to get into an SSL vs. IPsec argument, but...

David Conrad <[EMAIL PROTECTED]> writes:
>> Compare with SSL (works out-of-the-box in 99.999% of cases,
>> and allows both full, hard security with root certificates etc., and
>> simple security based on _ok, I trust you the first time, then we can
>> work_.
>
> a) I suspect most SSL implementations derive out of the same code base.

I'd be surprised if this is correct. The three major SSL/TLS 
implementations by deployment are:

1. OpenSSL (used in Apache2, ApacheSSL, and mod_ssl)
2. Microsoft (used in IE and IIS)
3. Firefox/Mozilla (based on Netscape's NSS).

These are all genetically distinct. In addition, there are at least
three independent Java implementations (JSSE, PureTLS, SSLava).
Also, Terisa Systems (now Spyrus) independently implemented
SSLv3 (though our v2 stack had some of Netscape's SSLref stack),
and I believe that Consensus Development did so as well.

-Ekr


Re: OT - 3 Free Gmail invites

2004-08-19 Thread Eric Rescorla

Bill Woodcock <[EMAIL PROTECTED]> writes:

>   On Thu, 19 Aug 2004, Steven S. wrote:
> > I have 5 invites that I'm willing to part with...
> 
> Uh, could we _please_ get back to something with operational content, or
> nothing at all?
> 
> Anyone have anything concrete on the SHA-0 / MD5 compromise, for instance?
> Any operational impact there, that we need to worry about in the near
> term?

Here's the overview I sent to IAB/IESG:

As you may or may not have heard, this year's CRYPTO conference
has been very interesting:

* Joux has found a single collision in SHA-0--an algorithm that nobody
  uses but that is very similar to SHA-1. However, SHA-0 was changed to
  fix a flaw (later found by Joux), thus becoming SHA-1, so we can hope
  that this attack can't be extended to SHA-1. The attack was fairly
  expensive, requiring about 2^51 operations (the brute force attack
  would take about 2^80).

* Biham and Chen can find collisions in a reduced round version of SHA-1
  (40 rounds). The full SHA-1 is 80 rounds. It's hard to know whether
  this can be extended to full SHA-1 or not. NSA (who designed SHA-1)
  seems to be generally pretty good at tuning their algorithms so that
  they're just complicated enough to be secure.

* Wang, Feng, Lai, and Yu have what appears to be a general method for
  finding collisions in MD4, MD5, HAVAL-128, and RIPEMD. They
  haven't published any details.

What does this mean for us? I'll be writing up full details hopefully
soon, but here's a short overview...

WHAT'S BEEN SHOWN?
An attacker can generate two messages M and M' such that Hash(M) = Hash(M').
Note that he cannot (currently) generate a message M such that Hash(M)
is a given hash value, nor can he generate a message M' such that it hashes
the same as a fixed message M. Generating colliding pairs is currently
possible for MD5, and we have to consider the possibility that it will
eventually be possible for SHA-1.
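
To see the cost difference concretely, here is a stdlib-only Python toy
that brute-forces both problems against a deliberately weakened 20-bit
hash (truncated SHA-1 standing in for a broken function); the attempt
counts it prints track the usual 2^(n/2)-for-collisions versus
2^n-for-preimages estimates (the message formats are arbitrary):

    import hashlib, itertools

    def weak_hash(data: bytes) -> int:
        # 20-bit toy hash: the top 20 bits of SHA-1.
        return int.from_bytes(hashlib.sha1(data).digest()[:3], "big") >> 4

    # Collision: birthday search over distinct messages, ~2^10 attempts.
    seen = {}
    for i in itertools.count():
        h = weak_hash(b"msg-%d" % i)
        if h in seen:
            print("collision after", i + 1, "messages")     # ~1,000
            break
        seen[h] = i

    # Second preimage of a fixed message: ~2^20 attempts (a second or two).
    target = weak_hash(b"a fixed message M")
    for i in itertools.count():
        if weak_hash(b"guess-%d" % i) == target:
            print("preimage after", i + 1, "attempts")       # ~1,000,000
            break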


USES OF HASH FUNCTIONS
We use hash algorithms in a bunch of different contexts. At minimum:

1. Digital signatures (you sign the hash of a message).
   (a) On messages (e.g. S/MIME). 
   (b) On certificates.
   (c) In authentication primitives (e.g., SSH)
2. As MAC functions (e.g. HMAC)
3. As authentication functions (e.g. CRAM-MD5)
4. As key generation functions (e.g. SSL or IPsec PRF)

THE POTENTIAL ATTACKS
The only situation in which the current attacks definitely apply is 
(1). The general problem is illustrated by the following scenario.
Alice and Bob are negotiating a contract. Alice generates two
messages:

M  = "Alice will pay Bob $500/hr"
M' = "Alice will pay Bob $50/hr" [0]

Where H(M) = H(M').

She gets Bob to sign M (and maybe signs it herself). Then when it
comes time to pay Bob, she whips out M' and says "I only owe
$50/hr", which Bob has also signed (remember that you sign the
hash of the message). 
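
To make the scenario concrete, here is a toy Python sketch. A real
attack needs a collision in the full hash function; here a deliberately
weakened 24-bit digest (truncated MD5) stands in so the birthday search
finishes instantly, and the "Ref:" line is a hypothetical tweakable
field Alice hides in the contract text:

    import hashlib

    def toy_digest(msg: bytes) -> bytes:
        return hashlib.md5(msg).digest()[:3]        # 24-bit toy digest

    def variants(text: str):
        # Innocuous-looking tweaks Alice controls.
        for i in range(20000):
            yield ("%s\nRef: %d" % (text, i)).encode()

    # Birthday search across variants of the two contract texts.
    seen = {toy_digest(m): m for m in variants("Alice will pay Bob $500/hr")}
    for m_prime in variants("Alice will pay Bob $50/hr"):
        d = toy_digest(m_prime)
        if d in seen:
            m = seen[d]
            break
    else:
        raise SystemExit("no collision found; widen the search")

    # Bob signs M, but any signature scheme signs only the digest, so the
    # same signature also "covers" M'.
    assert toy_digest(m) == toy_digest(m_prime)
    print(m.decode(), "||", m_prime.decode(), "|| digest:", d.hex())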

So, this attack threatens non-repudiation or any kind of third
party verifiability. Another, slightly more esoteric, case is
certificates. Remember that a certificate is a signed message
from the CA containing the identity of the user. So, Alice
generates two certificate requests:

R  = "Alice.com, Key=X"
R' = "Bob.com, Key=Y"

Such that H(R) = H(R') (I'm simplifying here). 

When the CA signs R, it's also signing R', so Alice can present
her new "Bob" certificate and pose as Bob. It's not clear that
this attack can work in practice because Alice doesn't control
the entire cert: the CA specifies the serial number. However,
it's getting risky to sign certs with MD5.


WHAT'S SAFE?
First, anything that's already been signed is definitely safe.  If you
stop using MD5 today, nothing you signed already puts you at risk.

There is probably no risk to two party SSH/SSL-style authentication
handshakes.

It's believed that HMAC is secure against this attack (according to Hugo
Krawczyk, the designer), so the modern MAC functions should all be
secure.
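
For reference, HMAC (RFC 2104) is just two nested hash invocations with
the key mixed in at both levels, which is why a bare collision doesn't
obviously translate into a forgery. A minimal Python sketch, checked
against the stdlib hmac module (the key and message are arbitrary
examples):

    import hashlib, hmac

    def hmac_sha1(key: bytes, msg: bytes) -> bytes:
        block = 64                              # SHA-1 block size in bytes
        if len(key) > block:
            key = hashlib.sha1(key).digest()
        key = key.ljust(block, b"\x00")
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha1(ipad + msg).digest()
        return hashlib.sha1(opad + inner).digest()

    key, msg = b"example key", b"example message"
    assert hmac_sha1(key, msg) == hmac.new(key, msg, hashlib.sha1).digest()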

I worry a bit about CRAM-MD5 and HTTP Digest. They're not as well
designed as HMAC and you might potentially be able to compromise them to
mount some kind of active cut-and-paste attack, though I don't have one
in my pocket.

The key generation PRFs should be safe.

-Ekr


[0] In practice, the messages might not be this similar, but there
turn out to be lots of opportunities to make subtle changes in any
text message.



Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> On Thu, 10 Jun 2004 13:30:41 PDT, Eric Rescorla said:
>
>> [0] Note that this doesn't require that the chance of finding
>> any particular bug upon inspection of the code be very low,
>> but merely that there not be very deep coverage of
>> any particular code section.
>
> Right.  However, if you hand the team of white hats and the team of
> black hats the same "Chatter has it there's a 0-day in Apache's
> mod_foo handler"

Ok, now we're getting somewhere.

I'm asking the question:
If you find some bug in the normal course of your operations
(i.e. nobody told you where to look) how likely is it that
someone else has already found it?

And you're asking a question more like:
Given that you hear about a bug before its release, how likely
is it that some black hat already knows?

I think that the answer to the first question is probably
"fairly low". I agree that the answer to the second question is
probably "reasonably high".

-Ekr





Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> On Thu, 10 Jun 2004 12:23:42 PDT, Eric Rescorla said:
>
>> I'm not sure we disagree. All I was saying was that I don't
>> think we have a good reason to believe that the average bug
>> found independently by a white hat is already known to a
>> black hat. Do you disagree?
>
> Actually, yes.
>
> Non-obvious bugs (ones with a non 100% chance of being spotted on
> careful examination) will often be found by both groups.  Let's say
> we have a bug that has a 0.5% chance of being found at any given
> attempt to find it.  Now take 100 white hats and 100 black hats -
> compute the likelihood that at least 1 attempt in either group finds
> it (I figure it as some 39%, i.e. 1 - 0.995^100).  For bonus points,
> extend a bit further, and make multiple series of attempts, and
> compute the probability that for any given pair of 100 attempts,
> exactly one finds it, or neither finds it, or both find it.  And it
> turns out that for that 39% chance, 16% of the time both groups will
> find it, 48% of the time exactly one will find it, and 37% of the
> time *neither* will find it.

The problem with this a priori analysis is that it predicts an
incredibly high probability that any given bug will be found by white
hats. However, in practice, we know that bugs persist for years
without being found, so we know that that probability as a function of
time must actually be quite low. Otherwise, we wouldn't see the data
we actually see, which is a more or less constant stream of newly
found bugs.

On the other hand, if the probability that a given bug will be
found is low [0], then the chance that when you find a bug it
will also be found by someone else is correspondingly low.
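
To make the arithmetic on both sides easy to check, here is a short
Python sketch (attempts are assumed independent, and the smaller
per-attempt probabilities are purely illustrative):

    for p in (0.005, 0.0005, 0.00005):
        # q = P(a group of 100 independent searchers finds the bug at all);
        # under independence, P(the other group also knows a bug you found) = q.
        q = 1 - (1 - p) ** 100
        both, one, neither = q * q, 2 * q * (1 - q), (1 - q) ** 2
        print("p=%.5f  q=%.3f  both=%.3f  exactly-one=%.3f  neither=%.3f"
              % (p, q, both, one, neither))
    # p=0.005 is the quoted scenario (q ~ 39%, both ~ 16%, exactly one ~ 48%,
    # neither ~ 37%); as p shrinks (which long-lived, undiscovered bugs
    # suggest it must), the overlap q*q collapses much faster than q itself.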
 
-Ekr


[0] Note that this doesn't require that the chance of finding
any particular bug upon inspection of the code be very low,
but merely that there not be very deep coverage of
any particular code section.


Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> On Thu, 10 Jun 2004 11:54:31 PDT, Eric Rescorla said:
>
>> My hypothesis is that the sets of bugs independently found by white
>> hats and black hats are basically disjoint. So, you'd definitely
>> expect that there were bugs found by the black hats and then used as
>> zero-days and eventually leaked to the white hats. So, what you
>> describe above is pretty much what one would expect.
>
> Well.. for THAT scenario to happen, two things have to be true:
>
> 1) Black hats are able to find bugs too
>
> 2) The white hats aren't as good at finding bugs as we might think,
> because some of their finds are leaked 0-days rather than their own work,
> inflating their numbers.

Both of these seem fairly likely to me. I've certainly seen
white hat bug reports that are clearly from leaks (i.e. where
they acknowledge that openly).

> Remember what you said:
>
>> relatively small. If we assume that the black hats aren't vastly more
>> capable than the white hats, then it seems reasonable to believe that
>> the probability of the black hats having found any particular
>> vulnerability is also relatively small.
>
> More likely, the software actually leaks like a sieve, and NEITHER group
> has even scratched the surface..

That's more or less what I believe the situation to be, yes.

I'm not sure we disagree. All I was saying was that I don't
think we have a good reason to believe that the average bug
found independently by a white hat is already known to a
black hat. Do you disagree?

-Ekr


Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

Paul G <[EMAIL PROTECTED]> wrote:

> - Original Message - 
> From: "Eric Rescorla" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Cc: "Sean Donelan" <[EMAIL PROTECTED]>; "'Nanog'" <[EMAIL PROTECTED]>
> Sent: Thursday, June 10, 2004 2:37 PM
> Subject: Re: AV/FW Adoption Sudies
> 
> -- snip ---
> 
> > If we assume that the black hats aren't vastly more
> > capable than the white hats, then it seems reasonable to believe that
> > the probability of the black hats having found any particular
> > vulnerability is also relatively small.
> 
> and yet, some of the most damaging vulns were kept secret for months before
> they got leaked and published. i won't pretend to have the answer, but fact
> remains fact.

I don't think that this contradicts what I was saying.

My hypothesis is that the sets of bugs independently found by white
hats and black hats are basically disjoint. So, you'd definitely
expect that there were bugs found by the black hats and then used as
zero-days and eventually leaked to the white hats. So, what you
describe above is pretty much what one would expect.

-Ekr


Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> On Thu, 10 Jun 2004 08:50:18 PDT, Eric Rescorla said:
>> [EMAIL PROTECTED] writes:
>
>> > Remember that the black hats almost certainly had 0-days for the
>> > holes, and before the patch comes out, the 0-day is 100% effective.
>> 
>> What makes you think that black hats already know about your
>> average hole?
>
> Because unlike a role playing game, in the real world the lawful-good white
> hats don't have any deity-granted magic ability to spot holes that remain
> hidden from the chaotic-neutral/evil dark hats.
>
> Explain to me why, given that MS03-039, MS03-041, MS03-043,
> MS03-044, and MS03-045 all affected systems going all the way back
> to NT/4, and that exploits surfaced quite quickly for all of them,
> there is *any* reason to think that only white hats who have been
> sprinkled with magic pixie dust were able to find any of those holes
> in all the intervening years?

Actually, I think that the persistence of vulnerabilities is an
argument against the theory that the black hats in general know about
vulnerabilities before they're released.  I.e., given that the white
hats put a substantial amount of effort into finding vulnerabilities,
and yet many vulnerabilities persist in software for a long period of
time without being found and disclosed, that suggests that the
probability of white hats finding any particular vulnerability is
relatively small. If we assume that the black hats aren't vastly more
capable than the white hats, then it seems reasonable to believe that
the probability of the black hats having found any particular
vulnerability is also relatively small.

For more detail on this general line of argument, see my paper 
"Is finding security holes a good idea?" at WEIS '04.

Paper:   http://www.dtc.umn.edu/weis2004/rescorla.pdf
Slides:  http://www.dtc.umn.edu/weis2004/weis-rescorla.pdf

WRT the relatively rapid appearance of exploits, I don't think
that's much of a signal one way or the other. As I understand it, once
one knows about a vulnerability it's often (though not always) quite
easy to write an exploit. And as you observe, the value of an
exploit is highest before people have had time to patch.
   
-Ekr





Re: AV/FW Adoption Sudies

2004-06-10 Thread Eric Rescorla

[EMAIL PROTECTED] writes:
> On Wed, 09 Jun 2004 18:45:55 EDT, Sean Donelan <[EMAIL PROTECTED]>  said:
>
>> The numbers vary a little e.g. 38% or 42%, but the speed or severity or
>> publicity doesn't change them much.  If it is six months before the
>> exploit, about 40% will be patched (60% unpatched).  If it is 2 weeks,
>> about 40% will be patched (60% unpatched).  It's a strange "invisible hand"
>> effect: as the exploits show up sooner, the people who were going to patch
>> anyway patch sooner.  The ones that don't, still don't.
>
> Remember that the black hats almost certainly had 0-days for the
> holes, and before the patch comes out, the 0-day is 100% effective.

What makes you think that black hats already know about your
average hole?


> Once the patch comes out and is widely deployed, the usefulness of
> the 0-day drops.
>
> Most probably, 40% is a common value for "I might as well release
> this one and get some recognition".  After that point, the residual
> value starts dropping quickly.

I don't think this assessment is likely to be correct. If you look, for
instance, at the patching curve on page 1 of "Security holes... Who
cares?" (http://www.rtfm.com/upgrade.pdf) theres'a pretty clear flat
spot from about 25 days (roughly 60% patch adoption) to 45 days
(release of the Slapper worm). So, one that 2-3 week initial
period has passed, the value of an exploit is roughly constant
for a long period of time.

-Ekr


Re: OpenSSL

2003-03-18 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> > > This means that it is safer for senior managers in a company to 
> > > communicate using private ADSL Internet connections to their desktops 
> > > rather than using a corporate LAN.
> >
> > Afraid not. The timing attack is an attack on the SSL server. 
> > So as long as the SSL server is accessible at all, the attack
> > can be mounted. And once the private key is recovered, then
> > you no longer need LAN access.
> 
> While the timing attack is the attack against the SSL server, it is my
> reading of the paper that the attack's success largely depends on the
> ability to tightly control the time it takes to communicate with a
> service using SSL.  Currently, such control is rather difficult to
> achieve on links other than Ethernet.
Quite so. What I meant here was that as long as Ethernet access
is provided to the server at all, having your own traffic sent
over a non-Ethernet link doesn't protect you.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/


Re: OpenSSL

2003-03-18 Thread Eric Rescorla

[EMAIL PROTECTED] writes:

> > This is a new attack, not the one Schneier was talking about.  It's 
> > very elegant work -- they actually implemented an attack that can 
> > recover the long-term private key.  The only caveat is that their 
> > attack currently works on LANs, not WANs, because they need more 
> > precise timing than is generally feasible over the Internet.
> 
> Hmmm...
> This means that it is safer for senior managers in a company to 
> communicate using private ADSL Internet connections to their desktops 
> rather than using a corporate LAN.
Afraid not. The timing attack is an attack on the SSL server. 
So as long as the SSL server is accessible at all, the attack
can be mounted. And once the private key is recovered, then
you no longer need LAN access.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/


Re: SSL crack in the news

2003-02-22 Thread Eric Rescorla

"Mark Radabaugh" <[EMAIL PROTECTED]> writes:

> http://www.cnn.com/2003/TECH/internet/02/21/email.encryption.reut/index.html
> 
> Very little real information...
Here's the writeup I sent to the cryptography mailing list.

--

Here's a fairly detailed description of how the attack works.

You can also find a writeup by the authors at:
http://lasecwww.epfl.ch/memo_ssl.shtml


EXECUTIVE SUMMARY
This is a potentially serious attack for automated systems that rely on
passwords. Effectively, it allows the attacker to recover a password
sent over the encrypted connection. It doesn't appear to be serious for
non-automated situations.


OVERVIEW
Imagine you have some protocol that uses passwords and the password
always falls in the same place in the message stream (IMAP, for
instance). What happens is that the attacker observes a connection using
that password. He then deliberately introduces a bogus message into the
encrypted traffic and observes the server's behavior. This allows him to
determine whether a single byte of the plaintext is a certain (chosen)
value. By iterating over the various bytes in the plaintext he can then
determine the entire plaintext.


HOW THE ATTACK WORKS
Basically, the attack exploits CBC padding. Recall that the standard
CBC padding is to pad with the length of the pad.  So, if you have an
8-byte block cipher and the data is 5 bytes long, you'd have XX XX XX
XX XX 03 03 03. This technique allows you to remove the pad by
examining the final byte and then removing that many bytes. [0]
Typically you also check that all the pad values are correct.

Say the attacker intercepts a pair of consecutive blocks A and B,
which are:
A = AA AA AA AA AA AA AA a
B = BB BB BB BB BB BB BB b

And the plaintext of B corresponds to
P = PP PP PP PP PP PP PP p

The attacker wants to attack B and guesses that p == y. He transmits
a new message A' || B where
A' = AA AA AA AA AA AA AA (a XOR y)

When the server decrypts B, the final byte will be (p xor y).
If the attacker has guessed correctly then there will appear
to be a zero length pad and MAC verification proceeds. Since
the packet has been damaged, the MAC check fails. If the
attacker has guessed wrong then the last byte will be nonzero
and the padding check will fail. 

Many TLS implementations generated different errors for these
two cases. This allows the attacker to discover which type
of error has occurred and thus verify his guess. Unfortunately,
since this generates an error, the attacker can only guess once.

The attack described above was discovered a year or two ago
and implementations were quickly modified to generate the same
error for both cases.

This attack introduces two new features:
(1) The authors observed that if you were moving passwords over
TLS then the password would generally appear in the same place
in each protocol exchange. This lets you iterate the attack 
over multiple character guesses and eventually over multiple
positions.

(2) Even if the same error message is generated, the MAC check still
takes time. You can therefore use timing analysis to determine what
kind of error occurred. This has the downside that there's some
noise, but if you take enough samples you can factor that out.
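
To make the single-byte check concrete, here is a toy Python sketch. It
assumes the third-party "cryptography" package for AES-CBC, uses the
simplified padding rule described above rather than the exact TLS
encoding, leaves out the MAC and the timing channel entirely, and the
key, IV, and "Authorization" plaintext are made up:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16
    KEY = os.urandom(16)

    def encrypt(plaintext: bytes, iv: bytes) -> bytes:
        # Simplified padding: pad bytes all equal to the pad length.
        padlen = BLOCK - (len(plaintext) % BLOCK)
        enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
        return enc.update(plaintext + bytes([padlen]) * padlen) + enc.finalize()

    def server(iv: bytes, ct: bytes) -> str:
        # Which error would the server report? (The real attack times this.)
        dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
        pt = dec.update(ct) + dec.finalize()
        n = pt[-1]
        if n > BLOCK or (n and pt[-n:] != bytes([n]) * n):
            return "PAD_ERROR"
        return "MAC_ERROR"      # padding looked fine; the damaged MAC then fails

    iv = os.urandom(BLOCK)
    secret = b"GET / HTTP/1.0\r\nAuthorization: hunter2\r\n"   # made-up traffic
    ct = encrypt(secret, iv)
    c1, c2 = ct[:BLOCK], ct[BLOCK:2 * BLOCK]    # attack the last byte of block 2
    p = secret[2 * BLOCK - 1]

    def guess(y: int) -> str:
        # A' = c1 with its last byte XORed by the guess, as described above.
        a_prime = c1[:-1] + bytes([c1[-1] ^ y])
        return server(iv, a_prime + c2)

    hits = [y for y in range(256) if guess(y) == "MAC_ERROR"]
    # hits contains p (the last byte decrypts to 0x00, an "empty" pad) and
    # also p ^ 1 (a valid one-byte pad); the real attack weeds out the
    # second candidate with further queries.
    print("candidates:", hits, "actual byte:", p)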


PRACTICALITY
Let's estimate how long this will take. Most passwords are 7-bit ASCII,
so naively we need to try all 128 values for each character. On average
we'll get a hit halfway through our search, so we expect to have to try
about 64 values for each character position. If we assume an 8-character
password, this means we'll need about 8*64==512 trials. Now, you
could be smarter about what characters are probable and reduce these
numbers somewhat, but this is the right order of magnitude.

Now, things are complicated a bit by the fact that each trial creates an
error on both client and server. This has two implications. First, it
creates a really obvious signature.  Second, it requires that the client
create a lot of SSL connections for the attacker to work with. This is
only likely if the client is automated.



REQUIRED CONDITIONS
So, under what conditions can this attack be successful?

(1) The protocol must have the same data appearing in a 
predictable place in the protocol stream. In practice,
this limits its usefulness to recovering passwords.
However, this condition is pretty common for things like
HTTP, IMAP, POP, etc.

(2) The SSL implementations must negotiate a block cipher.  Many SSL
implementations choose RC4 by default and RC4 is not vulnerable
to this attack.

(3) The attacker needs to be able to hijack or intercept TCP
connections. There are tools to do this that require varying
degrees of sophistication and access to use.

(4) The client must be willing to initiate a lot of connections
even if it's getting SSL errors. As a consequence it almost
certainly needs to be an automated client.

(5) The attacker and the server must h