Re: Dell to Add Security Chip to PCs

2005-02-04 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Dan Kaminsky writes:

Uh, you *really* have no idea how much the black hat community is
looking forward to TCPA.  For example, Office is going to have core
components running inside a protected environment totally immune to
antivirus.



How? TCPA is only a cryptographic device, and some BIOS code, nothing
else. Does the coming of TCPA chips eliminate the bugs, buffer overflows,
stack overflows, or any other way to execute arbitrary code? If yes, isn't
that a wonderful thing? Obviously it doesn't (eliminate bugs and so on).

  

TCPA eliminates external checks and balances, such as antivirus.  As the 
user, I'm not trusted to audit operations within a TCPA-established 
sandbox.  Antivirus is essentially a user system auditing tool, and 
TCPA-based systems have these big black boxes AV isn't allowed to analyze.

Imagine a sandbox that parses input code signed to an API-derivable 
public key.  Imagine an exploit encrypted to that.  Can AV decrypt the 
payload and prevent execution?  No, of course not.  Only the TCPA 
sandbox can.  But since AV can't get inside of the TCPA sandbox, 
whatever content is protected in there is quite conspicuously unprotected.

It's a little like having a serial killer in San Quentin.  You feel 
really safe until you realize...uh, he's your cellmate.

I don't know how clearly I can say this: your threat model is broken, and 
the bad guys can't stop laughing about it.


I have no idea whether or not the bad guys are laughing about it, but 
if they are, I agree with them -- I'm very afraid that this chip will 
make matters worse, not better.  With one exception -- preventing the 
theft of very sensitive user-owned private keys -- I don't think that 
the TCPA chip is solving the right problems.  *Maybe* it will solve the 
problems of a future operating system architecture; on today's systems, 
it doesn't help, and probably makes matters worse.

TCPA is a way to raise the walls between programs executing in 
different protection spaces.  So far, so good.  Now -- tell me the last 
time you saw an OS flaw that directly exploited flaws in conventional 
memory protection or process isolation?  They're *very* rare.

The problems we see are code bugs and architectural failures.  A buffer 
overflow in a Web browser still compromises the browser; if the 
now-evil browser is capable of writing files, registry entries, etc., 
the user's machine is still capable of being turned into a spam engine, 
etc.  Sure, in some new OS there might be restrictions on what such an 
application can do, but you can implement those restrictions with 
today's hardware.  Again, the problem is in the OS architecture, not in 
the limitations of its hardware isolation.

I can certainly imagine an operating system that does a much better job 
of isolating processes.  (In fact, I've worked on such things; if 
you're interested, see my papers on sub-operating systems and separate 
IP addresses per process group.)  But I don't see that TCPA chips add 
much over today's memory management architectures.  Furthermore, as Dan 
points out, it may make things worse -- the safety of the OS depends on 
the userland/kernel interface, which in turn is heavily dependent on 
the complexity of the privileged kernel modules.  If you put too much 
complex code in your kernel -- and from the talks I've heard this is 
exactly what Microsoft is planning -- it's not going to help the 
situation at all.  Indeed, as Dan points out, it may make matters worse.

Microsoft's current secure coding initiative is a good idea, and from 
what I've seen they're doing a good job of it.  In 5 years, I wouldn't 
be at all surprised if the rate of simple bugs -- the buffer overflows, 
format string errors, race conditions, etc. -- was much lower in 
Windows and Office than in competing open source products.  (I would 
add that this gain has come at a *very* high monetary cost -- training, 
code reviews, etc., aren't cheap.)  The remaining danger -- and it's a 
big one -- is the architecture flaws, where ease of use and 
functionality often lead to danger.  Getting this right -- getting it 
easy to use *and* secure -- is the real challenge.  Nor are competing 
products immune; the drive to make KDE and Gnome (and for that matter 
MacOS X) as easy to use as (well, easier to use than) Windows is likely 
to lead to the same downward security spiral.

I'm ranting, and this is going off-topic.  My bottom line: does this 
chip solve real problems that aren't solvable with today's technology?  
Other than protecting keys -- and, of course, DRM -- I'm very far from 
convinced of it.  The fault, dear Brutus, is not in our stars but in 
ourselves.

--Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb




Re: The Reader of Gentlemen's Mail, by David Kahn

2005-01-09 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Bill Stewart writes:
My wife was channel-surfing and ran across David Kahn talking about his 
recent book
The Reader of Gentlemen's Mail: Herbert O. Yardley and the Birth of 
American Codebreaking.

ISBN 0300098464 , Yale University Press, March 2004

Amazon's page has a couple of good detailed reviews
http://www.amazon.com/exec/obidos/ASIN/0300098464/qid=1105254301/sr=2-1/ref=pd_ka_b_2_1/102-1630364-0272149


I have the book.  For the student of the history of cryptography, it's 
worth reading.  For the less dedicated, it's less worthwhile.  It's not 
*The Codebreakers*; it's not *The Code Book*; other than the title 
quote (and I assume most readers of this list know the story behind 
it), there are no major historical insights.

The most important insight, other than Yardley's personality, is what 
he was and wasn't as a cryptanalyst.  The capsule summary is that he 
was *not* a cryptanalytic superstar.  In that, he was in no way a peer 
of or a competitor to Friedman.  His primary ability was as a manager 
and entrepreneur -- he could sell the notion of a Black Chamber (with 
the notorious exception of his failure with Stimson), and he could 
recruit good (but not always great) people.  But he never adapted 
technically.  His forte was codes -- he knew how to create them and how 
to crack them.  But the world's cryptanalytic services were also 
learning how to crack them with great regularity; that, as much as 
greater ease of use, was behind the widespread adoption of machine 
cryptography (Enigma, M-209, Typex, Purple, etc.) during the interwar
period.  Yardley never adapted and hence he (and his organizations) 
became technologically obsolete.

One of the reviews on Amazon.com noted skeptically Kahn's claim that 
Friedman was jealous of Yardley's success with women.  I have no idea 
if that's true, though moralistic revulsion may be closer.  But I 
wonder if the root of the personal antagonism may be more that of the 
technocrat for the manager...

--Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb





Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-06-29 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Simon Josefsson writes:


Of course, everything fails if you ALSO get your DNSSEC root key from
the DHCP server, but in this case you shouldn't expect to be secure.
I wouldn't be surprised if some people suggest pushing the DNSSEC root
key via DHCP though, because alas, getting the right key into the
laptop in the first place is a difficult problem.


I can pretty much guarantee that the IETF will never standardize that, 
except possibly in conjunction with authenticated DHCP.

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)



Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-06-28 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Bill Stewart writes:
Somebody did an interesting attack on a cable network's customers.
They cracked the cable company's DHCP server, got it to provide a
Connection-specific DNS suffix pointing to a machine they owned,
and also told it to use their DNS server.
This meant that when your machine wanted to look up yahoo.com,
it would look up yahoo.com.attackersdomain.com instead.

This looks like it has the ability to work around DNSSEC.
Somebody trying to verify that they'd correctly reached yahoo.com
would instead verify that they'd correctly reached
yahoo.com.attackersdomain.com, which can provide all the signatures
it needs to make this convincing.

So if you're depending on DNSSEC to secure your IPSEC connection,
do make sure your DNS server doesn't have a suffix of echelon.nsa.gov...


No, that's just not true of DNSsec.  DNSsec doesn't depend on the 
integrity of the connection to your DNS server; rather, the RRsets are 
digitally signed.  In other words, it works a lot like certificates, 
with a trust chain going back to a magic root key.  I'm not saying that 
there can't be problems with that model, but compromised DNS servers 
(and poisoned DNS caches) are among the major threat models it was 
designed to deal with.  If nothing else, the existence of caching DNS 
servers, which are not authoritative for the information they hand out, 
makes a transmission-based solution pretty useless.



--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)



Re: An attack on paypal

2003-06-12 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Matt Crawford writes:
 The worst trouble I've had with https is that you have no way to use host
 header names to differentiate between sites that require different SSL
 certificates.

True as written, but Netscrape and Internet Exploder each have a hack
for honoring the same cert for multiple server names.  Opera seems to
honor at least one of the two hacks, and a cert can incorporate both
at once.

   /C=US/ST=Illinois/L=Batavia/O=Fermilab/OU=Services
   /CN=(alpha|bravo|charlie).fnal.gov/CN=alpha.fnal.gov
   /CN=bravo.fnal.gov/CN=charlie.fnal.gov

You can also use *.fnal.gov
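A certificate covering several hosts can also be expressed via the subjectAltName extension rather than stacked CNs; a hypothetical OpenSSL config fragment (host names taken from the example above):

```
[ v3_req ]
# Hypothetical fragment: one certificate covering several hosts via
# the subjectAltName extension instead of multiple CN components.
subjectAltName = DNS:alpha.fnal.gov, DNS:bravo.fnal.gov, DNS:charlie.fnal.gov
# or, for the whole zone:
# subjectAltName = DNS:*.fnal.gov
```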

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)



Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-05 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], R. A. Hettinga writes:

--- begin forwarded text


Status: RO
From: Somebody
To: R. A. Hettinga [EMAIL PROTECTED]
Subject: Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)
Date: Sun, 2 Mar 2003 14:09:05 -0500

Bob,

Technically, since their signal speed is slower than light, even
transmission lines act as storage devices.

Wire tapping is now legal.


No, that's not what the decision means.  Access to stored messages also 
requires court permission.  The (U.S.) ban on wiretapping without judicial
permission is rooted in a Supreme Court decision, Katz v. United States,
389 U.S. 347 (1967) 
(http://caselaw.lp.findlaw.com/scripts/getcase.pl?navby=case&court=us&vol=389&invol=347)
which held that a wiretap is a search which thus required a warrant.  I 
don't think there's ever been any doubt that seizing a stored message 
required a warrant.  But in an old case (OLMSTEAD v. U.S., 277 U.S. 438 (1928))
the Court had held that the Fourth Amendment only protected material 
things, and therefore *not* conversations monitored via a wiretap.  
That decision was overturned in Katz.

The crucial difference, from a law enforcement perspective, is how hard 
it is to get the requisite court order.  A stored message order is 
relatively easy; a wiretap order is very hard.  Note that this 
distinction is primarily statutory, not (as far as I know) 
constitutional.  

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)




Re: Did you *really* zeroize that key?

2002-11-07 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Peter Gutmann writes:
[Moderator's note: FYI: no pragma is needed. This is what C's volatile
 keyword is for. 

No it isn't.  This was done to death on vuln-dev, see the list archives for
the discussion.

[Moderator's note: I'd be curious to hear a summary -- it appears to
work fine on the compilers I've tested. --Perry]

Regardless of whether one uses volatile or a pragma, the basic point 
remains:  cryptographic application writers have to be aware of what a 
clever compiler can do, so that they know to take countermeasures.

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (Firewalls book)




Re: DOJ proposes US data-rentention law.

2002-06-20 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], David G. Koontz writes:
Trei, Peter wrote:
 - start quote -
 
 Cyber Security Plan Contemplates U.S. Data Retention Law
 http://online.securityfocus.com/news/486
 
 Internet service providers may be forced into wholesale spying 
 on their customers as part of the White House's strategy for 
 securing cyberspace.
 
 By Kevin Poulsen, Jun 18 2002 3:46PM
 
 An early draft of the White House's National Strategy to Secure 
 Cyberspace envisions the same kind of mandatory customer data 
 collection and retention by U.S. Internet service providers as was
 recently enacted in Europe, according to sources who have reviewed 
 portions of the plan. 
 
...

If the U.S. wasn't in an undeclared 'war', this would be considered
an unfunded mandate.  Does anyone realize the cost involved?  Think
of all the spam that needs to be recorded for posterity.  ISPs don't
currently record the type of information that this is talking about.
What customer data backup is being performed by ISPs is by and large
done by disk mirroring and is not kept permanently.


This isn't clear.  The proposals I've seen call for recording transaction 
data -- i.e., the SMTP envelope information, plus maybe the From: 
line.  It does not call for retention of content.

Apart from practicality, there are constitutional issues.  Envelope 
data is given to the ISP in typical client/server email scenarios, 
while content is end-to-end, in that it's not processed by the ISP.  A 
different type of warrant is therefore needed to retrieve the latter.  
The former falls under the pen register law (as amended by the 
Patriot Act), and requires a really cheap warrant.  Email content is 
considered a full-fledged wiretap, and requires a hard-to-get court 
order, with lots of notice requirements, etc.  Mandating that a third 
party record email in this situation, in the absence of a pre-existing
warrant citing probable cause, would be very chancy.  I don't think 
even the current Supreme Court would buy it.

--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (Firewalls book)