Re: Palladium: technical limits and implications

2002-08-12 Thread Ben Laurie

AARG!Anonymous wrote:
 Adam Back writes:
 
I have one gap in the picture: 

In a previous message in this thread, Peter Biddle said:


In Palladium, SW can actually know that it is running on a given
platform and not being lied to by software. [...] (Pd can always be
lied to by HW - we move the problem to HW, but we can't make it go
away completely).

 
 Obviously no application can reliably know anything if the OS is hostile.
 Any application can be meddled with arbitrarily by the OS.  In fact
 every bit of the app can be changed so that it does something entirely
 different.  So in this sense it is meaningless to speak of an app that
 can't be lied to by the OS.
 
 What Palladium can do, though, is arrange that the app can't get at
 previously sealed data if the OS has meddled with it.  The sealing
 is done by hardware based on the app's hash.  So if the OS has changed
 the app per the above, it won't be able to get at old sealed data.
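
A minimal sketch, in Python, of the sealing idea being described (hypothetical names only; this is not the real Palladium/SCP interface): the sealing key is derived from a hardware-held secret plus the app's hash, so an app the OS has modified derives a different key and cannot unseal old data.

# Sketch only: hypothetical hash-based sealing, not the actual Palladium API.
import hashlib, hmac, os

PLATFORM_SECRET = os.urandom(32)   # stands in for a secret kept inside the hardware

def sealing_key(app_binary: bytes) -> bytes:
    """Derive a sealing key from the platform secret and the app's measurement."""
    measurement = hashlib.sha1(app_binary).digest()          # the app's hash
    return hmac.new(PLATFORM_SECRET, measurement, hashlib.sha256).digest()

genuine = b"...the client binary as shipped..."
meddled = b"...the same binary after the OS patches it..."

# A meddled-with app derives a different key, so it cannot unseal old data.
assert sealing_key(genuine) != sealing_key(meddled)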

I don't buy this: how does Palladium know what an app is without the OS' 
help?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Washington DC evacuation plan... for federal employees

2002-08-12 Thread Declan McCullagh

1. Government creates new Washington evacuation plan
By Jason Peckenpaugh

The federal government has created a new procedure for evacuating federal 
employees in Washington in the case of possible terrorist attacks on the 
nation's capital.

The protocol, which took effect in May, tells who can decide to evacuate 
federal employees from agencies and how the government will communicate the 
decision to employees and to city and state agencies that would be affected 
by a mass exodus of civil servants from Washington. It is an attempt to 
improve on the ad hoc process used on Sept. 11, when the Office of 
Personnel Management closed federal agencies without first notifying state 
and transit officials in the Washington area.

"Basically the only emergency plan that was available that this area had 
[on Sept. 11] was the snow emergency plan," said Scott Hatch, OPM's 
director of communications. The new protocol was designed to handle federal 
evacuations in Washington, but could be used to make evacuation decisions 
for civil servants in other cities, he said.

Full story: http://www.govexec.com/dailyfed/0802/080902p1.htm





Re: On the outright laughability of internet democracy

2002-08-12 Thread R. A. Hettinga

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

At 4:20 PM +0200 on 8/12/02, Nomen Nescio wrote, in excruciating,
hilarious and even elegant detail:

...all about how I was trolled. :-).

 Good fish. Thank you for playing.

LOL...

You're welcome. Guilty as charged. I admit to being absolutely
trollable about some things. It's even fun on occasion. As always,
you know where the 'd' key is. Or, apparently, I can also tell you
where to find it in several languages. I love the net...


Meaning that, as it always has been, since people began repeating
themselves about six months out from its founding, this list is just
a watering hole, and not a salon. That, and you never really know how
exactly you're going to get your kicks next. :-).


However, if I may be permitted to flop back into the bilge a little
while to add *some* content to the discussion again, my point --
well, two, actually -- still holds.

1.) You cannot have truly anonymous voting on the net without also
being perfectly free to sell your vote. In short, the only voting
that matters on the net is *financial* voting -- voting your control,
total or fractional, of an asset of some kind. Don't take my word for
it. Look it up. Read the protocols. Figure it out for yourself. It's
impossible. And, in so doing, you will discover something that I've
also said too much before, also to the consternation of folks
like you:

2.) Financial cryptography is the *only* cryptography that matters.

[If you respond to a patently content-free fulmination by an
obvious trollee with another troll of your own, what, exactly, does
that make you, troller -- or trollee? :-)]


Cheers,
RAH

-----BEGIN PGP SIGNATURE-----
Version: PGP 7.5

iQA/AwUBPVffd8PxH8jf3ohaEQId/gCg8bSQsIpLv67eVoLDwO8YSTL1S7UAnRA3
rpyy0mOPtS0ydZLaPz7DCyT3
=g1DF
-----END PGP SIGNATURE-----

-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'




Re: dangers of TCPA/palladium

2002-08-12 Thread Ben Laurie

David Wagner wrote:
 Ben Laurie  wrote:
 
Mike Rosing wrote:

The purpose of TCPA as spec'ed is to remove my control and
make the platform trusted to one entity.  That entity has the master
key to the TPM.

Now, if the spec says I can install my own key into the TPM, then yes,
it is a very useful tool.

Although the outcome _may_ be like this, your understanding of the TPM 
is seriously flawed - it doesn't prevent you from running whatever you 
want, but what it does do is allow a remote machine to confirm what you 
have chosen to run.

It helps to argue from a correct starting point.
 
 
 I don't understand your objection.  It doesn't look to me like Rosing
 said anything incorrect.  Did I miss something?
 
 It doesn't look like he ever claimed that TCPA directly prevents one from
 running what you want to; rather, he claimed that its purpose (or effect)
 is to reduce his control, to the benefit of others.  His claims appear
 to be accurate, according to the best information I've seen.

The part I'm objecting to is that it makes the platform trusted to one 
entity. In fact, it can be trusted by any number of entities, and you 
(the owner of the machine) get to choose which ones.

Now, it may well be that if this is allowed to proceed unchecked that in 
practice there's only a small number of entities there's any point in 
choosing, but that is a different matter.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Challenge to David Wagner on TCPA

2002-08-12 Thread Brian A. LaMacchia

I just want to point out that, as far as Palladium is concerned, we really
don't care how the keys got onto the machine. Certain *applications* written
on top of Palladium will probably care, but all the hardware and the security
kernel really care about is making sure that secrets are only divulged to
the code that had them encrypted in the first place.  It's all a big trust
management problem (or a series of trust management problems) --
applications that are going to rely on SCP keys to protect secrets for them
are going to want some assurances about where the keys live and whether
there's a copy outside the SCP.  I can certainly envision potential
applications that would want guarantees that the key was generated on the
SCP and never left, and I can see other applications that want guarantees that
the key has a copy sitting on another SCP on the other side of the building.

So the complexity isn't in how the keys get initialized on the SCP (hey, it
could be some crazy little hobbit named Mel who runs around to every machine
and puts them in with a magic wand).  The complexity is in the keying
infrastructure and the set of signed statements (certificates, for lack of a
better word) that convey information about how the keys were generated and
stored.  Those statements need to be able to represent to other applications
what protocols were followed and precautions taken to protect the private
key.  Assuming that there's something like a cert chain here, the root of
this chain could be an OEM, an IHV, a user, a federal agency, your company,
etc. Whatever that root is, the application that's going to divulge secrets
to the SCP needs to be convinced that the key can be trusted (in the
security sense) not to divulge data encrypted to it to third parties.
Palladium needs to look at the hardware certificates and reliably tell
(under user control) what they are. Anyone can decide if they trust the
system based on the information given; Palladium simply guarantees that it
won't tell anyone your secrets without your explicit request.
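
A rough sketch, in Python, of the kind of trust-management check being described (the statement format and all names are hypothetical, not Palladium's actual certificates): the relying application walks a chain of signed statements about the SCP key back to a root it has chosen to trust, and checks the claimed key-handling properties before divulging anything.

# Sketch only: hypothetical signed statements about SCP key provenance.
import hashlib, hmac, json

def sign(issuer_key: bytes, claims: dict) -> dict:
    body = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "sig": hmac.new(issuer_key, body, hashlib.sha256).hexdigest()}

def verify(issuer_key: bytes, stmt: dict) -> bool:
    body = json.dumps(stmt["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(stmt["sig"], expected)

# Keys standing in for the signing keys of parties in the chain.
oem_key = b"oem-signing-key"
trusted_roots = {"oem": oem_key}          # the relying application picks its roots

chain = [sign(oem_key, {"issuer": "oem", "subject": "scp-1234",
                        "generated_on_chip": True, "exportable": False})]

def willing_to_divulge(chain) -> bool:
    for stmt in chain:
        issuer = stmt["claims"]["issuer"]
        if issuer not in trusted_roots or not verify(trusted_roots[issuer], stmt):
            return False
        if stmt["claims"]["exportable"]:      # a copy might live outside the SCP
            return False
    return True

print(willing_to_divulge(chain))              # True for this particular chain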

--bal

P.S. I'm not sure that I actually *want* the ability to extract the private
key from an SCP after it's been loaded, because presumably if I could ask
for the private key then a third party doing a black-bag job on my PC could
also ask for it.  I think what I want is the ability to zeroize the SCP,
remove all state stored within it, and cause new keys to be generated
on-chip.  So long as I can zero the chip whenever I want (or zero part of
it, or whatever) I can eliminate the threat posed by the manufacturer who
initialized the SCP in the first place.

Lucky Green [EMAIL PROTECTED] wrote:
 Ray wrote:

 From: James A. Donald [EMAIL PROTECTED]
 Date: Tue, 30 Jul 2002 20:51:24 -0700

 On 29 Jul 2002 at 15:35, AARG! Anonymous wrote:
 both Palladium and TCPA deny that they are designed to restrict
 what applications you run.  The TPM FAQ at
 http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf reads
 

 They deny that intent, but physically they have that capability.

 To make their denial credible, they could give the owner
 access to the private key of the TPM/SCP.  But somehow I
 don't think that jibes with their agenda.

 Probably not surprisingly to anybody on this list, with the exception
 of potentially Anonymous, according to the TCPA's own TPM Common
 Criteria Protection Profile, the TPM prevents the owner of a TPM from
 exporting the TPM's internal key. The ability of the TPM to keep the
 owner of a PC from reading the private key stored in the TPM has been
 evaluated to E3 (augmented). For the evaluation certificate issued by
 NIST, see:

 http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-VR-TPM.pdf

 If I buy a lock I expect that by demonstrating ownership I
 can get a replacement key or have a locksmith legally open it.

 It appears the days when this was true are waning. At least in the PC
 platform domain.

 --Lucky


 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to
 [EMAIL PROTECTED]




Re: Palladium: technical limits and implications

2002-08-12 Thread Adam Back

On Mon, Aug 12, 2002 at 01:52:39PM +0100, Ben Laurie wrote:
 AARG!Anonymous wrote:
  [...]
  What Palladium can do, though, is arrange that the app can't get at
  previously sealed data if the OS has meddled with it.  The sealing
  is done by hardware based on the app's hash.  So if the OS has changed
  the app per the above, it won't be able to get at old sealed data.
 
 I don't buy this: how does Palladium know what an app is without the OS' 
 help?

Here's a slightly updated version of the diagram I posted earlier:

+---------------+------------+
| trusted-agent | user mode  |
| space         | app space  |
| (code         +------------+
| compartment)  | supervisor |
|               | mode / OS  |
+---------------+------------+
|        ring -1 / TOR       |
+----------------------------+
| hardware / SCP key manager |
+----------------------------+

Integrity Metrics in a given level are computed by the level below.

The TOR starts Trusted Agents, the Trusted Agents are outside the OS
control.  Therefore a remote application based on remote attestation
can know about the integrity of the trusted-agent, and TOR.

ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by
TOR;

The parallel stack to the right: OS is computed by TOR; Application is
computed by the OS.
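
A toy illustration, in Python, of this layered measurement (hypothetical; not TCPA's actual PCR-extend format): each level hashes the level above into a running metric before handing over control, so a remote verifier comparing metrics notices a change at any level.

# Toy model of chained integrity metrics (sketch only).
import hashlib

def measure(metric_so_far: bytes, code: bytes) -> bytes:
    """Each level extends the running metric with a hash of the level above it."""
    return hashlib.sha1(metric_so_far + hashlib.sha1(code).digest()).digest()

scp_root = b"\x00" * 20                       # initial value held by the SCP/hardware
tor_code = b"...TOR code..."
agent_code = b"...trusted agent code..."

m = measure(scp_root, tor_code)               # SCP/hardware measures the TOR
m = measure(m, agent_code)                    # TOR measures the trusted agent

# A remote verifier comparing the reported metric against expected values
# detects a change at either level: patching the agent changes the result.
m_tampered = measure(measure(scp_root, tor_code), b"...patched agent...")
assert m != m_tampered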

So for general applications you still have to trust the OS, but the OS
could itself have its integrity measured by the TOR.  Of course, given
the rate of OS exploits, especially in Microsoft products, it seems
likely that the aspect of the OS that checks integrity of loaded
applications could itself be tampered with using a remote exploit.

Probably the latter problem is the reason Microsoft introduced ring -1
in palladium (it seems to be missing in TCPA).

Adam
--
http://www.cypherspace.org/adam/




Re: responding to claims about TCPA

2002-08-12 Thread AARG! Anonymous

David Wagner wrote:
 To respond to your remark about bias: No, bringing up Document Revocation
 Lists has nothing to do with bias.  It is only right to seek to understand
 the risks in advance.  I don't understand why you seem to insinuate
 that bringing up the topic of Document Revocation Lists is an indication
 of bias.  I sincerely hope that I misunderstood you.

I believe you did, because if you look at what I actually wrote, I did not
say that bringing up the topic of DRLs is an indication of bias:

 The association of TCPA with SNRLs is a perfect example of the bias and
 sensationalism which has surrounded the critical appraisals of TCPA.
 I fully support John's call for a fair and accurate evaluation of this
 technology by security professionals.  But IMO people like Ross Anderson
 and Lucky Green have disqualified themselves by virtue of their wild and
 inaccurate public claims.  Anyone who says that TCPA has SNRLs is making
 a political statement, not a technical one.

My core claim is the last sentence.  It's one thing to say, as you
are, that TCPA could make applications implement SNRLs more securely.
I believe that is true, and if this statement is presented in the context
of dangers of TCPA or something similar, it would be appropriate.
But even then, for a fair analysis, it should make clear that SNRLs can
be done without TCPA, and it should go into some detail about just how
much more effective a SNRL system would be with TCPA.  (I will write more
about this in responding to Joseph Ashwood.)

And to be truly unbiased, it should also talk about good uses of TCPA.

If you look at Ross Anderson's TCPA FAQ at
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html, he writes (question 4):

: When you boot up your PC, Fritz takes charge. He checks that the boot
: ROM is as expected, executes it, measures the state of the machine;
: then checks the first part of the operating system, loads and executes
: it, checks the state of the machine; and so on. The trust boundary, of
: hardware and software considered to be known and verified, is steadily
: expanded. A table is maintained of the hardware (audio card, video card
: etc) and the software (O/S, drivers, etc); Fritz checks that the hardware
: components are on the TCPA approved list, that the software components
: have been signed, and that none of them has a serial number that has
: been revoked.

He is not saying that TCPA could make SNRLs more effective.  He says
that "Fritz checks... that none of [the software components] has a
serial number that has been revoked."  He is flatly stating that the
TPM chip checks a serial number revocation list.  That is both biased
and factually untrue.

Ross's whole FAQ is incredibly biased against TCPA.  I don't see how
anyone can fail to see that.  If it were titled "FAQ about Dangers of
TCPA" at least people would be warned that they were getting a one-sided
presentation.  But it is positively shameful for a respected security
researcher like Ross Anderson to pretend that this document is giving
an unbiased and fair description.

I would be grateful if someone who disagrees with me, who thinks that
Ross's FAQ is fair and even-handed, would speak up.  It amazes me that
people can see things so differently.

And Lucky's slide presentation, http://www.cypherpunks.to, is if anything
even worse.  I already wrote about this in detail so I won't belabor
the point.  Again, I would be very curious to hear from someone who
thinks that his presentation was unbiased.




Re: Palladium: technical limits and implications

2002-08-12 Thread AARG! Anonymous

Adam Back writes:
 +---------------+------------+
 | trusted-agent | user mode  |
 | space         | app space  |
 | (code         +------------+
 | compartment)  | supervisor |
 |               | mode / OS  |
 +---------------+------------+
 |        ring -1 / TOR       |
 +----------------------------+
 | hardware / SCP key manager |
 +----------------------------+

I don't think this works.  According to Peter Biddle, the TOR can be
launched even days after the OS boots.  It does not underlie the ordinary
user mode apps and the supervisor mode system call handlers and device
drivers.

        +---------------+------------+
        | trusted-agent | user mode  |
        | space         | app space  |
        | (code         +------------+
        | compartment)  | supervisor |
        |               | mode / OS  |
+---+   +---------------+------------+
|SCP|---| ring -1 / TOR |
+---+   +---------------+



This is more how I would see it.  The SCP is more like a peripheral
device, a crypto co-processor, that is managed by the TOR.  Earlier you
quoted Seth's blog:

| The nub is a kind of trusted memory manager, which runs with more
| privilege than an operating system kernel. The nub also manages access
| to the SCP.

as justification for putting the nub (TOR) under the OS.  But I think in
this context "more privilege" could just refer to the fact that it is in
the secure memory, which is only accessed by this ring -1 or ring 0 or
whatever you want to call it.  It doesn't follow that the nub has anything
to do with the OS proper.  If the OS can run fine without it, as I think
you agreed, then why would the entire architecture have to reorient itself
once the TOR is launched? 

In other words, isn't my version simpler, as it adjoins the column at
the left to the pre-existing column at the right, when the TOR launches,
days after boot?  Doesn't it require less instantaneous, on-the-fly,
reconfiguration of the entire structure of the Windows OS at the moment
of TOR launch?  And what, if anything, does my version fail to accomplish
that we know that Palladium can do?


 Integrity Metrics in a given level are computed by the level below.

 The TOR starts Trusted Agents, the Trusted Agents are outside the OS
 control.  Therefore a remote application based on remote attestation
 can know about the integrity of the trusted-agent, and TOR.

 ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by
 TOR;

I had thought the hardware might also produce the metrics for trusted
agents, but you could be right that it is the TOR which does so.
That would be consistent with the incremental extension of trust
philosophy which many of these systems seem to follow.

 The parallel stack to the right: OS is computed by TOR; Application is
 computed OS.

No, that doesn't make sense.  Why would the TOR need to compute a metric
of the OS?  Peter has said that Palladium does not give information about
other apps running on your machine:

: Note that in Pd no one but the user can find out the totality of what SW is
: running except for the nub (aka TOR, or trusted operating root) and any
: required trusted services. So a service could say "I will only communicate
: with this app" and it will know that the app is what it says it is and
: hasn't been perverted. The service cannot say "I won't communicate with this
: app if this other app is running" because it has no way of knowing for sure
: if the other app isn't running.


 So for general applications you still have to trust the OS, but the OS
 could itself have its integrity measured by the TOR.  Of course, given
 the rate of OS exploits, especially in Microsoft products, it seems
 likely that the aspect of the OS that checks integrity of loaded
 applications could itself be tampered with using a remote exploit.

Nothing Peter or anyone else has said indicates that this is a property of
Palladium, as far as I can remember.

 Probably the latter problem is the reason Microsoft introduced ring -1
 in palladium (it seems to be missing in TCPA).

No, I think it is there to prevent debuggers and supervisor-mode drivers
from manipulating secure code.  TCPA is more of a whole-machine spec
dealing with booting an OS, so it doesn't have to deal with the question
of running secure code next to insecure code.




Re: Palladium: technical limits and implications

2002-08-12 Thread Adam Back

Peter Biddle, Brian LaMacchia or other Microsoft employees could
short-cut this guessing game at any point by coughing up some details.
Feel free guys... enciphering minds want to know how it works.

(Tim Dierks: read the earlier posts about ring -1 to find the answer
to your question about feasibility in the case of Palladium; in the
case of TCPA your conclusions are right I think).

On Mon, Aug 12, 2002 at 10:55:19AM -0700, AARG!Anonymous wrote:
 Adam Back writes:
  +---------------+------------+
  | trusted-agent | user mode  |
  | space         | app space  |
  | (code         +------------+
  | compartment)  | supervisor |
  |               | mode / OS  |
  +---------------+------------+
  |        ring -1 / TOR       |
  +----------------------------+
  | hardware / SCP key manager |
  +----------------------------+
 
 I don't think this works.  According to Peter Biddle, the TOR can be
 launched even days after the OS boots.

I thought we went over this before?  My hypothesis is: I presumed
there would be a stub TOR loaded by the hardware.  The hardware would
allow you to load a new TOR (presumably somewhat like loading a new
BIOS -- the TOR and hardware have a local trusted path to some IO
devices).

 It does not underlie the ordinary user mode apps and the supervisor
 mode system call handlers and device drivers.

I don't know what leads you to this conclusion.

         +---------------+------------+
         | trusted-agent | user mode  |
         | space         | app space  |
         | (code         +------------+
         | compartment)  | supervisor |
         |               | mode / OS  |
 +---+   +---------------+------------+
 |SCP|---| ring -1 / TOR |
 +---+   +---------------+

How would the OS or user mode apps communicate with trusted agents
with this model?  The TOR I think would be the mediator of these
communications (and of potential communications between trusted
agents).  Before loading a real TOR, the stub TOR would not implement
talking to trusted agents.

I think this is also more symmetric and therefore more likely.  The
trusted agent space is at the same level as the supervisor mode that the OS runs
in.  It's like virtualization on OS/360: there are now multiple OSes
operating under a micro-kernel (the TOR in ring -1): the real OS and
the multiple trusted agents.  The TOR is supposed to be special
purpose, simple and small enough to be audited as secure and stand a
chance of being so.
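
A sketch of what mediation by such a ring -1 micro-kernel could look like, in Python (entirely hypothetical -- Microsoft has published no such interface): callers can only pass opaque messages to a named compartment, and the TOR never exposes one compartment's state to another or to the OS.

# Hypothetical toy TOR acting as a message router between compartments.
class ToyTOR:
    """Hypothetical ring -1 micro-kernel mediating access to compartments."""
    def __init__(self):
        self._agents = {}                    # compartment name -> handler callable

    def load_agent(self, name: str, handler) -> None:
        # Only the TOR holds references to agent internals; the OS never does.
        self._agents[name] = handler

    def call_agent(self, name: str, message: bytes) -> bytes:
        # The OS or an app can pass a message in and get a reply out, but it
        # cannot peek at the agent's state or at other compartments.
        if name not in self._agents:
            raise KeyError("no such trusted agent")
        return self._agents[name](message)

def drm_agent(message: bytes) -> bytes:
    # Key material and policy checks would live only inside this compartment.
    return b"ok: " + message[:16]

tor = ToyTOR()
tor.load_agent("drm", drm_agent)
print(tor.call_agent("drm", b"play track 1"))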

The trusted agents are the secure parts of applications (dealing with
sealing, remote attestation, DRM, authenticated path to DRM
implementing graphics cards, monitors, sound cards etc; that kind of
thing).  Trusted agents should also be small, simple special purpose
to avoid them also suffering from remote compromise.  There's limited
point putting a trusted agent in a code compartment if it becomes a
full blown complex application like MS word, because then the trusted
agent would be nearly as likely to be remotely exploited as normal
OSes.

 [...] It doesn't follow that the nub has anything to do with the OS
 proper.  If the OS can run fine without it, as I think you agreed,
 then why would the entire architecture have to reorient itself once
 the TOR is launched?

Trusted agents will also need to use OS services; the way you have it,
they can't.

 In other words, isn't my version simpler, as it adjoins the column at
 the left to the pre-existing column at the right, when the TOR launches,
 days after boot?  Doesn't it require less instantaneous, on-the-fly,
 reconfiguration of the entire structure of the Windows OS at the moment
 of TOR launch?  

I don't think it's a big problem to replace a stub TOR with a given
TOR sometime after OS boot.  It's analogous to modifying kernel code
with a kernel module, only a special purpose micro-kernel in ring -1
instead of ring 0.  No big deal.

  The parallel stack to the right: OS is computed by TOR; Application is
  computed by the OS.
 
 No, that doesn't make sense.  Why would the TOR need to compute a metric
 of the OS?  

In TCPA, which does not have a ring -1, this is all the TPM does
(compute metrics on the OS, and then have the OS compute metrics on
applications).

While Trusted Agent space is separate and better protected as there
are fewer lines of code that a remote exploit has to be found in to
compromise one of them, I hardly think Palladium would discard the
existing Windows driver signing / code signing scheme.  It therefore also
seems likely that, even though it offers lower assurance, the code
signing would be extended to include metrics and attestation for the
OS, drivers and even applications.

 Peter has said that Palladium does not give information about other
 apps running on your machine:

I take this to mean that as stated somewhere in the available docs the
OS can not observe or even know how many trusted agents are running.
So he's stating that they've made OS design decisions such that the OS
could not refuse to run some 

Re: Thanks, Lucky, for helping to kill gnutella

2002-08-12 Thread Sunder

Ok Mr. Smarty Pants AARG! Anonymous remailer user, you come up with such a
method.  Cypherpunks write code, yes?  So write some code.

Meanwhile, this is why it can't be done:

If you have a client that sends a signature of its binary back to its
mommy, you can also have a rogue client that sends the same signature back
to its mommy, but is a different binary.

So how does mommy know which is the real client, and which is the rogue
client?

After all, the rogue could simply keep a copy of the real client's binary,
and send the checksum/hash for the real copy, but not run it.
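
A minimal Python illustration of the point (hypothetical protocol): a self-reported hash costs the rogue nothing, because it can hash the genuine binary it keeps on disk while actually running something else.

# Why a self-reported hash proves nothing about what is actually running.
import hashlib

def report_hash(binary_on_disk: bytes) -> str:
    return hashlib.sha1(binary_on_disk).hexdigest()

genuine_client = b"...the real client binary..."

# The honest client hashes itself; the rogue keeps a copy of the genuine
# binary on disk, hashes that, and actually runs something else entirely.
honest_report = report_hash(genuine_client)
rogue_report = report_hash(genuine_client)    # same bytes, same hash

assert honest_report == rogue_report          # "mommy" cannot tell them apart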


If you embed one half of a public key in the real client, what's to stop
the attacker from reverse engineering the real client and extracting the
key, then signing/encrypting things with that half of the key?  Or patching the
client using a debugger so it does other things also?  Or running it inside an
emulator where every operation it does is logged - so that a new rogue can
be built that does the same?  Or running it under an OS whose kernel is patched
to allow another process to access your client's memory and
routines?  Or modding the dynamic libraries which your client depends on
to do the same, etc.?


Show us the code instead of asking us to write it for you.  I say, you
can't do it.  Prove me wrong.  As long as you do not have full exclusive
control of the client hardware, you can't do what you ask with any degree
of confidence beyond what security through obscurity buys you.  In the
end, if someone cares enough, they will break it.


All this pointless bickering has already been discussed:  A long while
ago, Ken Thompson (in his "Reflections on Trusting Trust" lecture) described
how he introduced a backdoor into login.c, then modified the C compiler to
recognize when login.c was compiled and had it inject the back door, then
removed the changes to login.c.

How do you propose to have a client run in a hostile environment and
securely authenticate itself without allowing rogues to take over its
function or mimic it?


Either propose a way to do what you're asking us to do - which IMHO is
impossible without also having some sort of cop out such as having trusted
hardware, or go away and shut the fuck up.

--Kaos-Keraunos-Kybernetos---
 + ^ + :NSA got $20Bill/year|Passwords are like underwear. You don't /|\
  \|/  :and didn't stop 9-11|share them, you don't hang them on your/\|/\
--*--:Instead of rewarding|monitor, or under your keyboard, you   \/|\/
  /|\  :their failures, we  |don't email them, or put them on a web  \|/
 + v + :should get refunds! |site, and you must change them very often.
[EMAIL PROTECTED] http://www.sunder.net 

On Fri, 9 Aug 2002, AARG! Anonymous wrote:

 If only there were a technology in which clients could verify and yes,
 even trust, each other remotely.  Some way in which a digital certificate
 on a program could actually be verified, perhaps by some kind of remote,
 trusted hardware device.  This way you could know that a remote system was
 actually running a well-behaved client before admitting it to the net.
 This would protect Gnutella from not only the kind of opportunistic
 misbehavior seen today, but the future floods, attacks and DOSing which
 will be launched in earnest once the content companies get serious about
 taking this network down.




Re: dangers of TCPA/palladium

2002-08-12 Thread AARG! Anonymous

Mike Rosing wrote:

 The difference is fundamental: I can change every bit of flash in my BIOS.
 I can not change *anything* in the TPM.  *I* control my BIOS.  IF, and
 only IF, I can control the TPM will I trust it to extend my trust to
 others.  The purpose of TCPA as spec'ed is to remove my control and
 make the platform trusted to one entity.  That entity has the master
 key to the TPM.
 
 Now, if the spec says I can install my own key into the TPM, then yes,
 it is a very useful tool.  It would be fantastic in all the portables
 that have been stolen from the FBI for example.  Assuming they use a
 password at turn on, and the TPM is used to send data over the net,
 then they'd know where all their units are and know they weren't
 compromised (or how badly compromised anyway).
 
 But as spec'ed, it is very seriously flawed.

Ben Laurie replied:

 Although the outcome _may_ be like this, your understanding of the TPM 
 is seriously flawed - it doesn't prevent you from running whatever you 
 want, but what it does do is allow a remote machine to confirm what you 
 have chosen to run.

David Wagner commented:

 I don't understand your objection.  It doesn't look to me like Rosing
 said anything incorrect.  Did I miss something?

 It doesn't look like he ever claimed that TCPA directly prevents one from
 running what you want to; rather, he claimed that its purpose (or effect)
 is to reduce his control, to the benefit of others.  His claims appear
 to be accurate, according to the best information I've seen.

I don't believe that is an accurate paraphrase of what Mike Rosing said.
He said the purpose (not effect) was to remove (not reduce) his control,
and make the platform trusted to one entity (not for the benefit of
others).  Unless you want to defend the notion that the purpose of TCPA
is to *remove* user control of his machine, and make it trusted to only
*one other entity* (rather than a general capability for remote trust),
then I think you should accept that what he said was wrong.

And Mike said more than this.  He said that if he could install his own
key into the TPM that would make it a very useful tool.  This is wrong;
it would completely undermine the trust guarantees of TCPA, make it
impossible for remote observers to draw any useful conclusions about the
state of the system, and render the whole thing useless.  He also talked
about how this could be used to make systems phone home at boot time.
But TCPA has nothing to do with any such functionality as this.

In contrast, Ben Laurie's characterization of TCPA is 100% factual and
accurate.  Do you at least agree with that much, even if you disagree
with my criticism of Mike Rosing's comments?




trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

I think you are making incorrect presumptions about how you would use
Palladium hardware to implement a secure DRM system.  If used as you
suggest it would indeed suffer the vulnerabilities you describe.

The difference between an insecure DRM application such as you
describe and a secure DRM application correctly using the hardware
security features is somewhat analogous to the current difference
between an application that relies on not being reverse engineered for
its security vs. one that encrypts data with a key derived from a user
password.

In a Palladium DRM application done right everything which sees keys
and plaintext content would reside inside Trusted Agent space, inside
DRM enabled graphics cards which restrict access to video RAM, and
later DRM enabled monitors with encrypted digital signal to the
monitor, and DRM enabled soundcards, encrypted content to speakers.
(The encrypted content to media-related output peripherals is like
HDCP, only done right with non-broken crypto).

Now all that will be left in application space -- the space you can reverse
engineer and hack on -- will be UI elements and application logic that
drive the trusted agent, remote attestation, content delivery and
hardware.  At no time will keys or content reside in space that you
can virtualize or debug.
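
A sketch of that split, in Python (hypothetical names throughout): application code just moves opaque encrypted blobs around, while only the trusted-agent side ever touches the content key, handing plaintext straight to a (here simulated) DRM-capable output device.

# Hypothetical split between untrusted application logic and a trusted DRM agent.
from hashlib import sha256

CONTENT_KEY = sha256(b"shared only by license server and trusted agent").digest()

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))       # toy cipher, sketch only

def license_server_package(content: bytes) -> bytes:
    # Runs at the content provider; the key never appears in app space.
    return xor(content, CONTENT_KEY)

class DRMVideoCard:
    def render(self, frames: bytes) -> None:
        print("rendering %d bytes" % len(frames))

def trusted_agent_play(blob: bytes, card: DRMVideoCard) -> None:
    # Only here is the key used; plaintext goes straight to the DRM output path.
    card.render(xor(blob, CONTENT_KEY))

# Application space: UI and plumbing only -- it just moves an opaque blob around.
blob = license_server_package(b"some licensed content")
trusted_agent_play(blob, DRMVideoCard())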


In the short term it may be that some of these will not be fully
implemented, so that content does pass through OS or application space,
or into non DRM video cards and non DRM monitors, but the above is the
end-goal as I understand it.

As you can see there is still the limit of the non-remote
exploitability of the trusted agent code, but this is within the
control of the DRM vendor.  If he does a good job of making a simple
software architecture and avoiding potential for buffer overflows he
stands a much better chance of having a secure DRM platform than if, as
you describe, exploited OS code or rogue driver code can subvert his
application.


There is also, I suppose, the possibility of pushing content decryption onto
the DRM video card, so the TOR does little apart from channelling key
exchange messages from the SCP to the video card, and channelling remote
attestation and key exchanges between the DRM license server and the
SCP.  The rest would be streaming encrypted video formats such as CSS
VOB blocks (only with good crypto) from the network or disk to the
video card.


Similar kinds of arguments about the correct break-down between
application logic and placement of security policy enforcing code in
Trusted Agent space apply to general applications.  For example you
could imagine a file sharing application which hid the data the users
machine was serving from the user.  If you did it correctly, this
would be secure to the extent of the hardware tamper resistance (and
the implementer's ability to keep the security policy enforcing code
line-count down and audit it well).


At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space then, while all its
application logic is fully protected, the danger will be that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via buffer overflow.


So therein lies the crux of secure software design in the Palladium
style secure application space: choosing a good break-down between
security policy enforcement, and application code.  There must be a
balance, and what makes sense and is appropriate depends on the
application and the limits of the ingenuity of the protocol designer
in coming up with clever designs that cover, to hardware tamper-resistant
levels, the application's desired policy enforcement while
providing a workably small and practically auditable associated
trusted agent module.


So there are practical limits stemming from realities to do with code
complexity being inversely proportional to auditability and security,
but the extra ring -1, remote attestation, sealing and integrity
metrics really do offer some security advantages over the current
situation.

Adam

On Mon, Aug 12, 2002 at 03:28:15PM -0400, Tim Dierks wrote:
 At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
 (Tim Dierks: read the earlier posts about ring -1 to find the answer
 to your question about feasibility in the case of Palladium; in the
 case of TCPA your conclusions are right I think).
 
 The addition of an additional security ring with a secured, protected 
 memory space does not, in my opinion, change the fact that such a ring 
 cannot accurately determine that a particular request is consistent with 
 any definable security policy. I do not think it is technologically 
 feasible for ring -1 to determine, upon receiving a request, that the 
 request was generated by trusted software operating in accordance with the 
 intent of whomever signed it.
 
 Specifically, let's presume that a Palladium-enabled application is being 
 used for DRM; a secure 

Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

At this point we largely agree: security is improved, but the limit
remains assuring the security of over-complex software.  To sum up:

The limit of what is securely buildable now becomes what is securely
auditable.  Before, without the Palladium the limit was the security
of the OS, so this makes a big difference.

Yes, some people may design over-complex trusted agents, with sloppy
APIs and so forth, but the nice thing about trusted agents is that they
are compartmentalized:

If the MPAA and Microsoft shoot themselves in the foot with a badly
designed, over-complex DRM trusted agent component for MS Media Player,
it has no bearing on my ability to implement a secure file-sharing or
secure e-cash system in a compartment with rigorously analysed APIs,
and well audited code.  The leaky compromised DRM app can't compromise
the security policies of my app.

Also, it's unclear from the limited information available, but it may be
that trusted agents, like other ring-0 code (eg the OS itself),
can delegate tasks to user mode code running in trusted agent space,
which can't examine other user level space, nor the space of the
trusted agent which started them, and also can't be examined by the OS.

In this way for example remote exploits could be better contained in
the sub-division of trusted agent code.  eg. The crypto could be done
by the trusted-agent proper, the mpeg decoding by a user-mode
component; compromise the mpeg-decoder, and you just get plaintext not
keys.  Various divisions could be envisaged.
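
A small sketch of that sub-division, in Python (hypothetical): the trusted-agent proper holds the key and hands only decrypted frames to a separate decoder component, so exploiting the decoder yields frames but never keys.

# Hypothetical sub-division: key handling vs. media decoding in separate parts.
from hashlib import sha256

class CryptoAgent:
    """Trusted-agent proper: the only component that ever holds the key."""
    def __init__(self):
        self._key = sha256(b"content key, confined to this compartment").digest()

    def decrypt_frame(self, encrypted: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(encrypted, self._key))   # toy cipher

class MpegDecoder:
    """User-mode component: receives plaintext frames but never the key."""
    def decode(self, frame: bytes) -> str:
        return "decoded <%d bytes>" % len(frame)

agent, decoder = CryptoAgent(), MpegDecoder()
frame = agent.decrypt_frame(b"\x12\x34\x56\x78")
print(decoder.decode(frame))     # compromising the decoder leaks frames, not keys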


Given that most current applications don't even get the simplest
applications of encryption right (storing the key and password in the
encrypted file and checking if the password is right by string comparison is
surprisingly common), the prospects are not good for general
applications.  However it becomes more feasible to build secure
applications in the environment where it matters, or the consumer
cares sufficiently to pay for the difference in development cost.
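
By contrast, a minimal sketch in Python of getting that simple case right (the toy XOR cipher is only a stand-in for real encryption): derive the key from the password with a salted KDF and verify with a MAC, rather than storing the password in the file and string-comparing it.

# Sketch: password-based protection done the sane way (toy cipher, real KDF/MAC).
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

def protect(password: str, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    key = derive_key(password, salt)
    ct = bytes(b ^ k for b, k in zip(plaintext, hashlib.sha256(key).digest()))  # toy cipher
    tag = hmac.new(key, salt + ct, hashlib.sha256).digest()
    return salt + tag + ct

def recover(password: str, blob: bytes) -> bytes:
    salt, tag, ct = blob[:16], blob[16:48], blob[48:]
    key = derive_key(password, salt)
    if not hmac.compare_digest(tag, hmac.new(key, salt + ct, hashlib.sha256).digest()):
        raise ValueError("wrong password or corrupted file")
    return bytes(b ^ k for b, k in zip(ct, hashlib.sha256(key).digest()))

blob = protect("hunter2", b"secret notes")
assert recover("hunter2", blob) == b"secret notes"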

Of course all this assumes Microsoft manages to securely implement a
TOR and SCP interface.  And whether they manage to successfully use
trusted IO paths to prevent the OS and applications from tricking the
user into bypassing intended trusted agent functionality (another
interesting sub-problem).  CC EAL3 on the SCP is a good start, but
they have pressures to make the TOR and Trusted Agent APIs flexible,
so we'll see how that works out.

Adam
--
http://www.cypherspace.org/adam/

On Mon, Aug 12, 2002 at 04:32:05PM -0400, Tim Dierks wrote:
 At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
 At some level there has to be a trade-off between what you put in
 trusted agent space and what becomes application code.  If you put the
 whole application in trusted agent space then, while all its
 application logic is fully protected, the danger will be that you have
 added too much code to reasonably audit, so people will be able to
 gain access to that trusted agent via buffer overflow.

 I agree; I think the system as you describe it could work and would be
 secure, if correctly executed. However, I think it is infeasible to
 generally implement commercially viable software, especially in the
 consumer market, that will be secure under this model. Either the
 functionality will be too restricted to be accepted by the market, or there
 will be a set of software flaws that allow the system to be penetrated.

 The challenge is to put all of the functionality which has access to
 content inside of a secure perimeter, while keeping the perimeter secure
 from any data leakage or privilege escalation. [...]




Re: Seth on TCPA at Defcon/Usenix

2002-08-12 Thread Mike Rosing

On Mon, 12 Aug 2002, AARG! Anonymous wrote:

 It is clear that software hacking is far from almost trivial and you
 can't assume that every software-security feature can and will be broken.

Anyone doing security had better assume software can and will be
broken.  That's where you *start*.

 Furthermore, even when there is a break, it won't be available to
 everyone.  Ordinary people aren't clued in to the hacker community
 and don't download all the latest patches and hacks to disable
 security features in their software.  Likewise for business customers.
 In practice, if Microsoft wanted to implement a global, fascist DRL,
 while some people might be able to patch around it, probably 95%+ of
 ordinary users would be stuck with it.

Yes, this is the problem with security today.  That's why lots of people
are advocating that the OS should be built from the ground up with
security as the prime goal rather than ad hoc addons as it is now.
Nobody wants to pay for it tho :-)

 In short, while TCPA could increase the effectiveness of global DRLs,
 they wouldn't be *that* much more effective.  Most users will neither
 hack their software nor their hardware, so the hardware doesn't make
 any difference for them.  Hackers will be able to liberate documents
 completely from DRL controls, whether they use hardware or software
 to do it.  The only difference is that there will be fewer hackers,
 if hardware is used, because it is more difficult.  Depending on the
 rate at which important documents go on DRLs, that may not make any
 difference at all.

So what's the point of TCPA if a few hackers can steal the most
expensive data?  Are you now admitting TCPA is broken?  You've got
me very confused now!

I'm actually really confused about the whole DRM business anyway.  It
seems to me that any data available to human perceptions can be
duplicated.  Period.  The idea of DRM (as I understand it) is that you can
hand out data to people you don't trust, and they can't copy it.  To me,
DRM seems fundamentally impossible.

Patience, persistence, truth,
Dr. mike




Re: CDR: Re: Seth on TCPA at Defcon/Usenix

2002-08-12 Thread Jamie Lawrence

On Mon, 12 Aug 2002, AARG! Anonymous wrote:

 His analysis actually applies to a wide range of security features,
 such as the examples given earlier: secure games, improved P2P,
 distributed computing as Adam Back suggested, DRM of course, etc..
 TCPA is a potentially very powerful security enhancement, so it does
 make sense that it can strengthen all of these things, and DRLs as well.
 But I don't see that it is fair to therefore link TCPA specifically with
 DRLs, when there are any number of other security capabilities that are
 also strengthened by TCPA.

Sorry, but now you're just trolling. 

Acid is great for removing all manner of skin problems. It also happens
to cause death, but linking fatalities to it is unfair, considering
that's not what acid was _intended_ to do. 

Creating cheat-proof gaming at the cost of allowing document-revocation-
enabled software sounds like a bad idea.

-j