TCPA: RIP

2005-02-25 Thread Tyler Durden
Good presentation. I liked the boot diagrams quite a bit.
Prediction (and remember you heard it here first): TCPA will fail. Oh, it'll 
see some spot uses, don't get me wrong. These spot uses might even remain 
for a while. But the good thing is that Microsoft is probably going to have 
to carry the ball on this one, and they'll think they have a few iterations 
to iron out the bugs. Instead, users will defect in droves as all sorts of 
unexpected and wacky things start to happen.

Right about then even some of the studios are going to begin to understand 
that by choking off the spigot they'll be choking their own product flow 
which will have to increasingly compete with independents (who can 
distribute music over the internet just as easily as SONY can). So some 
genius will make a convincing enough boardroom presentation showing that the 
additional revenues they gain through TCPA are far more than offset by the 
effective loss of advertising. That realization will hit just as the general 
public starts learning what TCPA is and why their computer is as buggy and 
crashy as it was during the Windows 95 days.

Boo hoo.
-TD



Using TCPA

2005-02-04 Thread Eric Murray
On Thu, Feb 03, 2005 at 11:51:57AM -0500, Trei, Peter wrote:
 
 It could easily be leveraged to make motherboards
 which will only run 'authorized' OSs, and OSs
 which will run only 'authorized' software.

[..]

 If you 'take ownership' as you put it, the internal
 keys and certs change, and all of a sudden you
 might not have a bootable computer anymore.

I have an application for exactly that behaviour.
It's a secure appliance.  Users don't run
code on it.  It needs to be able
to verify that it's running the authorized OS and software
and that new software is authorized.
(it does it already, but a TCPA chip might do it better).
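A software-only version of that check can be sketched as a golden-hash
allowlist; what a TCPA chip would add is rooting the comparison in
tamper-resistant hardware. All names and values below are illustrative,
not the appliance's actual design:

```python
import hashlib

# Golden hashes of the authorized OS image and application binary.
# The byte strings stand in for real images; a real appliance would
# ship these digests signed by the vendor.
AUTHORIZED = {
    "kernel": hashlib.sha256(b"appliance-kernel-1.0").hexdigest(),
    "app":    hashlib.sha256(b"appliance-app-1.0").hexdigest(),
}

def is_authorized(name: str, image: bytes) -> bool:
    """Accept an image only if its digest matches the recorded golden hash."""
    return AUTHORIZED.get(name) == hashlib.sha256(image).hexdigest()

assert is_authorized("kernel", b"appliance-kernel-1.0")
assert not is_authorized("kernel", b"appliance-kernel-1.1")  # unauthorized update
assert not is_authorized("rootkit", b"anything")             # unknown component
```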

So a question for the TCPA proponents (or opponents):
how would I do that using TCPA?


Eric



Re: TCG(TCPA) anonymity and Lucky Green

2004-06-30 Thread R. A. Hettinga
At 4:18 PM -0400 6/29/04, An Metet wrote:
On August 6, 2002, Lucky Green wrote a reply to Anonymous (whom I will
now come clean and admit was none other than me)

Prove it.

;-)

Cheers,
RAH

-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



TCG(TCPA) anonymity and Lucky Green

2004-06-30 Thread An Metet
On August 6, 2002, Lucky Green wrote a reply to Anonymous (whom I will
now come clean and admit was none other than me), about the suggestion
that TCPA (now called TCG) could incorporate anonymous cryptographic
credentials to protect users' privacy, rather than the cumbersome
privacy CA mechanism they actually adopted.

I had written:

 In any case, I agree that something like this would be an 
 excellent enhancement to the technology.  IMO it is very much 
 in the spirit of TCPA. I suspect they would be very open to 
 this suggestion.

Lucky Green replied:

 Though routinely professing otherwise, evidently Anonymous knows nothing
 of the spirit of the TCPA: I proposed the use of blinding schemes to the
 TCPA as far back as 2 years ago as a substitute to the Privacy CA
 schemes which are subject to potential collusion. I believe
 "unreceptive", rather than "very much open to this suggestion", would
 more accurately describe the TCPA's spirit Anonymous holds so high.

However, it now turns out that TCG has in fact incorporated
exactly the kind of mechanism which Lucky predicted they would be
unreceptive to.  The new TCG 1.2 spec includes Direct Anonymous
Attestation based on Camenisch credentials.  See it described at
http://www.hpl.hp.com/techreports/2004/HPL-2004-93.pdf.  Here is the
abstract:

   This paper describes the direct anonymous attestation scheme (DAA).
   This scheme was adopted by the Trusted Computing Group as the method
   for remote authentication of a hardware module, called trusted platform
   module (TPM), while preserving the privacy of the user of the platform
   that contains the module. Direct anonymous attestation can be seen
   as a group signature without the feature that a signature can be
   opened, i.e., the anonymity is not revocable. Moreover, DAA allows for
   pseudonyms, i.e., for each signature a user (in agreement with the
   recipient of the signature) can decide whether or not the signature
   should be linkable to another signature. DAA furthermore allows for
   detection of known keys: if the DAA secret keys are extracted
   from a TPM and published, a verifier can detect that a signature
   was produced using these secret keys. The scheme is provably secure
   in the random oracle model under the strong RSA and the decisional
   Diffie-Hellman assumption.

This is a real cryptographic tour de force.  It protects privacy,
includes irrevocable anonymity, and yet if keys get pulled out of
the system and published, they can be invalidated, even while fully
protecting the anonymity of users of valid keys!  It sounds impossible,
but these guys are wizards.
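The basename-controlled linkability and rogue-key detection described in
the abstract can be illustrated with a toy discrete-log sketch. This
omits the credential and the zero-knowledge proof entirely and uses
illustrative parameters; it only shows how basename-derived tags behave:

```python
import hashlib

P = 2**521 - 1  # toy prime modulus; a real DAA group is chosen very differently

def tag(secret: int, basename: bytes) -> int:
    """Pseudonym tag: zeta^secret mod P, with zeta derived from the basename."""
    zeta = int.from_bytes(hashlib.sha256(basename).digest(), "big")
    return pow(zeta, secret, P)

f = 123456789  # a TPM's DAA secret (illustrative)

# Same basename -> same tag: the user opts in to linkable signatures.
assert tag(f, b"shop.example") == tag(f, b"shop.example")
# Different basenames -> unrelated tags: signatures stay unlinkable.
assert tag(f, b"shop.example") != tag(f, b"bank.example")

# Known-key detection: once a secret f is extracted and published, any
# verifier can recompute the tag for its basename and spot signatures
# made with it, without learning anything about unrevoked users.
published = 123456789
assert tag(published, b"shop.example") == tag(f, b"shop.example")
```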

We haven't heard much from Lucky on TCG/TCPA lately.  It would be
interesting to get his reaction to the latest moves.  One ironic
trend is that although TCPA was claimed to be designed to kill open
source, in fact all the work on the technology is happening on Linux!
See enforcer.sourceforge.net for an example of using TCG to validate a
Linux kernel and executables.  IBM's work on tcgLinux is another project
along these lines.  Pretty exciting stuff.



Palladium/TCPA/NGSCB

2003-10-23 Thread Bill Frantz
Mark Miller pointed out to me that currently much of our protection from
viruses comes from people at the anti-virus companies who quickly grab each
new virus, reverse engineer it, and send out information about its payload
and effects.  Any system which hides code from reverse engineering will
make this process more difficult.  To the extent that Palladium/TCPA/NGSCB
hides code, and to the extent it succeeds at this hiding, the more it
encourages new and more pervasive viruses.

Cheers - Bill


-
Bill Frantz| There's nothing so clear as a | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet. -- Dean Tribble | Los Gatos, CA 95032



Re: Palladium/TCPA/NGSCB

2003-10-23 Thread Major Variola (ret)
At 11:06 PM 10/22/03 -0700, Bill Frantz wrote:
Mark Miller pointed out to me that currently much of our protection from
viruses comes from people at the anti-virus companies who quickly grab each
new virus, reverse engineer it, and send out information about its payload
and effects.

You could be talking about biology as well.

Any system which hides code from reverse engineering will
make this process more difficult.  To the extent that Palladium/TCPA/NGSCB
hides code, and to the extent it succeeds at this hiding, the more it
encourages new and more pervasive viruses.

A virus that contains friendly IFF codes can evade an immune system.
Some cloak themselves in membranes derived from cells they were born in.

Thus they present the right IFF response.

A virus that appears to Palladium to be friendly and worthy of the full
protection (the right hashes, etc.) will be a fun thing.

Some virii are innocuous except when they pick up a piece of virulence
code.  Then they kill.  IIRC anthrax is like this, some of the streps.
One can imagine writing a virus which is in fact merely a bit of
virulence code taken in by an otherwise innocuous but replicating program.

It's common in biolabs to cross a hard-to-grow nasty with an easy-to-grow
labbug so you can study the nasty.  Sometimes, the result is dangerous.  See
the synthetic mousepox which killed the mice.

And virii that infect the immune system can be fun too --imagine a virus
infecting your antiviral program.  HIV for Windows.



Re: Palladium/TCPA/NGSCB

2003-10-23 Thread Eric Murray
On Thu, Oct 23, 2003 at 11:59:47AM -0700, Major Variola (ret) wrote:
 And virii that infect the immune system can be fun too --imagine a virus
 infecting your antiviral program.  HIV for Windows.


Or a virus that modifies your other programs to make them appear to
be known virii.  You'd have to turn off your AV programs
to keep them from destroying your files (or moving them
around, going crazy with warnings when you start any program, etc.)

I'd bet that no AV programs have safeguards against this
sort of false-positive attack.

Eric



Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-11 Thread Michel Messerschmidt
On Sun, Feb 09, 2003 at 02:32:13PM -0800, Mike Rosing wrote:
 TPM != TCPA.  TCPA with *user* control is good.

The TPM is a mandatory part of the TCPA specifications.
There will be no TCPA without TPM.

And there will be no TCPA-enabled system with complete user control. 
Just look at the main specification:
 - users can't access or alter the Endorsement Key
 - the TPM can't be disabled completely. This allows operating systems
   that bind (product activation?) themselves to a unique TPM and
   refuse to start if it's not fully activated.
 
If a system doesn't meet these requirements (as the IBM paper suggests) 
it isn't a TCPA system.


  Therefore for DRM purposes TCPA and Palladium are both socially bad
  technologies.
 
 It's bad only if the *user* does not have control over their own machines.
 If each enterprise can control their own machines, completely
 independently of all other external organizations, then TCPA could be
 really useful.  If only Bill Gates controls all machines, it's bad for the
 rest of us (but pretty damn good for Bill!!)

TCPA offers some interesting possibilities that may enhance system 
security. But with the current specifications, it likely destroys any 
privacy that's left on today's systems.


-- 
Michel Messerschmidt   [EMAIL PROTECTED]
antiVirusTestCenter, Computer Science, University of Hamburg




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-11 Thread Mike Rosing
On Tue, 11 Feb 2003, Michel Messerschmidt wrote:

 The TPM is a mandatory part of the TCPA specifications.
 There will be no TCPA without TPM.

That makes sense, TPM is just key storage.

 And there will be no TCPA-enabled system with complete user control.
 Just look at the main specification:
  - users can't access or alter the Endorsement Key
  - the TPM can't be disabled completely. This allows operating systems
    that bind (product activation?) themselves to a unique TPM and
    refuse to start if it's not fully activated.

 If a system doesn't meet these requirements (as the IBM paper suggests)
 it isn't a TCPA system.

Not having access to the secret key inside the TPM is what makes the
hardware secure.  Not being able to disable it is a problem for sure.
To me that implies the user does not have control.  So my idea of a
good TCPA is not part of the spec.  Too bad.  That makes it
impossible to sell to anyone with a brain cell left.

 TCPA offers some interesting possibilities that may enhance system
 security. But with the current specifications, it likely destroys any
 privacy that's left on today's systems.

If they want to sell it, they'll have to fix the specs.  Any IT pro
is going to explain to the CEO how it allows somebody else access
to all of a company's data, and poof, TCPA goes away.

Patience, persistence, truth,
Dr. mike




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-09 Thread Mike Rosing
On Sun, 9 Feb 2003, Anonymous via the Cypherpunks Tonga Remailer wrote:

 However note: you can't defend TCPA as being good vs Palladium bad
 (as you did in an earlier post) by saying that TCPA only provides
 key storage.

TPM != TCPA.  TCPA with *user* control is good.

 As Michel noted TCPA and Palladium both provide remote attestation and
 sealing, and it is this pair of functions which provides the DRM
 functionality.

 Therefore for DRM purposes TCPA and Palladium are both socially bad
 technologies.

It's bad only if the *user* does not have control over their own machines.
If each enterprise can control their own machines, completely
independently of all other external organizations, then TCPA could be
really useful.  If only Bill Gates controls all machines, it's bad for the
rest of us (but pretty damn good for Bill!!)

Patience, persistence, truth,
Dr. mike




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-08 Thread Michel Messerschmidt
On Wed, Feb 05, 2003 at 07:15:50AM -0800, Mike Rosing wrote:
 On Tue, 4 Feb 2003, AARG! Anonymous wrote:
 
  The main features of TCPA are:
 
  - key storage
 
 The IBM TPM does this part.

AFAIK, IBM's embedded security subsystem 1.0 is only a key 
storage device (Atmel AT90SP0801 chip).
But the TPM we're talking about is part of the TCPA compliant 
embedded security subsystem 2.0 which supports all specified 
TPM functions, even if the released TPM driver can't use all 
of them (for now).

BTW, why should I need a TPM only for secure key storage?
Any smartcard is better suited for this.


-- 
Michel Messerschmidt
[EMAIL PROTECTED]
http://www.michel-messerschmidt.de




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-08 Thread Mike Rosing
On Sat, 8 Feb 2003, Michel Messerschmidt wrote:

 AFAIK, IBM's embedded security subsystem 1.0 is only a key
 storage device (Atmel AT90SP0801 chip).
 But the TPM we're talking about is part of the TCPA compliant
 embedded security subsystem 2.0 which supports all specified
 TPM functions, even if the released TPM driver can't use all
 of them (for now).

 BTW, why should I need a TPM only for secure key storage?
 Any smartcard is better suited for this.

Because it's soldered into the portable.  For an enterprise
that means they *know* each portable out in the field is held
by the correct user.  With a smart card, they only know the
card is held by the correct user.

It ain't perfect, but it's useful for the real world.

Patience, persistence, truth,
Dr. mike




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-08 Thread Anonymous via the Cypherpunks Tonga Remailer
Mike Rosing wrote:
  BTW, why should I need a TPM only for secure key storage ?
  Any smartcard is better suited for this.

 Because it's soldered into the portable.  For an enterprise that means
 they *know* each portable out in the field is held by the correct
 user.  With a smart card, they only know the card is held by the
 correct user.

A key store chip could be useful for some applications.

However note: you can't defend TCPA as being good vs Palladium bad
(as you did in an earlier post) by saying that TCPA only provides
key storage.

As Michel noted TCPA and Palladium both provide remote attestation and
sealing, and it is this pair of functions which provides the DRM
functionality.

Therefore for DRM purposes TCPA and Palladium are both socially bad
technologies.




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-06 Thread Anonymous via the Cypherpunks Tonga Remailer
Mike Rosing wrote:
  - secure boot
  - sealing
  - remote attestation

 It does *not* do these parts.

I think you may have been misled by the slant of the paper.

Quoting from the paper:

http://www.research.ibm.com/gsal/tcpa/why_tcpa.pdf

you will see:

| The TCPA chip is not particularly suited to DRM. While it does have
| the ability to report signed PCR information, and this information
| could be used to prevent playback unless a trusted operating system
| and application were in use, this type of scheme would be a
| nightmare for content providers to manage. Any change to the BIOS,
| the operating system, or the application would change the reported
| values. How could content providers recognize which reported PCR
| values were good, given the myriad platforms, operating system
| versions, and frequent software patches?

which clearly admits that the IBM TPM does implement the full set of
TCPA functionality as specified in the openly published TCPA spec, and,
for the purposes of our discussion, specifically that it does implement
the remote attestation feature.

(Though the author makes some unimaginative claims that it is not
suited for DRM because upgrades may make that difficult to manage.
Any sane software architecture built on top of this tech can easily
tackle that problem.)

 That's why IBM wants the TPM != TCPA to be loud and clear.  That's
 why the RIAA can't expect it to solve their problem.

I'd think the more likely reason they want to downplay that TCPA is a
DRM enabling technology is because it's bad publicity for a hardware
manufacturer.
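The signed PCR information mentioned in the quoted paper is built by
iterated hashing; a minimal sketch of the extend operation (simplified
relative to the spec, with illustrative component names) shows why any
change to BIOS, OS, or application changes the reported values:

```python
import hashlib

def extend(pcr: bytes, event: bytes) -> bytes:
    """TPM-style extend: PCR <- SHA-1(PCR || SHA-1(event))."""
    return hashlib.sha1(pcr + hashlib.sha1(event).digest()).digest()

def measured_boot(stack):
    """Fold a boot chain into one 20-byte PCR value, starting from zeros."""
    pcr = b"\x00" * 20
    for component in stack:
        pcr = extend(pcr, component)
    return pcr

baseline = measured_boot([b"BIOS 1.0", b"bootloader", b"os 5.0", b"player 2.0"])
patched  = measured_boot([b"BIOS 1.0", b"bootloader", b"os 5.1", b"player 2.0"])

# A single OS patch anywhere in the chain yields a completely different
# final value -- the management headache the IBM paper complains about.
assert baseline != patched
```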




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-06 Thread Mike Rosing
On Thu, 6 Feb 2003, Anonymous via the Cypherpunks Tonga Remailer wrote:

 I think you may have been misled by the slant of the paper.

 Quoting from the paper:

 http://www.research.ibm.com/gsal/tcpa/why_tcpa.pdf

 you will see:

 | The TCPA chip is not particularly suited to DRM. While it does have
 | the ability to report signed PCR information, and this information
 | could be used to prevent playback unless a trusted operating system
 | and application were in use, this type of scheme would be a
 | nightmare for content providers to manage. Any change to the BIOS,
 | the operating system, or the application would change the reported
 | values. How could content providers recognize which reported PCR
 | values were good, given the myriad platforms, operating system
 | versions, and frequent software patches?

 which clearly admits that the IBM TPM does implement the full set of
 TCPA functionality as specified in the openly published TCPA spec, and,
 for the purposes of our discussion, specifically that it does implement
 the remote attestation feature.

They can say all they want in a white paper.  I was looking at the source
code, which can only query the TPM chip.  The chip itself contains no ROM;
you can't jump into it.  In order to meet the requirements of TCPA it
needs a secure execution region, and the IBM TPM simply doesn't have it.

 (Though the author makes some unimaginative claims that it is not
 suited for DRM because upgrades may make that difficult to manage.
 Any sane software architecture built on top of this tech can easily
 tackle that problem.)

And any hacker can bypass it, which is what the guys at IBM are saying.

 I'd think the more likely reason they want to downplay that TCPA is a
 DRM enabling technology is because it's bad publicity for a hardware
 manufacturer.

I doubt it.  If they could do what RIAA wants they could make a lot of
money.  Morals come second to money.

Patience, persistence, truth,
Dr. mike




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-02-05 Thread Mike Rosing
On Tue, 4 Feb 2003, AARG! Anonymous wrote:

 The main features of TCPA are:

 - key storage

The IBM TPM does this part.

 - secure boot
 - sealing
 - remote attestation

It does *not* do these parts.  That's why IBM wants the TPM != TCPA
to be loud and clear.  That's why the RIAA can't expect it to solve
their problem.

Patience, persistence, truth,
Dr. mike




Re: Sovereignty issues and Palladium/TCPA

2003-01-31 Thread David Howe
at Friday, January 31, 2003 2:18 AM, Peter Gutmann
[EMAIL PROTECTED] was seen to say:
schnipp
   More particularly, governments are likely to want to explore the
 issues related to potential foreign control/influence over domestic
 governmental use/access to domestic government held data.
   In other words, what are the practical and policy implications for a
 government if a party external to the government may have the
 potential power to turn off our access to its own information and
 that of its citizens.
And indeed - download patches silently to change the "disable"
functionality to "email anything interesting directly to the CIA"
functionality.




Re: Sovereignty issues and Palladium/TCPA

2003-01-31 Thread Dave Howe
I have seen this *five* times already - is there some sort of weird mailing
loop in action?
I am fairly certain I haven't sent it five times spread out over two
days.




[IP] Open Source TCPA driver and white papers (fwd)

2003-01-24 Thread Eugen Leitl
-- Forwarded message --
Date: Fri, 24 Jan 2003 02:29:27 -0500
From: Dave Farber [EMAIL PROTECTED]
To: ip [EMAIL PROTECTED]
Subject: [IP] Open Source TCPA driver and white papers


-- Forwarded Message
From: David Safford [EMAIL PROTECTED]
Date: Tue, 21 Jan 2003 12:05:39 -0500
To: [EMAIL PROTECTED]
Subject: [open-source] Open Source TCPA driver and white papers


IBM has released a Linux device driver under GPL for its TCPA chip (TPM).
The driver is available at
http://www.research.ibm.com/gsal/tcpa/

This page also has links to two papers, one presenting positive uses
of the chip, and the second rebutting misinformation about the chip.

These papers, combined with the Linux driver and the TCPA specification
at http://www.trustedcomputing.org, give everyone the ability to
test an actual chip (such as in the Thinkpad T30), to see for themselves
what it can, and cannot do.

Note: the papers and driver do not discuss Palladium.
  Palladium and TCPA are two separate topics.

dave safford
[EMAIL PROTECTED]



-- End of Forwarded Message

-
You are subscribed as [EMAIL PROTECTED]
To unsubscribe or update your address, click
  http://v2.listbox.com/member/?listname=ip

Archives at: http://www.interesting-people.org/archives/interesting-people/




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-01-24 Thread Mike Rosing
On Fri, 24 Jan 2003, Eugen Leitl wrote:

 -- Forwarded message --
 Date: Fri, 24 Jan 2003 02:29:27 -0500
 From: Dave Farber [EMAIL PROTECTED]
 To: ip [EMAIL PROTECTED]
 Subject: [IP] Open Source TCPA driver and white papers


 -- Forwarded Message
 From: David Safford [EMAIL PROTECTED]
 Date: Tue, 21 Jan 2003 12:05:39 -0500
 To: [EMAIL PROTECTED]
 Subject: [open-source] Open Source TCPA driver and white papers


 IBM has released a Linux device driver under GPL for its TCPA chip (TPM).
 The driver is available at
 http://www.research.ibm.com/gsal/tcpa/

 This page also has links to two papers, one presenting positive uses
 of the chip, and the second rebutting misinformation about the chip.

Thanks Eugen,  It looks like the IBM TPM chip is only a key
store read/write device.  It has no code space for the kind of
security discussed in the TCPA.  The user still controls the machine
and can still monitor who reads/writes the chip (using a pci bus
logger for example).  There is a lot of emphasis on TPM != Palladium,
and TPM != DRM.  TPM can not control the machine, and for DRM to work
the way RIAA wants, TPM won't meet their needs.  TPM looks pretty useful
as it sits for real practical security tho, so I can see why IBM
wants those !='s to be loud and clear.

Patience, persistence, truth,
Dr. mike




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-01-24 Thread David Howe
at Friday, January 24, 2003 4:53 PM, Mike Rosing [EMAIL PROTECTED]
was seen to say:
 Thanks Eugen,  It looks like the IBM TPM chip is only a key
 store read/write device.  It has no code space for the kind of
 security discussed in the TCPA.  The user still controls the machine
 and can still monitor who reads/writes the chip (using a pci bus
 logger for example).  There is a lot of emphasis on TPM != Palladium,
 and TPM != DRM.  TPM can not control the machine, and for DRM to work
 the way RIAA wants, TPM won't meet their needs.  TPM looks pretty
 useful as it sits for real practical security tho, so I can see why
 IBM wants those !='s to be loud and clear.
Bearing in mind though that DRM/Palladium won't work at all if it can't
trust its hardware - so TPM != Palladium, but TPM (or an improved TPM) is
a prerequisite.




Re: [IP] Open Source TCPA driver and white papers (fwd)

2003-01-24 Thread Mike Rosing
On Fri, 24 Jan 2003, David Howe wrote:

 Bearing in mind though that DRM/Palladium won't work at all if it can't
 trust its hardware - so TPM != Palladium, but TPM (or an improved TPM) is
 a prerequisite.

Certainly!  But this TPM is really nothing more than a dongle
attached to the pci bus.  It will be straightforward to bypass
it for many nefarious operations.  Which makes me happy, but I suspect
it won't make the RIAA very happy :-)

Patience, persistence, truth,
Dr. mike




Re: Cryptographic privacy protection in TCPA

2002-09-04 Thread Anton Stiglic

 Nomen Nescio wrote:
  It looks like Camenisch & Lysyanskaya are patenting their credential
  system.  This is from the online patent applications database:
 
 http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=camenisch&OS=camenisch&RS=camenisch

Jan Camenisch works for IBM; it's no surprise that the scheme is being
patented.  The scheme is not very efficient compared to Brands', but I
would guess it is implementable if you don't mind doing a lot of
computation.  It is based on zero-knowledge proofs.  The basic idea of
using zero-knowledge proofs to create an unlinkable anonymous credential
system is actually pretty intuitive and simple, and people have thought
about it before Camenisch & Lysyanskaya did.  You could probably think
about it yourself and come up with a similar scheme (not necessarily
provably secure, however).  The novelty, in my view, is simply the
choice of the setting in which they work (the group of quadratic
residues modulo a composite), so that their scheme can apparently be
proven secure under the strong RSA assumption and the decisional DH
assumption.  Camenisch's work on group signatures and on "Proving in
zero-knowledge that a number n is the product of two safe primes" seems
to have led to the result.

--Anton




Re: Cryptographic privacy protection in TCPA

2002-09-02 Thread Nomen Nescio

It looks like Camenisch & Lysyanskaya are patenting their credential
system.  This is from the online patent applications database:

http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=camenisch&OS=camenisch&RS=camenisch

 Non-transferable anonymous credential system with optional anonymity
 revocation

 Abstract

 The present invention relates to a method and system for securely
 proving ownership of pseudonymous or anonymous electronic credentials. A
 credential system is described consisting of users and organizations. An
 organization knows a user only by a pseudonym.  The pseudonyms of the
 same user, established for use with different organizations, cannot be
 linked. An organization can issue a credential to a pseudonym, and the
 corresponding user can prove possession of this credential to another
 organization that knows him under another pseudonym. During the prove of
 possession of the credential nothing besides the fact that he owns such
 a credential is revealed. A refinement of the credential system provides
 credentials for unlimited use, so called multiple-show credentials,
 and credentials for one-time use, so called one-show credentials.

Some of the claims seem a little broad, like this first one:

 1. A method for establishing a pseudonym system by having a certificate
 authority accepting a user as a new participant in said pseudonym system,
 the method comprising the steps of: receiving a first public key provided
 by said user; verifying that said user is allowed to join the system;
 computing a credential by signing the first public key using a secret
 key owned by said certificate authority; publishing said first public
 key and said credential.

Wouldn't this general description cover most proposed credential systems
in the past, such as those by Chaum or Brands?

Does anyone know how to contact the PTO regarding proposed patents,
perhaps to point out prior art?




Re: Cryptographic privacy protection in TCPA

2002-09-02 Thread V. Alex Brennen

On Mon, 2 Sep 2002, Nomen Nescio wrote:

 It looks like Camenisch & Lysyanskaya are patenting their credential
 system.  This is from the online patent applications database:
 
 http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=camenisch&OS=camenisch&RS=camenisch

 Does anyone know how to contact the PTO regarding proposed patents,
 perhaps to point out prior art?

It's best not to contact the PTO or the patent holder with prior 
art.  Gregory Aharonian has written some interesting material on
this in his Patent News newsletter.

If you contact the patent holder or the PTO with the prior art,
it will likely be listed in the patent, or in future patents if the
application has already been granted.  In the case of an existing
patent, presenting prior art to the PTO can result in the 
prior art being given a "previously reviewed" status.  Prior art
with a previously reviewed status, or prior art listed on the
patent, is much less effective in a defense case against an 
infringement claim.  Therefore alerting the patent holder or the
PTO to prior art would actually make the patent stronger and less
likely to be invalidated.

Basically, the patent system is so corrupt, the best thing to
do is to avoid participating in it.  Just like the US democratic
system.


- VAB




Re: Cryptographic privacy protection in TCPA

2002-08-28 Thread Nomen Nescio

Carl Ellison suggested an alternate way that TCPA could work to allow
for revoking virtualized TPMs without the privacy problems associated
with the present systems, and the technical problems of the elaborate
cryptographic methods.

Consider first the simplest possible method, which is just to put a
single signature key in each TPM and allow the TPM to use that to sign
its messages on the net.  This is reliable and allows TPM keys to be
revoked, but it obviously offers no privacy.  Every usage of a TPM key
can be correlated as coming from a single system.

TCPA fixed this by adding a trusted third party, the Identity CA who
would be the only one to see the TPM key.  But Carl offers a different
solution.

Instead of burning only one key into the TPM, burn several.  Maybe even
a hundred.  And let these keys be shared with other TPMs.  Each TPM has
many keys, and each key has copies in many TPMs.

Now let the TPMs use their various keys to identify themselves in
transactions on the net.  Because each key belongs to many different
TPMs, and the set of TPMs varies for each key, this protects privacy.
Any given usage of a key can be narrowed down only to a large set of
TPMs that possess that key.

If a key is misused, i.e. scraped out of the TPM and used to create a
virtualized, rule-breaking software TPM, it can be revoked.  This means
that all the TPMs that share that one key lose the use of that key.
But it doesn't matter much, because they each have many more they can use.
Since it is expected that only a small percentage of TPMs will ever need
their keys revoked, most TPMs should always have plenty of keys to use.

One problem is that a virtualized TPM which loses one of its keys will
still have others that it can use.  Eventually those keys will also be
recognized as being mis-used and be revoked as well.  But it may take
quite a while before all the keys on its list are exhausted.

To fix this, Carl suggests that the TPM manufacturer keep a list of all
the public keys that are in each TPM.  Then when a particular TPM has
some substantial fraction of its keys revoked, that would be a sign that
the TPM itself had been virtualized and all the rest of the keys could be
immediately revoked.  The precise threshold for this would depend on the
details of the system, the number of keys per TPM, the number of TPMs that
share a key, the percentage of revoked keys, etc.  But it should not be
necessary to allow each TPM to go through its entire repertoire of keys,
one at a time, before a virtualized TPM can be removed from the system.
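
As a rough illustration, here is a hypothetical simulation of the shared-key scheme and the revocation threshold described above. Every parameter (pool size, keys per TPM, the flagging threshold) is invented for illustration and not taken from any TCPA document:

```python
import random

POOL_SIZE = 1000        # distinct keys manufactured (invented parameter)
KEYS_PER_TPM = 100      # keys burned into each TPM (invented parameter)
REVOKE_FRACTION = 0.3   # fraction of revoked keys that flags a TPM (invented)

random.seed(1)

# Manufacturer's record: which keys went into which TPM.
tpms = {tpm_id: random.sample(range(POOL_SIZE), KEYS_PER_TPM)
        for tpm_id in range(500)}

revoked_keys = set()

def usable_keys(tpm_id):
    """Keys this TPM can still present."""
    return [k for k in tpms[tpm_id] if k not in revoked_keys]

def tpm_flagged(tpm_id):
    """Manufacturer-side check: has a suspicious fraction of this
    TPM's keys been revoked?"""
    n_revoked = sum(1 for k in tpms[tpm_id] if k in revoked_keys)
    return n_revoked / KEYS_PER_TPM >= REVOKE_FRACTION

# A virtualized (cracked) TPM has its keys revoked one by one as misuse
# is detected; once the threshold trips, the manufacturer can revoke the
# rest without waiting for each key to be caught individually.
cracked = 0
for key in tpms[cracked]:
    revoked_keys.add(key)
    if tpm_flagged(cracked):
        break

print("keys revoked before flagging:", len(revoked_keys))
print("fewest usable keys on any honest TPM:",
      min(len(usable_keys(t)) for t in tpms if t != cracked))
```

Because each revoked key is shared among many TPMs, an honest TPM loses only a few keys out of its hundred, while the cracked TPM is identified after a bounded number of individual revocations.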

Carl indicated that he suggested this alternative early in the TCPA
planning process, but it was not accepted.  It does seem that while
the system has advantages, in some ways it shares the problems of the
alternatives.  It provides privacy, but not complete privacy, not as
much as the cryptographic schemes.  And it provides security to the TPM
issuers, but not complete security, not as much as the Privacy CA method.
In this way it can be seen as a compromise.  Often, compromise solutions
are perceived more in terms of their disadvantages than their benefits.




RE: Seth on TCPA at Defcon/Usenix

2002-08-21 Thread Bill Stewart

At 12:58 AM 08/11/2002 -0700, Lucky Green wrote:

BTW, does anybody here know if there is still an email time stamping
server in operation? The references that I found to such servers appear
to be dead.

The canonical timestamping system was Haber & Stornetta's work at
Bellcore, commercialized at Surety.com.  The site is current,
has some Digital Notary Service and Secure Email things on it,
and something much more amazing - it looks like they received
$7M in financing in June :-)

There's a nice collection of pointers to timestamping systems at
http://saturn.tcs.hut.fi/~helger/crypto/link/timestamping/
though I don't know how current the references are -
the page was last updated 14.8.2002.

The free PGP-based system http://www.itconsult.co.uk/stamper.htm
has a news item from 04-Jun-02, which comments that,
although they haven't posted any news items in five years,
they've been in continuous operation.
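
The linked-timestamping idea behind the Haber & Stornetta work can be sketched in a few lines. This toy log is purely illustrative (it bears no relation to Surety's actual service): each receipt commits to the hash of the previous one, so an entry cannot be backdated or altered without breaking every later link.

```python
import hashlib
import json
import time

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class LinkedTimestamper:
    """Toy hash-linked timestamp log in the Haber-Stornetta style."""
    def __init__(self):
        self.log = []
        self.prev = h(b"genesis")

    def stamp(self, document: bytes, when=None):
        # Each receipt carries the hash of the previous receipt,
        # chaining the whole history together.
        receipt = {
            "seq": len(self.log),
            "time": when if when is not None else time.time(),
            "doc_hash": h(document),
            "prev": self.prev,
        }
        self.prev = h(json.dumps(receipt, sort_keys=True).encode())
        self.log.append(receipt)
        return receipt

    def verify(self) -> bool:
        """Re-walk the chain and check every link."""
        prev = h(b"genesis")
        for r in self.log:
            if r["prev"] != prev:
                return False
            prev = h(json.dumps(r, sort_keys=True).encode())
        return prev == self.prev

ts = LinkedTimestamper()
ts.stamp(b"first email", when=1)
ts.stamp(b"second email", when=2)
assert ts.verify()

# Tampering with an earlier receipt breaks the chain:
ts.log[0]["doc_hash"] = h(b"forged email")
assert not ts.verify()
```

Publishing the latest chain hash widely (Surety famously printed theirs in a newspaper) makes the whole log tamper-evident without trusting the server.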




Re: Cryptographic privacy protection in TCPA

2002-08-18 Thread Adam Back

With Brands' digital credentials (or Chaum's credentials) another
approach is to make the endorsement key pair and certificate the
anonymous credential.  That way you can use the endorsement key and
certificate directly rather than having to obtain (blinded) identity
certificates from a privacy CA and trust the privacy CA not to issue
identity certificates without seeing a corresponding endorsement
credential.

However, the idea with the identity certificates is that you can use
them once only and keep fetching new ones to get unlinkable anonymity,
or you can re-use them a bit to get pseudonymity, where you might use a
different pseudonym for each service, since you are otherwise linkable
within a given service anyway.

With Brands credentials the smart card setting allows you to have more
compact and computationally cheap control of the credential from
within a smart card which you could apply to the TPM/SCP.  So you can
fit more (unnamed) pseudonym credentials on the TPM to start with.

You could perhaps more simply rely on Brands' lending-discouragement
feature (the ability to encode arbitrary values in the credential
private key) to prevent "break once, virtualize anywhere" attacks.

When you discard pseudonyms, or want to use many of them (one-use,
unlinkable), you need to refresh the certificates; for that you could
use the refresh protocol, which allows you to exchange a credential for
a new one without trusting the privacy CA for your privacy.

Unfortunately I think you again are forced to trust the privacy CA not
to create fresh virtualized credentials.  Perhaps there would be
someway to have the privacy CA be a different CA to the endorsement CA
and for the privacy CA to only be able to refresh existing credentials
issued by the endorsement CA, but not to create fresh ones.

Or perhaps some restriction could be placed on what the privacy CA
could do, of the form: if the privacy CA issued new certificates, it
would reveal its private key.

Also relevant is "An Efficient System for Non-transferable Anonymous
Credentials with Optional Anonymity Revocation", Jan Camenisch and
Anna Lysyanskaya, Eurocrypt 01:

http://eprint.iacr.org/2001/019/

These credentials allow the user to do unlinkable multi-show without
involving a CA.  They are somewhat less efficient than Chaum or Brands
credentials though.  But for this application this removes the need
to trust a CA, or even to have one: the endorsement key and
credential can be inserted by the manufacturer, can be used
indefinitely many times, and are not linkable.

 A secondary requirement is for some kind of revocation in the case
 of misuse.

As you point out unlinkable anonymity tends to complicate revocation.

I think Camenisch's optional anonymity revocation has similar
properties in allowing a designated entity to link credentials.

Another, less TTP-based approach to unlinkable but revocable
credentials is Stubblebine, Syverson and Goldschlag's "Unlinkable
Serial Transactions", ACM Transactions on Information and System
Security, 1999:

http://www.stubblebine.com/99tissec-ust.pdf

(It's quite simple: you just have to present and relinquish a previous
pseudonym credential to get a new one; if the credential is due
to be revoked, you will not get a fresh credential.)


I think I would define away the problem of local breaks.  I mean the
end-user does own their own hardware, and if they do break it you
can't detect it anyway.  If it's anything like playstation mod-chips,
some proportion of the population would in fact do this.  Maybe
1-5% or whatever.  I think it makes sense to just live with this, and
of course not make it illegal.  Credentials which are shared are
easier to revoke -- knowledge of the private keys typically will
render most schemes linkable and revocable.  This leaves only online
lending which is anyway harder to prevent.

Adam

On Fri, Aug 16, 2002 at 03:56:09PM -0700, AARG!Anonymous wrote:
 Here are some more thoughts on how cryptography could be used to
 enhance user privacy in a system like TCPA.  Even if the TCPA group
 is not receptive to these proposals, it would be useful to have an
 understanding of the security issues.  And the same issues arise in
 many other kinds of systems which use certificates with some degree
 of anonymity, so the discussion is relevant even beyond TCPA.




Re: Cryptographic privacy protection in TCPA

2002-08-17 Thread AARG! Anonymous

Dr. Mike wrote, patiently, persistently and truthfully:

 On Fri, 16 Aug 2002, AARG! Anonymous wrote:

  Here are some more thoughts on how cryptography could be used to
  enhance user privacy in a system like TCPA.  Even if the TCPA group
  is not receptive to these proposals, it would be useful to have an
  understanding of the security issues.  And the same issues arise in
  many other kinds of systems which use certificates with some degree
  of anonymity, so the discussion is relevant even beyond TCPA.

 OK, I'm going to discuss it from a philosophical perspective.
 i.e. I'm just having fun with this.

Fine, but let me put this into perspective.  First, although the
discussion is in terms of a centralized issuer, the same issues arise if
there are multiple issuers, even in a web-of-trust situation.  So don't
get fixated on the fact that my analysis assumed a single issuer -
that was just for simplicity in what was already a very long message.

The abstract problem to be solved is this: given that there is some
property which is being asserted via cryptographic certificates
(credentials), we want to be able to show possession of that property
in an anonymous way.  In TCPA the property is being a valid TPM.
Another example would be a credit rating agency who can give out a good
credit risk credential.  You want to be able to show it anonymously in
some cases.  Yet another case would be a state drivers license agency
which gives out an over age 21 credential, again where you want to be
able to show it anonymously.

This is actually one of the oldest problems which proponents of
cryptographic anonymity attempted to address, going back to David Chaum's
seminal work.  TCPA could represent the first wide-scale example of
cryptographic credentials being shown anonymously.  That in itself ought
to be of interest to cypherpunks.  Unfortunately TCPA is not going for
full cryptographic protection of anonymity, but relying on Trusted Third
Parties in the form of Privacy CAs.  My analysis suggests that although
there are a number of solutions in the cryptographic literature, none of
them are ideal in this case.  Unless we can come up with a really strong
solution that satisfies all the security properties, it is going to be
hard to make a case that the use of TTPs is a mistake.


 I don't like the idea that users *must* have a certificate.  Why
 can't each person develop their own personal levels of trust and
 associate it with their own public key?  Using multiple channels,
 people can prove their key is their word.  If any company wants to
 associate a certificate with a customer, that can have lots of meanings
 to lots of other people.  I don't see the usefulness of a permanent
 certificate.  Human interaction over electronic media has to deal
 with monkeys, because that's what humans are :-)

A certificate is a standardized and unforgeable statement that some
person or key has a particular property, that's all.  The kind of system
you are talking about, of personal knowledge and trust, can't really be
generalized to an international economy.


  Actually, in this system the Privacy CA is not really protecting
  anyone's privacy, because it doesn't see any identities.  There is no
  need for multiple Privacy CAs and it would make more sense to merge
  the Privacy CA and the original CA that issues the permanent certs.
  That way there would be only one agency with the power to forge keys,
  which would improve accountability and auditability.

 I really, REALLY, *REALLY*, don't like the idea of one entity having
 the ability to create or destroy any person's ability to use their
 computer at whim.  You are suggesting that one person (or small group)
 has the power to create (or not) and revoke (or not!) any and all TPMs!

 I don't know how to describe my astonishment at the lack of comprehension
 of history.

Whoever makes a statement about a property should have the power to
revoke it.  I am astounded that you think this is a radical notion.

If one or a few entities become widely trusted to make and revoke
statements that people care about, it is because they have earned that
trust.  If the NY Times says something is true, people tend to believe it.

If Intel says that such-and-such a key is in a valid TPM, people may
choose to believe this based on Intel's reputation.  If Intel later
determines that the key has been published on the net and so can no
longer be presumed to be a TPM key, it revokes its statement.

This does not mean that Intel would destroy any person's ability to use
their computer on a whim.  First, having the TPM cert revoked would not
destroy your ability to use your computer; at worst you could no longer
persuade other people of your trustworthiness.  And second, Intel would
not make this kind of decision on a whim, any more than the NY Times
would publish libelous articles on a whim; doing so would risk destroying
the company's reputation, one of its most valuable assets.

I can't

Re: Schneier on Palladium and the TCPA

2002-08-17 Thread Anonymous
 the applications in place.
There is no reason why Palladium would be any different.

 Pd is inexorably tied up with Digital Rights Management.  Your computer
 will have several partitions, each of which will be able to read and write
 its own data.  There's nothing in Pd that prevents someone else (MPAA,
 Disney, Microsoft, your boss) from setting up a partition on your computer
 and putting stuff there that you can't get at.  Microsoft has repeatedly
 said that they are not going to mandate DRM, or try to control DRM systems,
 but clearly Pd was designed with DRM in mind.

Everyone says this last point, and maybe it's true.  But at the same
time it's worth noting that Pd does more than is necessary for DRM - and
in fact it is not optimal for DRM.  The fact that Pd is open and useful
for a wide range of other applications is one piece of evidence.  We have
even discussed a Palladiumized Napster (PDster?) which could undercut the
interests of the content companies.

Microsoft didn't have to make Palladium an open system; they could have
kept control over the keys, and required that only signed apps can run
as trusted (as most people still appear to believe; see the discussion
today on slashdot).  Maybe you can argue that Microsoft felt forced to do
an open system just for public relations reasons, that they knew they'd
take too much heat if they produced the closed system they hypothetically
wanted.  Whatever their reasons, the fact is that Pd is a lot more open
than is optimal for DRM, and people should recognize that fact.

 It's hard to sort out the antitrust implications of Pd.  Lots of people
 have written about it.  Will Microsoft jigger Pd to prevent Linux from
 running?  They don't dare.

This piece of sanity is a breath of fresh air.  If only Ross Anderson
and Lucky Green and most of the cypherpunks had a similarly sound grip
on reality, the discussion of these technologies would have been greatly
improved.

 Will it take standard Internet protocols and
 replace them with Microsoft-proprietary protocols?  I don't think so.  Will
 you need a Pd-enabled device -- the system is meant for both
 general-purpose computers and specialized media devices -- in order to view
 copyrighted content?  More likely.  Will Microsoft enforce its Pd patents
 as strongly as it can?  Almost certainly.

Right, I think one of the big issues is whether Microsoft's patents cover
Palladium and TCPA, and whether it will even be possible to make a Linux
version of a trusted computing system.  As I have written before, in some
ways Linux is a much better platform for trusted computing than Windows
(due to its transparency, so much more important now that apps can cloak
themselves from users).  But if Microsoft patents block such an effort,
that could be a serious problem.  It is encouraging that HP and perhaps
IBM are going forward with a TCPA-enabled Linux; that suggests that the
Microsoft patents don't cover at least that specific architecture.

 1.  A trusted computer does not mean a computer that is trustworthy.  The
 DoD's definition of a trusted system is one that can break your security
 policy; i.e., a system that you are forced to trust because you have no
 choice.  Pd will have trusted features; the jury is still out as to whether
 or not they are trustworthy.

Ross Anderson makes a similar point, but it is quite misleading.
It implies that trusted computing is in some sense weaker than ordinary
computing because it requires you to trust more systems.  But it misses
the point, that trusted computing for the first time gives you grounds
to trust remote systems.  That's what's really new here, the ability to
have some foundation for trust in what a remote system is doing.  And so
I think the word trust is very appropriate here, and it carries its
usual connotations and meaning.  No one is forced to trust anything.
Trusted computing will make it more reasonable for people to choose to
trust remote systems.

 2.  When you think about a secure computer, the first question you should
 ask is: Secure for whom?  Microsoft has said that Pd allows the
 computer-owner to prevent others from putting their own secure areas on the
 computer.  But really, what is the likelihood of that really
 happening?  The NSA will be able to buy Pd-enabled computers and secure
 them from all outside influence.  I doubt that you or I could, and still
 enjoy the richness of the Internet.

To a large extent this is already true.  Who knows what is in the data
files and registry entries for all the closed-source Windows apps on
the market?  You already have apps putting crap on your computer and you
have no idea what is there.  Pd lets them wrap it in a secure envelope,
but that doesn't change the fact that data files are already essentially
opaque to the typical user.

 Microsoft really doesn't care about
 what you think; they care about what the RIAA and the MPAA
 think.  Microsoft can't afford to have the media companies not make their
 content available

Re: Cryptographic privacy protection in TCPA

2002-08-17 Thread Mike Rosing

On Fri, 16 Aug 2002, AARG! Anonymous wrote:

 Here are some more thoughts on how cryptography could be used to
 enhance user privacy in a system like TCPA.  Even if the TCPA group
 is not receptive to these proposals, it would be useful to have an
 understanding of the security issues.  And the same issues arise in
 many other kinds of systems which use certificates with some degree
 of anonymity, so the discussion is relevant even beyond TCPA.

OK, I'm going to discuss it from a philosophical perspective.
i.e. I'm just having fun with this.

 The basic requirement is that users have a certificate on a long-term key
 which proves they are part of the system, but they don't want to show that
 cert or that key for most of their interactions, due to privacy concerns.
 They want to have their identity protected, while still being able to
 prove that they do have the appropriate cert.  In the case of TCPA the
 key is locked into the TPM chip, the endorsement key; and the cert
 is called the endorsement certificate, expected to be issued by the
 chip manufacturer.  Let us call the originating cert issuer the CA in
 this document, and the long-term cert the permanent certificate.

I don't like the idea that users *must* have a certificate.  Why
can't each person develop their own personal levels of trust and
associate it with their own public key?  Using multiple channels,
people can prove their key is their word.  If any company wants to
associate a certificate with a customer, that can have lots of meanings
to lots of other people.  I don't see the usefulness of a permanent
certificate.  Human interaction over electronic media has to deal
with monkeys, because that's what humans are :-)

 A secondary requirement is for some kind of revocation in the case
 of misuse.  For TCPA this would mean cracking the TPM and extracting
 its key.  I can see two situations where this might lead to revocation.
 The first is a global crack, where the extracted TPM key is published
 on the net, so that everyone can falsely claim to be part of the TCPA
 system.  That's a pretty obvious case where the key must be revoked for
 the system to have any integrity at all.  The second case is a local
 crack, where a user has extracted his TPM key but keeps it secret, using
 it to cheat the TCPA protocols.  This would be much harder to detect,
 and perhaps equally significantly, much harder to prove.  Nevertheless,
 some way of responding to this situation is a desirable security feature.

Ouch, that doesn't sound too robust.

 The TCPA solution is to use one or more Privacy CAs.  You supply your
 permanent cert and a new short-term identity key; the Privacy CA
 validates the cert and then signs your key, giving you a new cert on the
 identity key.  For routine use on the net, you show your identity cert
 and use your identity key; your permanent key and cert are never shown
 except to the Privacy CA.

 This means that the Privacy CA has the power to revoke your anonymity;
 and worse, he (or more precisely, his key) has the power to create bogus
 identities.  On the plus side, the Privacy CA can check a revocation list
 and not issue a new identity cert if the permanent key has been revoked.
 And if someone has done a local crack and the evidence is strong enough,
 the Privacy CA can revoke his anonymity and allow his permanent key to
 be revoked.

The CA has a bit too much power if you ask me.  Those are some really
good reasons not to like the idea of a permanent certificate ruled
by one (nasty?) person.

[...]
 Actually, in this system the Privacy CA is not really protecting
 anyone's privacy, because it doesn't see any identities.  There is no
 need for multiple Privacy CAs and it would make more sense to merge
 the Privacy CA and the original CA that issues the permanent certs.
 That way there would be only one agency with the power to forge keys,
 which would improve accountability and auditability.

I really, REALLY, *REALLY*, don't like the idea of one entity having
the ability to create or destroy any person's ability to use their
computer at whim.  You are suggesting that one person (or small group)
has the power to create (or not) and revoke (or not!) any and all TPMs!

I don't know how to describe my astonishment at the lack of comprehension
of history.

[...]
 It's not entirely clear how this technology could best be exploited to
 solve the problems.  One possibility, for example, would be to encode
 information about the permanent key in the restrictive blinding.
 This would allow users to use their identity keys freely; but upon
 request they could prove things about their associated permanent keys.
 They could, for example, reveal the permanent key value associated with
 their identity key, and do so unforgeably.  Or they could prove that their
 permanent key is not on a given list of revoked keys.  Similar logical
 operations are possible including partial revelation of the permanent
 key information.

There's

Cryptographic privacy protection in TCPA

2002-08-17 Thread AARG! Anonymous

Here are some more thoughts on how cryptography could be used to
enhance user privacy in a system like TCPA.  Even if the TCPA group
is not receptive to these proposals, it would be useful to have an
understanding of the security issues.  And the same issues arise in
many other kinds of systems which use certificates with some degree
of anonymity, so the discussion is relevant even beyond TCPA.

The basic requirement is that users have a certificate on a long-term key
which proves they are part of the system, but they don't want to show that
cert or that key for most of their interactions, due to privacy concerns.
They want to have their identity protected, while still being able to
prove that they do have the appropriate cert.  In the case of TCPA the
key is locked into the TPM chip, the endorsement key; and the cert
is called the endorsement certificate, expected to be issued by the
chip manufacturer.  Let us call the originating cert issuer the CA in
this document, and the long-term cert the permanent certificate.

A secondary requirement is for some kind of revocation in the case
of misuse.  For TCPA this would mean cracking the TPM and extracting
its key.  I can see two situations where this might lead to revocation.
The first is a global crack, where the extracted TPM key is published
on the net, so that everyone can falsely claim to be part of the TCPA
system.  That's a pretty obvious case where the key must be revoked for
the system to have any integrity at all.  The second case is a local
crack, where a user has extracted his TPM key but keeps it secret, using
it to cheat the TCPA protocols.  This would be much harder to detect,
and perhaps equally significantly, much harder to prove.  Nevertheless,
some way of responding to this situation is a desirable security feature.

The TCPA solution is to use one or more Privacy CAs.  You supply your
permanent cert and a new short-term identity key; the Privacy CA
validates the cert and then signs your key, giving you a new cert on the
identity key.  For routine use on the net, you show your identity cert
and use your identity key; your permanent key and cert are never shown
except to the Privacy CA.

This means that the Privacy CA has the power to revoke your anonymity;
and worse, he (or more precisely, his key) has the power to create bogus
identities.  On the plus side, the Privacy CA can check a revocation list
and not issue a new identity cert if the permanent key has been revoked.
And if someone has done a local crack and the evidence is strong enough,
the Privacy CA can revoke his anonymity and allow his permanent key to
be revoked.

Let us now consider some cryptographic alternatives.  The first is to
use Chaum blinding for the Privacy CA interaction.  As before, the user
supplies his permanent cert to prove that he is a legitimate part of
the system, but instead of providing an identity key to be certified,
he supplies it in blinded form.  The Privacy CA signs this blinded key,
the user strips the blinding, and he is left with a cert from the Privacy
CA on his identity key.  He uses this as in the previous example, showing
his privacy cert and using his privacy key.

In this system, the Privacy CA no longer has the power to revoke your
anonymity, because he only saw a blinded version of your identity key.
However, the Privacy CA retains the power to create bogus identities,
so the security risk is still there.  If there has been a global crack,
and a permanent key has been revoked, the Privacy CA can check the
revocation list and prevent that user from acquiring new identities,
so revocation works for global cracks.  However, for local cracks,
where there is suspicious behavior, there is no way to track down the
permanent key associated with the cheater.  All his interactions are
done with an identity key which is unlinkable.  So there is no way to
respond to local cracks and revoke the keys.
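
The blinding step itself can be sketched with textbook RSA. The key below uses toy parameters and no padding, so this is an illustration of the protocol shape only, not a secure implementation:

```python
import math
import secrets

# Toy RSA key for the Privacy CA (insecure textbook parameters,
# for illustration only; requires Python 3.8+ for pow(x, -1, m)).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # CA's private signing exponent

def blind(m: int, n: int, e: int):
    """User blinds message m with a random factor r before sending it."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def unblind(s_blinded: int, r: int, n: int) -> int:
    """Strip the blinding factor from the CA's signature."""
    return (s_blinded * pow(r, -1, n)) % n

identity_key = 123456789   # stand-in for a hash of the identity key

blinded, r = blind(identity_key, n, e)

# The Privacy CA signs the blinded value after checking the permanent
# cert; it never sees identity_key itself, so it cannot link the
# resulting cert back to this user.
s_blinded = pow(blinded, d, n)

# The user unblinds and now holds a valid CA signature on identity_key.
s = unblind(s_blinded, r, n)
assert pow(s, e, n) == identity_key % n
```

The signature verifies under the CA's public key exactly as an ordinary one would, which is the point made above: the CA's power to certify (and hence to forge) is unchanged, but its window into user identities is gone.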

Actually, in this system the Privacy CA is not really protecting
anyone's privacy, because it doesn't see any identities.  There is no
need for multiple Privacy CAs and it would make more sense to merge
the Privacy CA and the original CA that issues the permanent certs.
That way there would be only one agency with the power to forge keys,
which would improve accountability and auditability.

One problem with revocation in both of these systems, especially the one
with Chaum blinding, is that existing identity certs (from before the
fraud was detected) may still be usable.  It is probably necessary to
have identity certs be valid for only a limited time so that users with
revoked keys are not able to continue to use their old identity certs.

Brands credentials provide a more flexible and powerful approach than
Chaum blinding which can potentially provide improvements.  The basic
setup is the same: users would go to a Privacy CA and show their
permanent cert, getting a new cert on an identity key which they would
use on the net.  The difference

RE: TCPA hack delay appeal

2002-08-16 Thread Lucky Green

AARG! Wrote:
 
 It seems that there is (a rather brilliant) way to bypass 
 TCPA (as spec-ed.) I learned about it from two separate 
 sources, looks like two independent slightly different hacks 
 based on the same protocol flaw.
 
 Undoubtedly, more people will figure this out.

Hopefully some of those people will not limit themselves to hypothetical
attacks against The Spec, but will actually test those supposed attacks
on shipping TPMs. Which are readily available in high-end IBM laptops.

--Lucky Green




Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-16 Thread lynn . wheeler

I arrived at that decision over four years ago ... TCPA possibly didn't
decide on it until two years ago. In the assurance session in the TCPA
track at spring 2001 intel developer's conference I claimed my chip was
much more KISS, more secure, and could reasonably meet the TCPA
requirements at the time w/o additional modifications. One of the TCPA guys
in the audience groused that I didn't have to contend with the committees
of hundreds helping me with my design.

There are actually significant similarities between my chip and the TPM
chips.

I'm doing key gen at the very first, initial power-on/test of the wafer off
the line (somewhere in the dim past it was drilled into me that every time
something has to be handled it increases the cost).

Also, because of extreme effort at KISS, the standard PP evaluation stuff
gets much simpler and easier because most (possibly 90 percent) of the
stuff is N/A or doesn't exist.

early ref:
http://www.garlic.com/~lynn/aadsm2.htm#staw

or refs at (under subject aads chip strawman):
http://www.garlic.com/~lynn/index.html#aads

brand  other misc. stuff:
http://www.asuretee.com/

random evaluation refs:
http://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal
specification for FIPS186-2/x9.62 ecdsa?
http://www.garlic.com/~lynn/2002j.html#86 formal fips186-2/x9.62 definition
for eal 5/6 evaluation



[EMAIL PROTECTED] on 8/15/2002 6:44 pm wrote:

I think a number of the apparent conflicts go away if you carefully
track endorsement key pair vs endorsement certificate (signature on
endorsement key by hw manufacturer).  For example where it is said
that the endorsement _certificate_ could be inserted after ownership
has been established (not the endorsement key), so that apparent
conflict goes away.  (I originally thought this particular one was a
conflict also, until I noticed that.)  I see anonymous found the same
thing.

But anyway this extract from the CC PP makes clear the intention and
an ST based on this PP is what a given TPM will be evaluated based on:

http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-PP-TPM1_9_4.pdf

p 20:
| The TSF shall restrict the ability to initialize or modify the TSF
| data: Endorsement Key Pair [...] to the TPM manufacturer or designee.

(if only they could have managed to say that in the spec).

Adam
--
http://www.cypherspace.org/adam/




RE: TCPA hack delay appeal

2002-08-16 Thread Mike Rosing

On Thu, 15 Aug 2002, Lucky Green wrote:

 Hopefully some of those people will not limit themselves to hypothetical
 attacks against The Spec, but will actually test those supposed attacks
 on shipping TPMs. Which are readily available in high-end IBM laptops.

But doesn't the owner of the box create the master key for it?  They
imply that in their advertising, but I've not seen anything else
about it.  It was advertised to be protection for corporate data, not
a DRM/control type thing.  It would be very interesting to know the
details on that.

I found this:
http://www.pc.ibm.com/ww/resources/security/securitychip.html
but the link to "IBM Embedded Security Subsystem" goes to "page
not found".

but this one:
http://www.pc.ibm.com/ww/resources/security/secdownload.html
says in part:
IBM Client Security Software is available via download from the Internet
to support IBM NetVista and ThinkPad models equipped with the Embedded
Security Subsystem and the new TCPA-compliant Embedded Security Subsystem
2.0. By downloading the software after the systems have been shipped, the
customer can be assured that no unauthorized parties have knowledge of the
keys and pass phrases designated by the customer.

So it looks like IBM is ahead of Microsoft on this one.  but if
TCPA isn't fully formalized, what does "TCPA-compliant" mean?

In any case, they imply here that the customer needs to contact
IBM to turn the thing on, so it does seem that IBM has some kind
of master key for the portable.  I wonder if they mean IBM is
authorized to know the customer's keys?

Patience, persistence, truth,
Dr. mike




Re: Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
  The important part for this, is that TCPA has no key until it has an
owner,
  and the owner can wipe the TCPA at any time. From what I can tell this
was
  designed for resale of components, but is perfectly suitable as a point
of
  attack.

 If this is true, I'm really happy about it, and I agree it would allow
 virtualisation. I'm pretty sure it won't be for Palladium, but I don't
 know about TCPA - certainly it fits the bill for what TCPA is supposed
 to do.

I certainly don't expect many people to believe me simply because I say it
is so. Instead I'll supply a link to the authority of TCPA, the 1.1b
specification, it is available at
http://www.trustedcomputing.org/docs/main%20v1_1b.pdf . There are other
documents, unfortunately the main spec gives substantial leeway, and I
haven't had time to read the others (I haven't fully digested the main spec
yet either). From that spec, all 332 pages of it, I encourage everyone that
wants to decide for themselves to read the spec. If you reach different
conclusions than I have, feel free to comment, I'm sure there are many
people on these lists that would be interested in justification for either
position.

Personally, I believe I've processed enough of the spec to state that TCPA
is a tool, and like any tool it has both positive and negative aspects.
Provided the requirement to be able to turn it off remains (and for my
preference they should add a requirement that the motherboard continue
functioning even with the TCPA module(s) physically removed from the
board), the current spec does seem to lean toward being as advertised:
primarily a tool for the user. Whether this will remain in the version 2.0
that is in the works, I cannot say, as I have no access to it, although if
someone is listening with an NDA nearby, I'd be more than happy to review
it.
Joe




Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Ben Laurie

Joseph Ashwood wrote:
 - Original Message -
 From: Ben Laurie [EMAIL PROTECTED]
 
Joseph Ashwood wrote:

There is nothing stopping a virtualized version being created.

 
What prevents this from being useful is the lack of an appropriate
certificate for the private key in the TPM.
 
 
 Actually that does nothing to stop it. Because of the construction of TCPA,
 the private keys are registered _after_ the owner receives the computer,
 this is the window of opportunity against that as well. The worst case for
 cost of this is to purchase an additional motherboard (IIRC Fry's has them
 as low as $50), giving the ability to present a purchase. The
 virtual-private key is then created, and registered using the credentials
 borrowed from the second motherboard. Since TCPA doesn't allow for direct
 remote queries against the hardware, the virtual system will actually have
 first shot at the incoming data. That's the worst case. The expected case;
 you pay a small registration fee claiming that you accidentally wiped your
 TCPA. The best case, you claim you accidentally wiped your TCPA, they
 charge you nothing to remove the record of your old TCPA, and replace it
 with your new (virtualized) TCPA. So at worst this will cost $50. Once
 you've got a virtual setup, that virtual setup (with all its associated
 purchased rights) can be replicated across an unlimited number of computers.
 
 The important part for this, is that TCPA has no key until it has an owner,
 and the owner can wipe the TCPA at any time. From what I can tell this was
 designed for resale of components, but is perfectly suitable as a point of
 attack.

If this is true, I'm really happy about it, and I agree it would allow 
virtualisation. I'm pretty sure it won't be for Palladium, but I don't 
know about TCPA - certainly it fits the bill for what TCPA is supposed 
to do.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
 Joseph Ashwood wrote:
  There is nothing stopping a virtualized version being created.

 What prevents this from being useful is the lack of an appropriate
 certificate for the private key in the TPM.

Actually that does nothing to stop it. Because of the construction of TCPA,
the private keys are registered _after_ the owner receives the computer;
this is the window of opportunity against that as well. The worst case for
cost of this is to purchase an additional motherboard (IIRC Fry's has them
as low as $50), giving the ability to present a purchase. The
virtual-private key is then created, and registered using the credentials
borrowed from the second motherboard. Since TCPA doesn't allow for direct
remote queries against the hardware, the virtual system will actually have
first shot at the incoming data. That's the worst case. The expected case;
you pay a small registration fee claiming that you accidentally wiped your
TCPA. The best case, you claim you accidentally wiped your TCPA, they
charge you nothing to remove the record of your old TCPA, and replace it
with your new (virtualized) TCPA. So at worst this will cost $50. Once
you've got a virtual setup, that virtual setup (with all its associated
purchased rights) can be replicated across an unlimited number of computers.

The important part for this, is that TCPA has no key until it has an owner,
and the owner can wipe the TCPA at any time. From what I can tell this was
designed for resale of components, but is perfectly suitable as a point of
attack.
Joe




TCPA hack delay appeal

2002-08-15 Thread AARG! Anonymous

It seems that there is a (rather brilliant) way to bypass TCPA (as spec'ed). I learned 
about it from two separate sources; it looks like two independent, slightly different 
hacks based on the same protocol flaw.

Undoubtedly, more people will figure this out.

It seems wise to suppress the urge and craving for fame and NOT to publish the 
findings at this time. Let them build the thing into a zillion chips first. If you must, 
post an encrypted, time-stamped solution identifying you as the author, but do not 
release the key before TCPA is in many, many PCs.




TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Adam Back

Phew... the document is certainly tortuous, and has a large number of
similarly and confusingly named credentials, certificates and keys,
however from what I can tell this is what is going on:

Summary: I think the endorsement key and its hardware manufacturer's
certificate are generated at manufacture and are not allowed to be
changed.  Changing ownership only means (typically) deleting old
identities and creating new ones.

The longer version...

- endorsement key generation and certification - There is one
endorsement key per TPM which is created and certified during
manufacture.  The creation and certification process is 1) create
endorsement key pair, 2) export public key endorsement key, 3)
hardware manufacturer signs endorsement public key to create an
endorsement certificate (to certify that that endorsement public key
belongs to this TPM), 4) the certificate is stored in the TPM (for
later use in communications with the privacy CA.)

- ownership - Then there is the concept of ownership.  The spec says
the TPM MUST ship with no Owner installed.  When he wishes to claim
ownership, the owner chooses an authentication token which is sent into
the TPM encrypted with the endorsement key.  (They give the example of
the authentication token being the hash of a password.)  Physical
presence tests apply to claiming ownership (eg think BIOS POST with no
networking enabled, or a physical pin on the motherboard like BIOS flash
enable).  The authentication token and ownership can be changed.  The
TPM can be reset back to a state with no current owner.  BUT _at no
point_ does the TPM endorsement private key leave the TPM.  The
TPM_CreateEndorsementKeyPair function is allowed to be called once
(during manufacture) and is thereafter disabled.
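The one-shot endorsement-key creation and the take-ownership flow described
above can be sketched as a toy model. This is illustrative Python only: the
class, the placeholder key strings, and the method names are invented
paraphrases of the spec's TPM_CreateEndorsementKeyPair / take-ownership /
clear commands, not a real TPM interface.

```python
class TCPAFail(Exception):
    """Stands in for the spec's TCPA_FAIL return code."""

class ToyTPM:
    def __init__(self):
        self._endorsement_keypair = None  # set once, at "manufacture"
        self._owner_auth = None           # owner authentication token

    def create_endorsement_key_pair(self):
        # Spec: callable once (during manufacture), thereafter disabled.
        if self._endorsement_keypair is not None:
            raise TCPAFail("endorsement key already exists")
        self._endorsement_keypair = ("PUBEK", "PRIVEK")  # placeholder keys
        return "PUBEK"  # only the public half ever leaves the TPM

    def take_ownership(self, auth_token_encrypted_with_pubek):
        # The owner's auth token arrives encrypted under the PUBEK;
        # decryption with the PRIVEK happens inside the TPM.
        self._owner_auth = auth_token_encrypted_with_pubek

    def clear_ownership(self):
        # Resets owner state but NOT the endorsement key pair.
        self._owner_auth = None

tpm = ToyTPM()
tpm.create_endorsement_key_pair()   # succeeds once
try:
    tpm.create_endorsement_key_pair()
except TCPAFail:
    print("second call fails, as the spec requires")
tpm.take_ownership("enc(hash(password))")
tpm.clear_ownership()   # ownership is gone, but the endorsement key survives
```

The sketch encodes the key point: ownership can be claimed, changed, and
cleared repeatedly, while the endorsement key pair is created exactly once
and persists through every ownership change.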

- identity keys - Then there is the concept of identity keys.  The
current owner can create and delete identities, which can be anonymous
or pseudonymous.  Presumably the owner would delete all identity keys
before giving the TPM to a new owner.  The identity public key is
certified by the privacy CA.

- privacy ca - The privacy CA accepts identity key certification
requests which contain a) the identity public key, b) a proof of possession
(PoP) of the identity private key (a signature on a challenge), and c) the
hardware manufacturer's endorsement certificate containing the TPM's
endorsement public key.  The privacy CA checks whether the endorsement
certificate is signed by a hardware manufacturer it trusts.  The
privacy CA sends in response an identity certificate encrypted with
the TPM's endorsement public key.  The TPM decrypts the encrypted
identity certificate with the endorsement private key.
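The privacy-CA exchange just described can be sketched with placeholder
data structures. No real cryptography here: dicts stand in for
certificates, a string prefix stands in for encryption under the PUBEK,
and the trusted-manufacturer set is invented for illustration.

```python
TRUSTED_MANUFACTURERS = {"AcmeTPM Inc."}  # hypothetical trust list

def make_endorsement_cert(manufacturer, pubek):
    # Manufacturer's signature over the PUBEK, modeled as a plain dict.
    return {"signer": manufacturer, "pubek": pubek}

def privacy_ca_certify(identity_pubkey, pop_ok, endorsement_cert):
    # Request carries: a) identity public key, b) proof of possession,
    # c) the hardware manufacturer's endorsement certificate.
    if not pop_ok:
        return None
    if endorsement_cert["signer"] not in TRUSTED_MANUFACTURERS:
        return None  # endorsement cert not from a trusted manufacturer
    identity_cert = {"identity": identity_pubkey, "ca": "PrivacyCA"}
    # The response is encrypted to the TPM's PUBEK, so only the TPM
    # holding the matching PRIVEK can recover the identity certificate.
    return ("enc_to:" + endorsement_cert["pubek"], identity_cert)

cert = make_endorsement_cert("AcmeTPM Inc.", "PUBEK-1234")
print(privacy_ca_certify("ID-PUB", True, cert) is not None)   # True
rogue = make_endorsement_cert("Garage Fab", "PUBEK-9999")
print(privacy_ca_certify("ID-PUB", True, rogue))              # None
```

The trust check on the endorsement certificate's signer is exactly the
step whose subversion gives the avenues (A)-(C) discussed below.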

- remote attestation - The owner uses the identity keys in the remote
attestation functions.  Note that the identity private keys are also
generated on the TPM, the private key also never leaves the TPM.  The
identity private key is certified by the privacy CA as having been
requested by a certified endorsement key.


The last two paragraphs imply something else interesting: the privacy
CA can collude with anyone to create a virtualized environment.  (This
is because the TPM endorsement key is never directly used in remote
attestation for privacy reasons.)  All that is required to virtualize
a TPM is an attestation from the privacy CA in creating an identity
certificate.

So there are in fact three avenues for FBI et al to go about obtaining
covert access to the closed space formed by TCPA applications: 

(A) get one of the hardware manufacturers to sign an endorsement key
generated outside a TPM (or get the endorsement CA's private key), or

(B) get a widely used and accepted privacy CA to overlook its policy
of demanding a hardware manufacturer CA endorsed endorsement public
key and sign an identity public key created outside of a TPM (or get
the privacy CA's private key).

(C) create their own privacy CA and persuade an internet server whose
users they wish to investigate to accept it.  Create themselves a
virtualized client using their own privacy CA, and look inside.


I think to combat problem (C), as a user of a service you'd want the
remote attestation of software state to auditably include its
accepted privacy CA database, to see if there are any strange privacy
CAs on there.

I think you could set up and use your own privacy CA, but you can be
sure the RIAA/MPAA will never trust your CA.  A bit like self-signing
SSL site keys.  If you and your friends add your CA to their trusted
root CA database it'll work.  In this case however people have to
trust your home-brew privacy CA not to issue identity certificates
without having seen a valid hardware-endorsement key if they care
about preventing virtualization for the privacy or security of some
network application.

Also, they seem to take explicit steps to prevent you getting multiple
privacy CA certificates on the same identity key.  (I'm not sure why.)
It seems like a bad thing, as it forces you to trust just one CA, and it
prevents a web of trust which

Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Anonymous

[Repost]

Joe Ashwood writes:

 Actually that does nothing to stop it. Because of the construction of TCPA,
 the private keys are registered _after_ the owner receives the computer,
 this is the window of opportunity against that as well.

Actually, this is not true for the endorsement key, PUBEK/PRIVEK, which
is the main TPM key, the one which gets certified by the TPM Entity.
That key is generated only once on a TPM, before ownership, and must
exist before anyone can take ownership.  For reference, see section 9.2,
The first call to TPM_CreateEndorsementKeyPair generates the endorsement
key pair. After a successful completion of TPM_CreateEndorsementKeyPair
all subsequent calls return TCPA_FAIL.  Also section 9.2.1 shows that
no ownership proof is necessary for this step, which is because there is
no owner at that time.  Then look at section 5.11.1, on taking ownership:
user must encrypt the values using the PUBEK.  So the PUBEK must exist
before anyone can take ownership.

 The worst case for
 cost of this is to purchase an additional motherboard (IIRC Fry's has them
 as low as $50), giving the ability to present a purchase. The
 virtual-private key is then created, and registered using the credentials
 borrowed from the second motherboard. Since TCPA doesn't allow for direct
 remote queries against the hardware, the virtual system will actually have
 first shot at the incoming data. That's the worst case.

I don't quite follow what you are proposing here, but by the time you
purchase a board with a TPM chip on it, it will have already generated
its PUBEK and had it certified.  So you should not be able to transfer
a credential of this type from one board to another one.

 The expected case;
 you pay a small registration fee claiming that you accidentally wiped your
 TCPA. The best case, you claim you accidentally wiped your TCPA, they
 charge you nothing to remove the record of your old TCPA, and replace it
 with your new (virtualized) TCPA. So at worst this will cost $50. Once
 you've got a virtual setup, that virtual setup (with all its associated
 purchased rights) can be replicated across an unlimited number of computers.
 
 The important part for this, is that TCPA has no key until it has an owner,
 and the owner can wipe the TCPA at any time. From what I can tell this was
 designed for resale of components, but is perfectly suitable as a point of
 attack.

Actually I don't see a function that will let the owner wipe the PUBEK.
He can wipe the rest of the TPM but that field appears to be set once,
retained forever.

For example, section 8.10: Clear is the process of returning the TPM to
factory defaults.  But a couple of paragraphs later: All TPM volatile
and non-volatile data is set to default value except the endorsement
key pair.

So I don't think your fraud will work.  Users will not wipe their
endorsement keys, accidentally or otherwise.  If a chip is badly enough
damaged that the PUBEK is lost, you will need a hardware replacement,
as I read the spec.

Keep in mind that I only started learning this stuff a few weeks ago,
so I am not an expert, but this is how it looks to me.





Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Mike Rosing

On Thu, 15 Aug 2002, Adam Back wrote:

 Summary: I think the endorsement key and it's hardware manufacturers
 certificate is generated at manufacture and is not allowed to be
 changed.  Changing ownership only means (typically) deleting old
 identities and creating new ones.

Are there 2 certificates?  One from the manufacturer and one from
the privacy CA?

 - endorsement key generation and certification - There is one
 endorsement key per TPM which is created and certified during
 manufacture.  The creation and certification process is 1) create
 endorsement key pair, 2) export public key endorsement key, 3)
 hardware manufacturer signs endorsement public key to create an
 endorsement certificate (to certify that that endorsement public key
 belongs to this TPM), 4) the certificate is stored in the TPM (for
 later use in communications with the privacy CA.)

So finding the manufacturer's signing key breaks the whole system,
right?  Once you have that key you can create as many fake TPMs
as you want.

 TPM can be reset back to a state with no current owner.  BUT _at no
 point_ does the TPM endorsement private key leave the TPM.  The
 TPM_CreateEndorsementKeyPair function is allowed to be called once
 (during manufacture) and is thereafter disabled.

But it's easier to manufacture it by burning fuse links so it
can't be read back - a la OTP.  So the manufacturer could have a
list of every private key (just because they aren't supposed to
doesn't prevent it).  It still meets the spec - the key never leaves
the chip.

 - identity keys - Then there is the concept of identity keys.  The
 current owner can create and delete identities, which can be anonymous
 or pseudonymous.  Presumably the owner would delete all identity keys
 before giving the TPM to a new owner.  The identity public key is
 certified by the privacy CA.

 - privacy ca - The privacy CA accepts identity key certification
 requests which contain a) identity public key b) a proof of possession
 (PoP) of identity private key (signature on challenge), c) the
 hardware manufacturers endorsement certificate containing the TPM's
 endorsement public key.  The privacy CA checks whether the endorsement
 certificate is signed by a hardware manufacturer it trusts.  The
 privacy CA sends in response an identity certificate encrypted with
 the TPM's endorsement public key.  The TPM decrypts the encrypted
 identity certifate with the endorsement private key.

How does the CA check the endorsement certificate?  If it's by
checking the signature, then finding the manufacturer's private
key is very worthwhile - the entire TCPA for hundreds of millions
of computers gets compromised.  If it's by matching against the
manufacturer's list, then anonymity is impossible.

Thanks for the analysis Adam.  It seems like there are a couple of
obvious points to attack this system at.  I would think it's easy
to break for a large enough government.

Patience, persistence, truth,
Dr. mike




Re: TCPA not virtualizable during ownership change

2002-08-15 Thread AARG! Anonymous

Basically I agree with Adam's analysis.  At this point I think he
understands the spec as well as I do.  He has a good point
about the Privacy CA key being another security weakness that could
break the whole system.  It would be good to consider how exactly that
problem could be eliminated using more sophisticated crypto.  Keep in
mind that there is a need to be able to revoke Endorsement Certificates
if it is somehow discovered that a TPM has been cracked or is bogus.
I'm not sure that would be possible with straight Chaum blinding or
Brands credentials.  I would perhaps look at Group Signature schemes;
there is one with efficient revocation being presented at Crypto 02.
These involve a TTP but he can't forge credentials, just link identity
keys to endorsement keys (in TCPA terms).  Any system which allows for
revocation must have such linkability, right?

As for Joe Ashwood's analysis, I think he is getting confused between the
endorsement key, endorsement certificate, and endorsement credentials.
The first is the key pair created on the TPM.  The terms PUBEK and PRIVEK
are used to refer to the public and private parts of the endorsement
key.  The endorsement certificate is an X.509 certificate issued on the
endorsement key by the manufacturer.  The manufacturer is also called
the TPM Entity or TPME.  The endorsement credential is the same as the
endorsement certificate, but considered as an abstract data structure
rather than as a specific embodiment.

The PRIVEK never leaves the chip.  The PUBEK does, but it is considered
sensitive because it is a de facto unique identifier for the system,
like the Intel processor serial number which caused such controversy
a few years ago.  The endorsement certificate holds the PUBEK value
(in the SubjectPublicKeyInfo field) and so is equally a de facto unique
identifier, hence it is also not too widely shown.




Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Adam Back

I think a number of the apparent conflicts go away if you carefully
track the endorsement key pair vs the endorsement certificate (the
signature on the endorsement key by the hw manufacturer).  For example,
it is said that the endorsement _certificate_ could be inserted after
ownership has been established (not the endorsement key), so that
apparent conflict goes away.  (I originally thought this particular one
was a conflict also, until I noticed that.)  I see anonymous found the
same thing.

But anyway, this extract from the CC PP (Common Criteria Protection
Profile) makes the intention clear, and an ST (Security Target) based on
this PP is what a given TPM will be evaluated against:

http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-PP-TPM1_9_4.pdf

p 20:
| The TSF shall restrict the ability to initialize or modify the TSF 
| data: Endorsement Key Pair [...] to the TPM manufacturer or designee.

(if only they could have managed to say that in the spec).

Adam
--
http://www.cypherspace.org/adam/




Re: TCPA not virtualizable during ownership change

2002-08-15 Thread James A. Donald

--
On 15 Aug 2002 at 15:26, AARG! Anonymous wrote:
 Basically I agree with Adam's analysis.  At this point I 
 think he understands the spec equally as well as I do.  He 
 has a good point about the Privacy CA key being another 
 security weakness that could break the whole system.  It 
 would be good to consider how exactly that problem could be 
 eliminated using more sophisticated crypto.

Lucky claims to have pointed this out two years ago, proposed 
more sophisticated crypto, and received a hostile reception.

Which leads me to suspect that the capability of the powerful 
to break the system is a designed in feature.  

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 JjoH8U8qZ1eOdT/yGjfV7Xz9andBZPeYWaOLC+NP
 2/OJG2MZSnAqcyuvUsNZTsQAcffGGST6LJ7e9vFbK




Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Jay Sulzberger

On Thu, 15 Aug 2002, Anonymous wrote:

 [Repost]

 Joe Ashwood writes:

  Actually that does nothing to stop it. Because of the construction of TCPA,
  the private keys are registered _after_ the owner receives the computer,
  this is the window of opportunity against that as well.

 Actually, this is not true for the endoresement key, PUBEK/PRIVEK, which
 is the main TPM key, the one which gets certified by the TPM Entity.
 That key is generated only once on a TPM, before ownership, and must
 exist before anyone can take ownership.  For reference, see section 9.2,
 The first call to TPM_CreateEndorsementKeyPair generates the endorsement
 key pair. After a successful completion of TPM_CreateEndorsementKeyPair
 all subsequent calls return TCPA_FAIL.  Also section 9.2.1 shows that
 no ownership proof is necessary for this step, which is because there is
 no owner at that time.  Then look at section 5.11.1, on taking ownership:
 user must encrypt the values using the PUBEK.  So the PUBEK must exist
 before anyone can take ownership.

  The worst case for
  cost of this is to purchase an additional motherboard (IIRC Fry's has them
  as low as $50), giving the ability to present a purchase. The
  virtual-private key is then created, and registered using the credentials
  borrowed from the second motherboard. Since TCPA doesn't allow for direct
  remote queries against the hardware, the virtual system will actually have
  first shot at the incoming data. That's the worst case.

 I don't quite follow what you are proposing here, but by the time you
 purchase a board with a TPM chip on it, it will have already generated
 its PUBEK and had it certified.  So you should not be able to transfer
 a credential of this type from one board to another one.

 ... /

But I think you claimed No root key.  Is this not a root key?

oo--JS.




Overcoming the potential downside of TCPA

2002-08-14 Thread Joseph Ashwood

Lately on both of these lists there has been quite some discussion about
TCPA and Palladium, the good, the bad, the ugly, and the anonymous. :)
However there is something that is very much worth noting, at least about
TCPA.

There is nothing stopping a virtualized version being created.

There is nothing that stops say VMWare from synthesizing a system view that
includes a virtual TCPA component. This makes it possible to (if desired)
remove all cryptographic protection.

Of course such software would need to be sold as a development tool, but
we all know what would happen. Tools like VMWare have been developed by
others, and as I recall didn't take all that long to do. As such they can be
anonymously distributed, and can almost certainly be stored entirely on a
boot CD, using the floppy drive to store the keys (although floppy drives
are no longer a cool thing to have in a system): boot from the CD, and it
runs a small kernel that virtualizes and allows debugging of the TPM/TSS,
which allows the viewing, copying and replacement of private keys on demand.

Of course this is likely to quickly become illegal, or may already be, but
that doesn't stop the possibility of creating such a system. For details on how
to create this virtualized TCPA please refer to the TCPA spec.
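The essence of the virtualization argument can be shown in a few lines. This is a deliberately minimal sketch (names and interface are invented, not from the TCPA spec): a software stand-in presents a TPM-like interface to the guest, but its operator can view, copy, or replace any private key.

```python
# Illustrative-only sketch of a virtualized "TPM": same interface shape
# a guest might expect, but no hardware protection of key material.

class VirtualTPM:
    """Software stand-in presenting a TPM-like interface to a guest OS."""
    def __init__(self):
        self.keys = {}

    def load_key(self, name, private_key):
        self.keys[name] = private_key

    def debug_dump(self):
        # A real TPM never reveals private keys; the operator of a
        # virtualized one can read them at will.
        return dict(self.keys)

    def replace_key(self, name, new_private_key):
        self.keys[name] = new_private_key

vtpm = VirtualTPM()
vtpm.load_key("storage_root", "sk-1234")
assert vtpm.debug_dump() == {"storage_root": "sk-1234"}
vtpm.replace_key("storage_root", "sk-attacker")
assert vtpm.debug_dump()["storage_root"] == "sk-attacker"
```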
Joe




Re: Overcoming the potential downside of TCPA

2002-08-14 Thread Carl Ellison

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

At 10:58 PM 8/13/2002 -0700, Joseph Ashwood wrote:
Lately on both of these lists there has been quite some discussion
about TCPA and Palladium, the good, the bad, the ugly, and the
anonymous. :) However there is something that is very much worth
noting, at least about TCPA.

There is nothing stopping a virtualized version being created.

The only thing to stop that is the certificate on the TCPA's built-in
key.  You would have to shave one TCPA chip and use its key in the
virtualized version.  If you distributed that shaved key publicly or
just to too many people, then its compromise would likely be detected
and its power to attest to S/W configuration would be revoked.

However, if you kept the key yourself and used it only at the same
frequency you normally would (for the normal set of actions), then
the compromise could not be detected and you should be able to run
virtualized very happily.

That's one of the main problems with TCPA, IMHO, as a security
mechanism: that its security depends on hardware tamper resistance --
but at the same time, the TPM needs to be a cheap part, so it can't
be very tamper resistant.

 - Carl

-BEGIN PGP SIGNATURE-
Version: PGP 6.5.8

iQA/AwUBPVpb2XPxfjyW5ytxEQIaAgCgh72smP3W6qclzgRbNiWt5prdpk4AmwWw
aKNdDfQbHWxRVJ3yQ02FxtJb
=eEI+
-END PGP SIGNATURE-


+--+
|Carl M. Ellison [EMAIL PROTECTED] http://world.std.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+




Re: Overcoming the potential downside of TCPA

2002-08-14 Thread Ben Laurie

Joseph Ashwood wrote:
 Lately on both of these lists there has been quite some discussion about
 TCPA and Palladium, the good, the bad, the ugly, and the anonymous. :)
 However there is something that is very much worth noting, at least about
 TCPA.
 
 There is nothing stopping a virtualized version being created.
 
 There is nothing that stops, say, VMware from synthesizing a system view
 that includes a virtual TCPA component. This makes it possible to (if
 desired) remove all cryptographic protection.
 
 Of course such software would need to be sold as a development tool, but
 we all know what would happen. Tools like VMware have been developed by
 others, and as I recall didn't take all that long to do. As such they can
 be anonymously distributed, and can almost certainly be stored entirely
 on a boot CD, using the floppy drive to store the keys (although floppy
 drives are no longer a cool thing to have in a system). Boot from the CD
 and it runs a small kernel that virtualizes and allows debugging of the
 TPM/TSS, which allows the viewing, copying and replacement of private
 keys on demand.
 
 Of course this is likely to quickly become illegal, or may already be, but
 that doesn't stop the possibility of creating such a system. For details on how
 to create this virtualized TCPA please refer to the TCPA spec.

What prevents this from being useful is the lack of an appropriate 
certificate for the private key in the TPM.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




TCPA/Palladium user interest vs third party interest (Re: responding to claims about TCPA)

2002-08-14 Thread Adam Back

The remote attestation is the feature which is in the interests of
third parties.

I think if this feature were removed, the worst of the issues the
complaints center on would go away, because the remaining features
would be under the control of the user, and there would be no way for
third parties to discriminate against users who did not use them, or
who configured them in given ways.

The remaining features of note being sealing, and integrity metric
based security boot-strapping.

However, remote attestation is clearly the feature that TCPA and
Microsoft place the most value on (it being the main feature allowing
DRM, and allowing remote influence and control to be exerted over users'
configuration and software choices).

So the remote attestation feature is useful for _servers_ that want to
convince clients of their trustworthiness (that they won't look at
content, tamper with the algorithm, or undermine anonymity or privacy
properties, etc.).  So you could imagine that feature being part of
server machines, but not part of client machines -- there already exist
some distinctions between client and server platforms -- for example,
high-end Intel chips with larger caches, intended for the server market
by their pricing.  You could imagine TCPA/Palladium support being
available at extra cost for this market.

But the remaining problem is that remote attestation is kind of
dual-use (of utility to both user desktop machines and servers).  This
is because with peer-to-peer applications, user desktop machines are
also servers.

So the issue has become entangled.

It would be useful for individual liberties for remote-attestation
features to be widely deployed on desktop class machines to build
peer-to-peer systems and anonymity and privacy enhancing systems.

However, the remote-attestation feature is also against the user's
interests, because its widespread deployment is the main DRM-enabling
feature and a general tool for remote discrimination against
user software and configuration choices.

I don't see any way to have the benefits without the negatives, unless
anyone has any bright ideas.  The remaining questions are:

- do the negatives outweigh the positives (losing the ability to
reverse-engineer and virtualize applications, and trading
software-hacking based BORA for hardware-hacking based ROCA);

- are there ways to make remote-attestation not useful for DRM,
eg. limited deployment, other;

- would the user-positive aspects of remote attestation still be
largely available with only limited deployment -- e.g. could interesting
peer-to-peer and privacy systems be built with a mixture of
remote-attestation-capable and non-capable nodes?

Adam
--
http://www.cypherspace.org/adam/

On Sat, Aug 10, 2002 at 04:02:36AM -0700, John Gilmore wrote:
 One of the things I told them years ago was that they should draw
 clean lines between things that are designed to protect YOU, the
 computer owner, from third parties; versus things that are designed to
 protect THIRD PARTIES from you, the computer owner.  This is so
 consumers can accept the first category and reject the second, which,
 if well-informed, they will do.  If it's all a mishmash, then
 consumers will have to reject all of it, and Intel can't even improve
 the security of their machines FOR THE OWNER, because of their history
 of security projects that work against the buyer's interest, such as
 the Pentium serial number and HDCP.
 [...]




Re: TCPA/Palladium user interest vs third party interest (Re: responding to claims about TCPA)

2002-08-14 Thread Ben Laurie

Adam Back wrote:
 The remote attestation is the feature which is in the interests of
 third parties.
 
 I think if this feature were removed, the worst of the issues the
 complaints center on would go away, because the remaining features
 would be under the control of the user, and there would be no way for
 third parties to discriminate against users who did not use them, or
 who configured them in given ways.
 
 The remaining features of note being sealing, and integrity metric
 based security boot-strapping.
 
 However, remote attestation is clearly the feature that TCPA and
 Microsoft place the most value on (it being the main feature allowing
 DRM, and allowing remote influence and control to be exerted over users'
 configuration and software choices).
 
 So the remote attestation feature is useful for _servers_ that want to
 convince clients of their trustworthiness (that they won't look at
 content, tamper with the algorithm, or undermine anonymity or privacy
 properties, etc.).  So you could imagine that feature being part of
 server machines, but not part of client machines -- there already exist
 some distinctions between client and server platforms -- for example,
 high-end Intel chips with larger caches, intended for the server market
 by their pricing.  You could imagine TCPA/Palladium support being
 available at extra cost for this market.
 
 But the remaining problem is that remote attestation is kind of
 dual-use (of utility to both user desktop machines and servers).  This
 is because with peer-to-peer applications, user desktop machines are
 also servers.
 
 So the issue has become entangled.
 
 It would be useful for individual liberties for remote-attestation
 features to be widely deployed on desktop class machines to build
 peer-to-peer systems and anonymity and privacy enhancing systems.
 
 However, the remote-attestation feature is also against the user's
 interests, because its widespread deployment is the main DRM-enabling
 feature and a general tool for remote discrimination against
 user software and configuration choices.
 
 I don't see any way to have the benefits without the negatives, unless
 anyone has any bright ideas.  The remaining questions are:
 
 - do the negatives outweigh the positives (losing the ability to
 reverse-engineer and virtualize applications, and trading
 software-hacking based BORA for hardware-hacking based ROCA);
 
 - are there ways to make remote-attestation not useful for DRM,
 eg. limited deployment, other;
 
 - would the user-positive aspects of remote attestation still be
 largely available with only limited deployment -- e.g. could interesting
 peer-to-peer and privacy systems be built with a mixture of
 remote-attestation-capable and non-capable nodes?

A wild thought that occurs to me is that some mileage could be had by 
using remotely attested servers to verify _signatures_ of untrusted 
peer-to-peer stuff. So, you get most of the benefits of peer-to-peer and 
the servers only have to do cheap, low-bandwidth stuff.

I admit I haven't worked out any details of this at all!
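One very rough way to picture that division of labor (everything here is hypothetical: the key, the function names, and the HMAC stand-in; a real design would use public-key signatures so the verification server holds no signing secret):

```python
# Sketch of Ben's idea: peers move the large content among themselves,
# while a small, remotely attested server only checks a 32-byte hash
# plus a signature. HMAC stands in for a real signature scheme.

import hashlib
import hmac

PUBLISHER_KEY = b"publisher-signing-key"   # hypothetical

def publisher_sign(content):
    # The publisher signs the SHA-256 hash of the content.
    h = hashlib.sha256(content).digest()
    sig = hmac.new(PUBLISHER_KEY, h, hashlib.sha256).digest()
    return h, sig

def attested_server_verify(content_hash, signature):
    # The attested server sees only the hash and signature, never the
    # (possibly large) content itself: cheap, low-bandwidth work.
    expected = hmac.new(PUBLISHER_KEY, content_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

h, sig = publisher_sign(b"big peer-to-peer payload")
assert attested_server_verify(h, sig)
assert not attested_server_verify(h, bytes(32))   # forged signature rejected
```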

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

I need to correct myself.
- Original Message -
From: Joseph Ashwood [EMAIL PROTECTED]

 Suspiciously absent though is the requirement for symmetric encryption
 (page 4 is easiest to see this). This presents a potential security issue,
 and certainly a barrier to its use for non-authentication/authorization
 purposes. This is by far the biggest potential weak point of the system.
 No server designed to handle the quantity of connections necessary to do
 this will have the ability to decrypt/sign/encrypt/verify enough data for
 the purely theoretical universal DRM application.

I need to correct this: DES and 3DES are requirements; AES is optional.
This functionality appears to be in the TSS. However, I can find very few
references to its usage, and all of those seem to be thoroughly wrapped in
numerous layers of SHOULD and MAY. Since this is solely the realm of the
TSS (which had its command removed July 12, 2001, making this certainly
incomplete), it is only accessible through a few commands (I won't bother
with VerifySignature). However, looking at TSS_Bind, it says explicitly on
page 157 that to bind data that is larger than the RSA public key modulus
it is the responsibility of the caller to perform the blocking, indicating
that the expected implementation is RSA-only. The alternative is wrapping
the key, but that is clearly targeted at using RSA to encrypt a key. As
for the Identity commands, these appear to use a symmetric key, but deal
strictly with TPM_IDENTITY_CREDENTIAL. Regardless, the TSS is a software
entity (although it may be assisted by hardware), and this in and of
itself presents some interesting side-effects on security.
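The caller-side "blocking" the spec describes amounts to the caller splitting oversized data into modulus-sized chunks before binding each one. A minimal sketch, with an invented chunk size and bind function (not the real TSS interface):

```python
# Caller-side blocking for an RSA-only bind operation: inputs larger
# than the key modulus must be split by the caller, per page 157.

MAX_BIND_BYTES = 214   # e.g. 2048-bit RSA minus OAEP padding overhead (assumed)

def tss_bind(block):
    # Stand-in for a single RSA bind of one block.
    assert len(block) <= MAX_BIND_BYTES
    return b"BOUND:" + block

def bind_large(data):
    # The caller performs the blocking, one bind call per chunk.
    return [tss_bind(data[i:i + MAX_BIND_BYTES])
            for i in range(0, len(data), MAX_BIND_BYTES)]

blocks = bind_large(b"x" * 1000)
assert len(blocks) == 5   # ceil(1000 / 214) chunks
```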
Joe




TCPA and Open Source

2002-08-13 Thread AARG! Anonymous

One of the many charges that have been tossed at TCPA is that it will
harm free software.  Here is what Ross Anderson writes in the TCPA FAQ
at http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html (question 18):

 TCPA will undermine the General Public License (GPL), under which
 many free and open source software products are distributed

 At least two companies have started work on a TCPA-enhanced version of
 GNU/linux. This will involve tidying up the code and removing a number
 of features. To get a certificate from the TCPA consortium, the sponsor
 will then have to submit the pruned code to an evaluation lab, together
 with a mass of documentation showing why various known attacks on the code
 don't work.

First we have to deal with this certificate business.  Most readers
probably assume that you need this cert to use the TCPA system, and
even that you would not be able to boot into this Linux OS without such
a cert.  This is part of the longstanding claim that TCPA will only boot
signed code.

I have refuted this claim many times, and asked for those who disagree to
point to where in the spec it says this, without anyone doing so.  I can
only hope that interested readers may be beginning to believe my claim
since if it were false, somebody would have pointed to chapter and verse
in the TCPA spec just to shut me up about it if for no better reason.

However, Ross is actually right that TCPA does support a concept for
a certificate that signs code.  It's called a Validation Certificate.
The system can hold a number of these VC's, which represent the presumed
correct results of the measurement (hashing) process on various software
and hardware components.  In the case of OS code, then, there could be
VC's representing specific OS's which could boot.

The point is that while this is a form of signed code, it's not something
which gives the TPM control over what OS can boot.  Instead, the VCs
are used to report to third party challengers (on remote systems) what
the system configuration of this system is supposed to be, along with
what it actually is.  It's up to the remote challenger to decide if he
trusts the issuer of the VC, and if so, he will want to see that the
actual measurement (i.e. the hash of the OS) matches the value in the VC.
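The challenger-side check described above is just a hash comparison gated on trust in the VC issuer. A toy version (names and structure are illustrative, not the spec's data formats):

```python
# Toy Validation Certificate check: the remote challenger trusts the
# VC issuer, takes the expected measurement from the VC, and compares
# it with the measurement the platform actually reports.

import hashlib

def measure(os_image):
    # TCPA 1.1-era measurements used SHA-1.
    return hashlib.sha1(os_image).hexdigest()

def challenger_accepts(vc_expected, reported, issuer_trusted):
    # The TPM never blocks booting; only the challenger decides.
    return issuer_trusted and vc_expected == reported

os_image = b"linux-kernel-bytes"
vc = {"expected_measurement": measure(os_image)}   # issued by some consortium
assert challenger_accepts(vc["expected_measurement"], measure(os_image), True)
assert not challenger_accepts(vc["expected_measurement"],
                              measure(b"tampered"), True)
```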

So what Ross says above could potentially be true, if and when TCPA
compliant operating systems begin to be developed.  Assuming that there
will be some consortium which will issue VC's for operating systems,
and assuming that third parties will typically trust that consortium and
only that one, then you will need to get a VC from that group in order
to effectively participate in the TCPA network.

This doesn't mean that your PC won't boot the OS without such a cert; it
just means that if most people choose to trust the cert issuer, then you
will need to get a cert from them to get other people to trust your OS.
It's much like the power Verisign has today with X.509; most people's
software trusts certs from Verisign, so in practice you pretty much need
to get a cert from them to participate in the X.509 PKI.

So does this mean that Ross is right, that free software is doomed under
TCPA?  No, for several reasons, not least being a big mistake he makes:

 (The evaluation is at level E3 - expensive enough to keep out
 the free software community, yet lax enough for most commercial software
 vendors to have a chance to get their lousy code through.) Although the
 modified program will be covered by the GPL, and the source code will
 be free to everyone, it will not make full use of the TCPA features
 unless you have a certificate for it that is specific to the Fritz chip
 on your own machine. That is what will cost you money (if not at first,
 then eventually).

The big mistake is the belief that the cert is specific to the Fritz
chip (Ross's cute name for the TPM).

Actually the VC data structure is not specific to any one PC.  It is
intentionally designed not to have any identifying information in it
that will represent a particular system.  This is because the VC cert
has to be shown to remote third parties in order to get them to trust the
local system, and TCPA tries very hard to protect user privacy (believe
it or not!).  If the VC had computer-identifying information in it, then
it would be a linkable identifier for all TCPA interactions on the net,
which would defeat all of the work TCPA does with Privacy CAs and whatnot
to try to protect user privacy.  If you understand this, you will see
that the whole TCPA concept requires VC's not to be machine specific.

People always complain when I point to the spec, as if the use of facts
were somehow unfair in this dispute.  But if you are willing, you can look
at section 9.5.4 of http://www.trustedcomputing.org/docs/main%20v1_1b.pdf,
which is the data structure for the validation certificate.  It is an
X.509 attribute certificate, which is a type of cert that would normally
be expected to point back at the machine

Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

- Original Message -
From: Mike Rosing [EMAIL PROTECTED]
 Are you now admitting TCPA is broken?

I freely admit that I haven't made it completely through the TCPA
specification. However it seems to be, at least in effect although not
exactly, a motherboard bound smartcard.

Because it is bound to the motherboard (instead of the user) it can be used
for various things, but at the heart it is a smartcard. Also because it
supports the storage and use of a number of private RSA keys (no other type
supported) it provides some interesting possibilities.

Because of this I believe that there is a core that is fundamentally not
broken. It is the extensions to this concept that pose potential breakage.
In fact, looking at page 151 of the TCPA 1.1b spec, it clearly states
(typos are mine) that the OS can be attacked by a second OS replacing both
the SEALED-block encryption key and the user database itself. There are
measures taken to make such an attack cryptographically hard, but it
requires the OS to actually do something.

Suspiciously absent though is the requirement for symmetric encryption (page
4 is easiest to see this). This presents a potential security issue, and
certainly a barrier to its use for non-authentication/authorization
purposes. This is by far the biggest potential weak point of the system. No
server designed to handle the quantity of connections necessary to do this
will have the ability to decrypt/sign/encrypt/verify enough data for the
purely theoretical universal DRM application.

The second substantial concern is that the hardware is limited in the
size of its private keys, to 2048 bits; additionally, it is bound to
SHA-1. Currently these are both sufficient for security, but in the last
year we have seen realistic claims that 1500-bit RSA may be subject to
viable attack (or alternately may not, depending on whom you believe).
While attacks on RSA tend to be spread a fair distance apart, this
nevertheless puts 2048-bit RSA fairly close to the limit of security; it
would be much preferable to support 4096-bit RSA from a security
standpoint. SHA-1 is also currently near its limit: SHA-1 offers 2^80
collision security, a value that, it can be argued, may be too small for
long-term security.
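The 2^80 figure comes from the birthday bound: for an n-bit hash, a collision is expected after roughly 2^(n/2) trials, so SHA-1's 160-bit output gives about 2^80 collision resistance (preimage resistance remains around 2^160). A one-liner makes the arithmetic explicit:

```python
# Birthday bound: collision work for an n-bit hash is roughly 2^(n/2).

def collision_work_bits(output_bits):
    return output_bits // 2

assert collision_work_bits(160) == 80    # SHA-1, the 2^80 figure above
assert collision_work_bits(256) == 128   # SHA-256, for comparison
```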

For the time being TCPA seems to be unbroken: 2048-bit RSA is sufficient,
and SHA-1 is used as a MAC at important points. For the future, though, I
believe these choices may prove to be a weak point in the system; for
those who would like to attack the system, these are the prime targets.
The secondary target would be forcing debugging to go unaddressed by the
OS, which, since there is no provision for on-chip execution (except in
extremely small quantities, just as in a smartcard), would reveal very
nearly everything (including the data desired).
Joe




Re: TCPA and Open Source

2002-08-13 Thread James A. Donald

--
On 13 Aug 2002 at 0:05, AARG! Anonymous wrote:
 The point is that while this is a form of signed code, it's not 
 something which gives the TPM control over what OS can boot. 
 Instead, the VCs are used to report to third party challengers 
 (on remote systems) what the system configuration of this system 
 is supposed to be, along with what it actually is.

It does, however, enable the state to control what OS one can boot 
if one wishes to access the internet.

It does not seem to me that the TPM is likely to give Hollywood 
what it wants, unless it is backed by such state enforcement.

Furthermore, since the TPM gets first whack at boot up, a simple
code download to the TPM could change the meaning of the
signature, so that the machine will not boot unless running a
state authorized operating system.

It could well happen that TPM machines become required to go on
the internet, and then later only certain operating systems are
permitted on the internet, and then later the required operating
system upgrades the TPM software so that only authorized operating
systems boot at all.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 H/t91jm8hq5pLR2AdFYi2lRoV9AKYBZ7WqqJmKFe
 2/IFQaW0fl6ec+TL3iMKMxD6Y0ulGDK7RwqTVJlBQ




Re: Seth on TCPA at Defcon/Usenix

2002-08-13 Thread Mike Rosing

On Tue, 13 Aug 2002, James A. Donald wrote:

 To me DRM seems possible to the extent that computers themselves
 are rendered tamper resistant -- that is to say rendered set top
 boxes not computers, to the extent that unauthorized personnel are
 prohibited from accessing general purpose computers.

But even then, if it's perceptible to a human in some form, it
can be copied.  Suppose it's displayed on a screen in english
and copied with a pencil in Japanese, then sent by unicode across
the planet.  I agree it'd be mighty hard to copy pictures from
a set top box at video frame rates by hand, but there are many
musicians who can hear a song once and play it again perfectly.

All it takes is one person who has valid access and they can copy
anything.  It may take a lot of expensive equipment and be hard to
do, but they don't have to crack anything, they can just copy the
human perceptible data onto a machine that doesn't have any DRM
crap.

This is what makes the whole analog hole idea idiotic.  Humans are
analog - they can copy the data!  To plug the analog hole Hollywood
will have to control every human mind directly.

 To me, TCPA only makes sense as a step towards some of the more
 monstrous outcomes that have been suggested by myself and others
 on this list.  It does not make sense as a final destination, but
 only as a first step on a path.

Yeah, it sure seems obvious to me too.  I think preventing that
first step is mighty important.

Patience, persistence, truth,
Dr. mike




Re: Challenge to David Wagner on TCPA

2002-08-13 Thread AARG! Anonymous

Brian LaMacchia writes:

 So the complexity isn't in how the keys get initialized on the SCP (hey, it
 could be some crazy little hobbit named Mel who runs around to every machine
 and puts them in with a magic wand).  The complexity is in the keying
 infrastructure and the set of signed statements (certificates, for lack of a
 better word) that convey information about how the keys were generated and
 stored.  Those statements need to be able to represent to other applications
 what protocols were followed and precautions taken to protect the private
 key.  Assuming that there's something like a cert chain here, the root of
 this chain could be an OEM, an IHV, a user, a federal agency, your company,
 etc. Whatever that root is, the application that's going to divulge secrets
 to the SCP needs to be convinced that the key can be trusted (in the
 security sense) not to divulge data encrypted to it to third parties.
 Palladium needs to look at the hardware certificates and reliably tell
 (under user control) what they are. Anyone can decide if they trust the
 system based on the information given; Palladium simply guarantees that it
 won't tell anyone your secrets without your explicit request.

This makes a lot of sense, especially for closed systems like business
LANs and WANs where there is a reasonable centralized authority who can
validate the security of the SCP keys.  I suggested some time back that
since most large businesses receive and configure their computers in
the IT department before making them available to employees, that would
be a time that they could issue private certs on the embedded SCP keys.
The employees' computers could then be configured to use these private
certs for their business computing.

However the larger vision of trusted computing leverages the global
internet and turns it into what is potentially a giant distributed
computer.  For this to work, for total strangers on the net to have
trust in the integrity of applications on each others' machines, will
require some kind of centralized trust infrastructure.  It may possibly
be multi-rooted but you will probably not be able to get away from
this requirement.

The main problem, it seems to me, is that validating the integrity of
the SCP keys cannot be done remotely.  You really need physical access
to the SCP to be able to know what key is inside it.  And even that
is not enough, if it is possible that the private key may also exist
outside, perhaps because the SCP was initialized by loading an externally
generated public/private key pair.  You not only need physical access,
you have to be there when the SCP is initialized.

In practice it seems that only the SCP manufacturer, or at best the OEM
who (re) initializes the SCP before installing it on the motherboard,
will be in a position to issue certificates.  No other central authorities
will have physical access to the chips on a near-universal scale at the
time of their creation and installation, which is necessary to allow
them to issue meaningful certs.  At least with the PGP web of trust
people could in principle validate their keys over the phone, and even
then most PGP users never got anyone to sign their keys.  An effective
web of trust seems much more difficult to achieve with Palladium, except
possibly in small groups that already trust each other anyway.

If we do end up with only a few trusted root keys, most internet-scale
trusted computing software is going to have those roots built in.
Those keys will be extremely valuable, potentially even more so than
Verisign's root keys, because trusted computing is actually a far more
powerful technology than the trivial things done today with PKI.  I hope
the Palladium designers give serious thought to the issue of how those
trusted root keys can be protected appropriately.  It's not going to be
enough to say "it's not our problem."  For trusted computing to reach
its potential, security has to be engineered into the system from the
beginning - and that security must start at the root!
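The "built-in roots" concern above can be made concrete with a schematic chain check: internet-scale software would validate an SCP key's certificate chain up to one of a handful of hard-coded roots, which is exactly what makes those roots so valuable. Everything here is hypothetical (root names, pair encoding); real X.509 path validation is far more involved.

```python
# Schematic certificate-chain check against hard-coded root keys.

BUILT_IN_ROOTS = {"root-key-A", "root-key-B"}   # hypothetical baked-in roots

def chain_trusted(chain):
    """chain: list of (subject_key, issuer_key) pairs, leaf first."""
    if not chain:
        return False
    # Each certificate's issuer must be the subject of the next one up.
    for (_, issuer), (next_subject, _) in zip(chain, chain[1:]):
        if issuer != next_subject:
            return False
    # The top certificate must be issued by a built-in root.
    return chain[-1][1] in BUILT_IN_ROOTS

scp_chain = [("scp-key-42", "oem-key"), ("oem-key", "root-key-A")]
assert chain_trusted(scp_chain)
assert not chain_trusted([("scp-key-7", "unknown-issuer")])
```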




Re: TCPA and Open Source

2002-08-13 Thread Michael Motyka

James A. Donald [EMAIL PROTECTED] wrote :
--
On 13 Aug 2002 at 0:05, AARG! Anonymous wrote:
 The point is that while this is a form of signed code, it's not 
 something which gives the TPM control over what OS can boot. 
 Instead, the VCs are used to report to third party challengers 
 (on remote systems) what the system configuration of this system 
 is supposed to be, along with what it actually is.

It does, however, enable the state to control what OS one can boot 
if one wishes to access the internet.

It does not seem to me that the TPM is likely to give Hollywood 
what it wants, unless it is backed by such state enforcement.

Furthermore, since the TPM gets first whack at boot up, a simple
code download to the TPM could change the meaning of the
signature, so that the machine will not boot unless running a
state authorized operating system.

It could well happen that TPM machines become required to go on
the internet, and then later only certain operating systems are
permitted on the internet, and then later the required operating
system upgrades the TPM software so that only authorized operating
systems boot at all.

--digsig
 James A. Donald

Golly gee, I wonder why there was a floater out there about the
administration wanting to update the protocols we all use?

If you can imagine a repressive technological approach to privacy and
communication then you can bet your ass that it has already been thought
of and is on someone's wishlist in DC.

It seems a moot point to even debate whether or not this is the ultimate
intent of the current crop of crap. Fucking duh!

Mike




Re: Challenge to David Wagner on TCPA

2002-08-13 Thread lynn . wheeler

actually it is possible to build chips that generate keys as part of
manufacturing power-on/test (while still in the wafer, and the private key
never, ever exists outside of the chip)  ... and be at effectively the same
trust level as any other part of the chip (i.e. hard instruction ROM).
using such a key pair then can uniquely authenticate a chip ... it
effectively becomes as much a part of the chip as the ROM or the chip
serial number, etc. The public/private key pair ... if appropriately
protected (with an evaluated, certified and audited process) then can be
considered somewhat more trusted than a straight serial number ... aka a
straight serial number can be skimmed and replayed ... where a digital
signature on unique data is harder to replay/spoof.  the chips come with a
unique public/private key where the private key is never known.
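The skim-and-replay distinction can be shown with a toy challenge-response. All names are invented, and HMAC stands in for the chip's private-key signature (an artifact of the sketch: in a real scheme the verifier holds only the chip's public key, not a shared secret):

```python
# Why a per-chip signing key beats a plain serial number: a serial
# number can be skimmed once and replayed forever; a signature over
# verifier-chosen fresh data cannot.

import hashlib
import hmac

CHIP_SECRET = b"burned-in-at-wafer-test"   # never leaves the (toy) chip

def chip_respond(nonce):
    # The chip "signs" the verifier-chosen fresh data.
    return hmac.new(CHIP_SECRET, nonce, hashlib.sha256).digest()

def verifier_check(nonce, response):
    expected = hmac.new(CHIP_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

n1 = b"fresh-nonce-0001"
r1 = chip_respond(n1)
assert verifier_check(n1, r1)                       # genuine chip passes
assert not verifier_check(b"fresh-nonce-0002", r1)  # replayed response fails
# A static serial number, by contrast, verifies identically every time,
# so a skimmed copy is indistinguishable from the real chip.
```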

sometimes this is a difficult concept ... the idea of a public/private key
pair as a form of a difficult-to-spoof chip serial ... when all uses of
public/private key, asymmetric cryptography might have always been
portrayed as equivalent to x.509 identity certificates (it is possible to
show in a large percentage of systems that public/private key digital
signatures are sufficient for authentication and any possible certificates
are both redundant and superfluous).

misc. ref (aads chip strawman):
http://www.garlic.com/~lynn/index.html#aads
http://www.asuretee.com/



[EMAIL PROTECTED] on 6/13/2002 11:10 am wrote:

This makes a lot of sense, especially for closed systems like business
LANs and WANs where there is a reasonable centralized authority who can
validate the security of the SCP keys.  I suggested some time back that
since most large businesses receive and configure their computers in the IT
department before making them available to employees, that would be a time
that they could issue private certs on the embedded SCP keys. The
employees' computers could then be configured to use these private certs
for their business computing.

However the larger vision of trusted computing leverages the global
internet and turns it into what is potentially a giant distributed
computer.  For this to work, for total strangers on the net to have trust
in the integrity of applications on each others' machines, will require
some kind of centralized trust infrastructure.  It may possibly be
multi-rooted but you will probably not be able to get away from this
requirement.

The main problem, it seems to me, is that validating the integrity of the
SCP keys cannot be done remotely.  You really need physical access to the
SCP to be able to know what key is inside it.  And even that is not enough,
if it is possible that the private key may also exist outside, perhaps
because the SCP was initialized by loading an externally generated
public/private key pair.  You not only need physical access, you have to be
there when the SCP is initialized.

In practice it seems that only the SCP manufacturer, or at best the OEM who
(re) initializes the SCP before installing it on the motherboard, will be
in a position to issue certificates.  No other central authorities will
have physical access to the chips on a near-universal scale at the time of
their creation and installation, which is necessary to allow them to issue
meaningful certs.  At least with the PGP web of trust people could in
principle validate their keys over the phone, and even then most PGP users
never got anyone to sign their keys.  An effective web of trust seems much
more difficult to achieve with Palladium, except possibly in small groups
that already trust each other anyway.

If we do end up with only a few trusted root keys, most internet-scale
trusted computing software is going to have those roots built in. Those
keys will be extremely valuable, potentially even more so than Verisign's
root keys, because trusted computing is actually a far more powerful
technology than the trivial things done today with PKI.  I hope the
Palladium designers give serious thought to the issue of how those trusted
root keys can be protected appropriately.  It's not going to be enough to
say "it's not our problem."  For trusted computing to reach its potential,
security has to be engineered into the system from the beginning - and that
security must start at the root!

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to
[EMAIL PROTECTED]




Re: dangers of TCPA/palladium

2002-08-12 Thread Ben Laurie

David Wagner wrote:
 Ben Laurie  wrote:
 
Mike Rosing wrote:

The purpose of TCPA as spec'ed is to remove my control and
make the platform trusted to one entity.  That entity has the master
key to the TPM.

Now, if the spec says I can install my own key into the TPM, then yes,
it is a very useful tool.

Although the outcome _may_ be like this, your understanding of the TPM 
is seriously flawed - it doesn't prevent you from running whatever you 
want, but what it does do is allow a remote machine to confirm what you 
have chosen to run.

It helps to argue from a correct starting point.
 
 
 I don't understand your objection.  It doesn't look to me like Rosing
 said anything incorrect.  Did I miss something?
 
 It doesn't look like he ever claimed that TCPA directly prevents one from
 running what you want to; rather, he claimed that its purpose (or effect)
 is to reduce his control, to the benefit of others.  His claims appear
 to be accurate, according to the best information I've seen.

The part I'm objecting to is that it makes the platform trusted to one 
entity. In fact, it can be trusted by any number of entities, and you 
(the owner of the machine) get to choose which ones.

Now, it may well be that if this is allowed to proceed unchecked that in 
practice there's only a small number of entities there's any point in 
choosing, but that is a different matter.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Challenge to David Wagner on TCPA

2002-08-12 Thread Brian A. LaMacchia

I just want to point out that, as far as Palladium is concerned, we really
don't care how the keys got onto the machine. Certain *applications* written
on top of Palladium will probably care, but all the hardware and the security
kernel really care about is making sure that secrets are only divulged to
the code that had them encrypted in the first place.  It's all a big trust
management problem (or a series of trust management problems) --
applications that are going to rely on SCP keys to protect secrets for them
are going to want some assurances about where the keys live and whether
there's a copy outside the SCP.  I can certainly envision potential
applications that would want guarantees that the key was generated on the
SCP and never left, and I can see other applications that want guarantees that
the key has a copy sitting on another SCP on the other side of the building.

So the complexity isn't in how the keys get initialized on the SCP (hey, it
could be some crazy little hobbit named Mel who runs around to every machine
and puts them in with a magic wand).  The complexity is in the keying
infrastructure and the set of signed statements (certificates, for lack of a
better word) that convey information about how the keys were generated and
stored.  Those statements need to be able to represent to other applications
what protocols were followed and precautions taken to protect the private
key.  Assuming that there's something like a cert chain here, the root of
this chain could be an OEM, an IHV, a user, a federal agency, your company,
etc. Whatever that root is, the application that's going to divulge secrets
to the SCP needs to be convinced that the key can be trusted (in the
security sense) not to divulge data encrypted to it to third parties.
Palladium needs to look at the hardware certificates and reliably tell
(under user control) what they are. Anyone can decide if they trust the
system based on the information given; Palladium simply guarantees that it
won't tell anyone your secrets without your explicit request.

--bal

P.S. I'm not sure that I actually *want* the ability to extract the private
key from an SCP after it's been loaded, because presumably if I could ask
for the private key then a third party doing a black-bag job on my PC could
also ask for it.  I think what I want is the ability to zeroize the SCP,
remove all state stored within it, and cause new keys to be generated
on-chip.  So long as I can zero the chip whenever I want (or zero part of
it, or whatever) I can eliminate the threat posed by the manufacturer who
initialized the SCP in the first place.

Lucky Green [EMAIL PROTECTED] wrote:
 Ray wrote:

 From: James A. Donald [EMAIL PROTECTED]
 Date: Tue, 30 Jul 2002 20:51:24 -0700

 On 29 Jul 2002 at 15:35, AARG! Anonymous wrote:
 both Palladium and TCPA deny that they are designed to restrict
 what applications you run.  The TPM FAQ at
 http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf reads
 

 They deny that intent, but physically they have that capability.

 To make their denial credible, they could give the owner
 access to the private key of the TPM/SCP.  But somehow I
 don't think that jibes with their agenda.

 Probably not surprisingly to anybody on this list, with the exception
 of potentially Anonymous, according to the TCPA's own TPM Common
 Criteria Protection Profile, the TPM prevents the owner of a TPM from
 exporting the TPM's internal key. The ability of the TPM to keep the
 owner of a PC from reading the private key stored in the TPM has been
 evaluated to E3 (augmented). For the evaluation certificate issued by
 NIST, see:

 http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-VR-TPM.pdf

 If I buy a lock I expect that by demonstrating ownership I
 can get a replacement key or have a locksmith legally open it.

 It appears the days when this was true are waning. At least in the PC
 platform domain.

 --Lucky






Re: responding to claims about TCPA

2002-08-12 Thread AARG! Anonymous

David Wagner wrote:
 To respond to your remark about bias: No, bringing up Document Revocation
 Lists has nothing to do with bias.  It is only right to seek to understand
 the risks in advance.  I don't understand why you seem to insinuate
 that bringing up the topic of Document Revocation Lists is an indication
 of bias.  I sincerely hope that I misunderstood you.

I believe you did, because if you look at what I actually wrote, I did not
say that bringing up the topic of DRLs is an indication of bias:

 The association of TCPA with SNRLs is a perfect example of the bias and
 sensationalism which has surrounded the critical appraisals of TCPA.
 I fully support John's call for a fair and accurate evaluation of this
 technology by security professionals.  But IMO people like Ross Anderson
 and Lucky Green have disqualified themselves by virtue of their wild and
 inaccurate public claims.  Anyone who says that TCPA has SNRLs is making
 a political statement, not a technical one.

My core claim is the last sentence.  It's one thing to say, as you
are, that TCPA could make applications implement SNRLs more securely.
I believe that is true, and if this statement is presented in the context
of "dangers of TCPA" or something similar, it would be appropriate.
But even then, for a fair analysis, it should make clear that SNRLs can
be done without TCPA, and it should go into some detail about just how
much more effective a SNRL system would be with TCPA.  (I will write more
about this in responding to Joseph Ashwood.)

And to be truly unbiased, it should also talk about good uses of TCPA.

If you look at Ross Anderson's TCPA FAQ at
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html, he writes (question 4):

: When you boot up your PC, Fritz takes charge. He checks that the boot
: ROM is as expected, executes it, measures the state of the machine;
: then checks the first part of the operating system, loads and executes
: it, checks the state of the machine; and so on. The trust boundary, of
: hardware and software considered to be known and verified, is steadily
: expanded. A table is maintained of the hardware (audio card, video card
: etc) and the software (O/S, drivers, etc); Fritz checks that the hardware
: components are on the TCPA approved list, that the software components
: have been signed, and that none of them has a serial number that has
: been revoked.

He is not saying that TCPA could make SNRLs more effective.  He says
that "Fritz checks ... that none of [the software components] has a
serial number that has been revoked."  He is flatly stating that the
TPM chip checks a serial number revocation list.  That is both biased
and factually untrue.
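
Setting the disputed SNRL claim aside, the boot-time measurement Anderson
describes can be sketched as a hash chain, in the spirit of a TPM PCR
"extend" operation. This is a simplified model, not the TCPA spec; the
component names are made up.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """PCR_new = H(PCR_old || H(component)) -- the 'extend' operation."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Each boot stage is measured into the register before control passes to
# it, so the final value commits to the entire chain.
pcr = bytes(32)  # measurement registers start zeroed at reset
for stage in [b"boot ROM", b"OS loader", b"kernel", b"drivers"]:
    pcr = extend(pcr, stage)

# A verifier who knows the expected measurements recomputes the same final
# value; altering any single stage changes the result.
tampered = bytes(32)
for stage in [b"boot ROM", b"evil loader", b"kernel", b"drivers"]:
    tampered = extend(tampered, stage)
assert tampered != pcr
```

Note that measurement by itself only records what ran; whether anything is
refused or revoked is a policy layered on top, which is exactly the point
in dispute.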

Ross's whole FAQ is incredibly biased against TCPA.  I don't see how
anyone can fail to see that.  If it were titled "FAQ about Dangers of
TCPA" at least people would be warned that they were getting a one-sided
presentation.  But it is positively shameful for a respected security
researcher like Ross Anderson to pretend that this document is giving
an unbiased and fair description.

I would be grateful if someone who disagrees with me, who thinks that
Ross's FAQ is fair and even-handed, would speak up.  It amazes me that
people can see things so differently.

And Lucky's slide presentation, http://www.cypherpunks.to, is if anything
even worse.  I already wrote about this in detail so I won't belabor
the point.  Again, I would be very curious to hear from someone who
thinks that his presentation was unbiased.




Re: dangers of TCPA/palladium

2002-08-12 Thread AARG! Anonymous

Mike Rosing wrote:

 The difference is fundamental: I can change every bit of flash in my BIOS.
 I can not change *anything* in the TPM.  *I* control my BIOS.  IF, and
 only IF, I can control the TPM will I trust it to extend my trust to
 others.  The purpose of TCPA as spec'ed is to remove my control and
 make the platform trusted to one entity.  That entity has the master
 key to the TPM.
 
 Now, if the spec says I can install my own key into the TPM, then yes,
 it is a very useful tool.  It would be fantastic in all the portables
 that have been stolen from the FBI for example.  Assuming they use a
 password at turn on, and the TPM is used to send data over the net,
 then they'd know where all their units are and know they weren't
 compromised (or how badly compromised anyway).
 
 But as spec'ed, it is very seriously flawed.

Ben Laurie replied:

 Although the outcome _may_ be like this, your understanding of the TPM 
 is seriously flawed - it doesn't prevent you from running whatever you 
 want, but what it does do is allow a remote machine to confirm what you 
 have chosen to run.

David Wagner commented:

 I don't understand your objection.  It doesn't look to me like Rosing
 said anything incorrect.  Did I miss something?

 It doesn't look like he ever claimed that TCPA directly prevents one from
 running what you want to; rather, he claimed that its purpose (or effect)
 is to reduce his control, to the benefit of others.  His claims appear
 to be accurate, according to the best information I've seen.

I don't believe that is an accurate paraphrase of what Mike Rosing said.
He said the purpose (not effect) was to remove (not reduce) his control,
and make the platform trusted to one entity (not for the benefit of
others).  Unless you want to defend the notion that the purpose of TCPA
is to *remove* user control of his machine, and make it trusted to only
*one other entity* (rather than a general capability for remote trust),
then I think you should accept that what he said was wrong.

And Mike said more than this.  He said that if he could install his own
key into the TPM that would make it a very useful tool.  This is wrong;
it would completely undermine the trust guarantees of TCPA, make it
impossible for remote observers to draw any useful conclusions about the
state of the system, and render the whole thing useless.  He also talked
about how this could be used to make systems phone home at boot time.
But TCPA has nothing to do with any such functionality as this.

In contrast, Ben Laurie's characterization of TCPA is 100% factual and
accurate.  Do you at least agree with that much, even if you disagree
with my criticism of Mike Rosing's comments?




Re: Seth on TCPA at Defcon/Usenix

2002-08-12 Thread Mike Rosing

On Mon, 12 Aug 2002, AARG! Anonymous wrote:

 It is clear that software hacking is far from "almost trivial" and you
 can't assume that every software-security feature can and will be broken.

Anyone doing security had better assume software can and will be
broken.  That's where you *start*.

 Furthermore, even when there is a break, it won't be available to
 everyone.  Ordinary people aren't clued in to the hacker community
 and don't download all the latest patches and hacks to disable
 security features in their software.  Likewise for business customers.
 In practice, if Microsoft wanted to implement a global, fascist DRL,
 while some people might be able to patch around it, probably 95%+ of
 ordinary users would be stuck with it.

Yes, this is the problem with security today.  That's why lots of people
are advocating that the OS should be built from the ground up with
security as the prime goal rather than ad hoc addons as it is now.
Nobody wants to pay for it tho :-)

 In short, while TCPA could increase the effectiveness of global DRLs,
 they wouldn't be *that* much more effective.  Most users will neither
 hack their software nor their hardware, so the hardware doesn't make
 any difference for them.  Hackers will be able to liberate documents
 completely from DRL controls, whether they use hardware or software
 to do it.  The only difference is that there will be fewer hackers,
 if hardware is used, because it is more difficult.  Depending on the
 rate at which important documents go on DRLs, that may not make any
 difference at all.

So what's the point of TCPA if a few hackers can steal the most
expensive data?  Are you now admitting TCPA is broken?  You've got
me very confused now!

I'm actually really confused about the whole DRM business anyway.  It
seems to me that any data available to human perceptions can be
duplicated.  Period.  The idea of DRM (as I understand it) is that you can
hand out data to people you don't trust, and they can't copy it.  To me,
DRM seems fundamentally impossible.

Patience, persistence, truth,
Dr. mike




Re: CDR: Re: Seth on TCPA at Defcon/Usenix

2002-08-12 Thread Jamie Lawrence

On Mon, 12 Aug 2002, AARG! Anonymous wrote:

 His analysis actually applies to a wide range of security features,
 such as the examples given earlier: secure games, improved P2P,
 distributed computing as Adam Back suggested, DRM of course, etc..
 TCPA is a potentially very powerful security enhancement, so it does
 make sense that it can strengthen all of these things, and DRLs as well.
 But I don't see that it is fair to therefore link TCPA specifically with
 DRLs, when there are any number of other security capabilities that are
 also strengthened by TCPA.

Sorry, but now you're just trolling. 

Acid is great for removing all manner of skin problems. It also happens
to cause death, but linking fatalities to it is unfair, considering
that's not what acid was _intended_ to do. 

Creating cheat-proof gaming at the cost of allowing document revoking
enabled software sounds like a bad idea.

-j




Re: responding to claims about TCPA

2002-08-11 Thread Derek Atkins

AARG!Anonymous [EMAIL PROTECTED] writes:

 I don't agree with this distinction.  If I use a smart card chip that
 has a private key on it that won't come off, is that protecting me from
 third parties, or vice versa?  If I run a TCPA-enhanced Gnutella that

Who owns the key?  If you bought the smartcard, you generated the key
yourself on the smartcard, and you control it, then it is probably
benefitting you.  If the smartcard came preprogrammed with a
certificate from the manufacturer, then I would say that it is
protecting the third party from you.

 I wrote earlier that if people were honest, trusted computing would not
 be necessary, because they would keep their promises.  Trusted computing
 allows people to prove to remote users that they will behave honestly.
 How does that fit into your dichotomy?  Society has evolved a myriad

The difference is proving that you are being honest to someone else
vs. an application proving to YOU that it is being honest.  Again, it
is a question of ownership.  There is the DRM side (you proving to
someone else that you are being honest) vs. Virus Protection (an
application proving to _you_ that it is being honest).

-derek

-- 
   Derek Atkins
   Computer and Internet Security Consultant
   [EMAIL PROTECTED] www.ihtfp.com




Re: Challenge to TCPA/Palladium detractors

2002-08-11 Thread Eugen Leitl

On Sat, 10 Aug 2002, R. Hirschfeld wrote:

 A trivial observation: this cannot be true across hardware platforms.

Untrue, just use a VM. Open Boot Forth would do nicely.

 TCPA claims to be platform and OS agnostic, but Palladium does not.

Have fun in that there tarpit.




RE: Challenge to David Wagner on TCPA

2002-08-11 Thread Jim Choate

On Sat, 10 Aug 2002, Russell Nelson wrote:

 I agree that it's irrelevant.  So why is he trying to argue from
 authority (always a fallacy anyway) without *even* having any way to
 prove that he is that authority?

What has 'authority' got to do with it? Arguments from authority are
-worthless-. Make up your own mind as to its validity, who cares about
their 'proof'.

-Who- is irrelevant. What damns his argument -is- his appeal to
-authority-. Anyone who bases their argument on 'He said...' has already
lost the discussion and invalidated any point they might make. It's one of
the primary fallacies of (for example) Tim May and his consistent appeal
to who he knows or what 'they' said.

We agree, what I don't understand is why you keep expecting that dead
horse to get up...keep asking those damning questions ;)


 --


  Conform and be dull..J. Frank Dobie

 [EMAIL PROTECTED] www.ssz.com
 [EMAIL PROTECTED]  www.open-forge.org






Re: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread David Wagner

AARG! Anonymous  wrote:
His description of how the Document Revocation List could work is
interesting as well.  Basically you would have to connect to a server
every time you wanted to read a document, in order to download a key
to unlock it.  Then if someone decided that the document needed
to un-exist, they would arrange for the server no longer to download
that key, and the document would effectively be deleted, everywhere.

Well, sure.  It's certainly how I had always envisioned one might build
a secure Document Revocation List using TCPA or Palladium.  I didn't
realize this sort of thing would need explaining; I assumed it would be
obvious to cypherpunk types.  But I'm glad this risk is now clear.

Note also that Document Revocation List functionality could arise
without any intent to create it.  Application developers might implement
this "connect to a server" feature to enforce some seemingly innocuous
function, like enforcing software licenses and preventing piracy.  Then,
after the application has been deployed with this innocuous feature,
someone else might eventually notice that it could also be used for
document revocation.  Thus, Document Revocation List functionality could
easily become widespread without anyone realizing it or intending it.
This is a risk we should think about now, rather than after it is
too late.
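
The pattern Wagner describes fits in a few lines. The class names and the
XOR "cipher" below are illustrative stand-ins, not any actual TCPA or
Palladium interface: every read requires a live key fetch, so revocation
reduces to a server-side deletion.

```python
import secrets

class KeyServer:
    """Toy revocation server: holds per-document keys; revoking = deleting."""
    def __init__(self):
        self._keys = {}
        self._revoked = set()

    def register(self, doc_id: str) -> bytes:
        key = secrets.token_bytes(16)
        self._keys[doc_id] = key
        return key

    def revoke(self, doc_id: str) -> None:
        self._revoked.add(doc_id)
        self._keys.pop(doc_id, None)

    def fetch_key(self, doc_id: str) -> bytes:
        if doc_id in self._revoked:
            raise PermissionError("document revoked")
        return self._keys[doc_id]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real encryption; not for serious use."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

server = KeyServer()
key = server.register("memo-1")
ciphertext = xor_cipher(b"quarterly numbers", key)

# Reading works only while the server still serves the key ...
assert xor_cipher(ciphertext, server.fetch_key("memo-1")) == b"quarterly numbers"

# ... so one server-side deletion makes the document unreadable everywhere.
server.revoke("memo-1")
try:
    server.fetch_key("memo-1")
    raise AssertionError("unreachable: key should be revoked")
except PermissionError:
    pass
```

The trusted-hardware part of the argument is what would stop a viewer from
simply caching the key after its first fetch; without it, the scheme is
trivially circumvented.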




RE: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread Lucky Green

David wrote:
 AARG! Anonymous  wrote:
 His description of how the Document Revocation List could work is 
 interesting as well.  Basically you would have to connect to 
 a server 
 every time you wanted to read a document, in order to 
 download a key to 
 unlock it.  Then if someone decided that the document needed to 
 un-exist, they would arrange for the server no longer to 
 download that 
 key, and the document would effectively be deleted, everywhere.
 
 Well, sure.  It's certainly how I had always envisioned one 
 might build a secure Document Revocation List using TCPA or 
 Palladium.  I didn't realize this sort of thing would need 
 explaining; I assumed it would be obvious to cypherpunk 
 types.  But I'm glad this risk is now clear.

To ensure priority for my Monday filings, I must point out at this time
that while AARG and David's methods of implementing a DRL are certainly
feasible, I believe a preferred method of implementing a DRL would be to
utilize features offered by an infrastructure, such as Palladium, that
supports time-limited documents: rather than requiring online access
every time display of the document is attempted, the document's
display permissions would be renewed periodically. If the display
software misses one or more updates, the document display software will
cease to display the document.
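
The lease variant above can be sketched as follows. Field names and timings
are made up; a real system would renew over the network and bind the lease
to trusted display software.

```python
import time

class Lease:
    """Toy time-limited display permission, renewed periodically."""
    def __init__(self, doc_id: str, ttl_seconds: float):
        self.doc_id = doc_id
        self.expires_at = time.monotonic() + ttl_seconds

    def renew(self, ttl_seconds: float) -> None:
        self.expires_at = time.monotonic() + ttl_seconds

def can_display(lease: Lease) -> bool:
    """No online check per view -- but missed renewals silently
    expire the document."""
    return time.monotonic() < lease.expires_at

lease = Lease("memo-1", ttl_seconds=0.05)
assert can_display(lease)      # fresh lease: document displays
time.sleep(0.06)               # one missed renewal window ...
assert not can_display(lease)  # ... and the viewer refuses
lease.renew(ttl_seconds=0.05)  # a successful renewal restores access
assert can_display(lease)
```

The advantage over the per-view online check is that the document keeps
working offline between renewals; revocation simply means declining the
next renewal request.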

BTW, does anybody here know if there is still an email time stamping
server in operation? The references that I found to such servers appear
to be dead.

Thanks,
--Lucky




Re: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread Joseph Ashwood

- Original Message -
From: AARG! Anonymous [EMAIL PROTECTED]
[brief description of Document Revocation List]

Seth's scheme doesn't rely on TCPA/Palladium.

Actually it does, in order to make it valuable. Without a hardware assist,
the attack works like this:
Hack your software (which is in many ways almost trivial) to reveal its
private key.
Watch the protocol.
Decrypt protocol
Grab decryption key
use decryption key
problem solved

With hardware assist, trusted software, and a trusted execution environment
it (doesn't) work like this:
Hack your software.
DOH! the software won't run
revert back to the stored software.
Hack the hardware (extremely difficult).
Virtualize the hardware at a second layer, using the grabbed private key
Hack the software
Watch the protocol.
Decrypt protocol
Grab decryption key
use decryption key
Once the file is released the server revokes all trust in your client,
effectively removing all files from your computer that you have not
decrypted yet
problem solved? only for valuable files

Of course if you could find some way to disguise which source was hacked,
things change.

Now about the claim that MS Word would not have this feature. It almost
certainly would. The reason being that business customers are of particular
interest to MS, since they supply a large portion of the money for Word (and
everything else). Businesses would want to be able to configure their
network in such a way that critical business information couldn't be leaked
to the outside world. Of course this removes the advertising path of
conveniently leaking carefully constructed documents to the world, but for
many companies that is a trivial loss.
Joe




Re: CDR: Re: Challenge to TCPA/Palladium detractors

2002-08-11 Thread Jim Choate


On Sun, 11 Aug 2002, Russell Nelson wrote:

 AARG!Anonymous writes:
   I'd like the Palladium/TCPA critics to offer an alternative proposal
   for achieving the following technical goal:
   
 Allow computers separated on the internet to cooperate and share data
 and computations such that no one can get access to the data outside
 the limitations and rules imposed by the applications.
 
 Can't be done.  I don't have time to go into ALL the reasons.
 Fortunately for me, any one reason is sufficient.  #1: it's all about
 the economics.

Complete noise. Not only can it be done, it is being done.

Plan 9 has a namespace that is -per process-, each process is distributed
(via a bidding process), and the process owner can be anonymized (though
this takes some extension beyond the base OS).

http://plan9.bell-labs.com


 --


  Conform and be dull..J. Frank Dobie

 [EMAIL PROTECTED] www.ssz.com
 [EMAIL PROTECTED]  www.open-forge.org







Re: Challenge to David Wagner on TCPA

2002-08-11 Thread Ben Laurie

Lucky Green wrote:
 Ray wrote:
 
From: James A. Donald [EMAIL PROTECTED]
Date: Tue, 30 Jul 2002 20:51:24 -0700

On 29 Jul 2002 at 15:35, AARG! Anonymous wrote:

both Palladium and TCPA deny that they are designed to restrict
what applications you run.  The TPM FAQ at 
http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf reads


They deny that intent, but physically they have that capability.

To make their denial credible, they could give the owner 
access to the private key of the TPM/SCP.  But somehow I 
don't think that jibes with their agenda.
 
 
 Probably not surprisingly to anybody on this list, with the exception of
 potentially Anonymous, according to the TCPA's own TPM Common Criteria
 Protection Profile, the TPM prevents the owner of a TPM from exporting
 the TPM's internal key. The ability of the TPM to keep the owner of a PC
 from reading the private key stored in the TPM has been evaluated to E3
 (augmented). For the evaluation certificate issued by NIST, see:
 
 http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-VR-TPM.pdf

Obviously revealing the key would defeat any useful properties of the 
TPM/SCP. However, unless the machine refuses to run stuff unless signed 
by some other key, it's a matter of choice whether you run an OS that has 
the aforementioned properties.

Of course, it's highly likely that if you want to watch products of Da 
Mouse on your PC, you will be obliged to choose a certain OS. In order 
to avoid more sinister uses, it makes sense to me to ensure that at 
least one free OS gets appropriate signoff (and no, that does not 
include a Linux port by HP). At least, it makes sense to me if I assume 
that the certain other OS will otherwise become dominant. Which seems 
likely.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Re: Challenge to TCPA/Palladium detractors

2002-08-11 Thread Joseph Ashwood

- Original Message -
From: Eugen Leitl [EMAIL PROTECTED]
 Can anyone shed some light on this?

Because of the sophistication of modern processors there are too many
variables to be optimized easily, and doing so can be extremely costly.
Because of this diversity, many compilers use semi-random exploration.
Because of this random exploration the compiler will typically compile the
same code into a different executable. With small programs it is likely to
find the same end-point, because of the simplicity. The larger the program
the more points for optimization, so for something as large as say PGP you
are unlikely to find the same point twice, however the performance is likely
to be eerily similar.

There are bound to be exceptions, and sometimes the randomness in the
exploration appears non-existent, but I've been told that some versions of
the DEC GEM compiler used semi-randomness a surprising amount because it
was a very fast
way to narrow down to an approximate best (hence the extremely fast
compilation and execution). It is likely that MS VC uses such techniques.
Oddly, extremely high-level languages don't have as many issues: each command
spans so many instructions that a pretuned set of command instructions will
often provide very close to optimal performance.

I've been told that gcc does not apparently use randomness to any
significant degree, but I admit I have not examined the source code to
confirm or deny this.
Joe





Re: dangers of TCPA/palladium

2002-08-11 Thread Mike Rosing

On 11 Aug 2002, David Wagner wrote:

 Ben Laurie  wrote:
 Mike Rosing wrote:
  The purpose of TCPA as spec'ed is to remove my control and
  make the platform trusted to one entity.  That entity has the master
  key to the TPM.
 
  Now, if the spec says I can install my own key into the TPM, then yes,
  it is a very useful tool.
 
 Although the outcome _may_ be like this, your understanding of the TPM
 is seriously flawed - it doesn't prevent you from running whatever you
 want, but what it does do is allow a remote machine to confirm what you
 have chosen to run.
 
 It helps to argue from a correct starting point.

 I don't understand your objection.  It doesn't look to me like Rosing
 said anything incorrect.  Did I miss something?

 It doesn't look like he ever claimed that TCPA directly prevents one from
 running what you want to; rather, he claimed that its purpose (or effect)
 is to reduce his control, to the benefit of others.  His claims appear
 to be accurate, according to the best information I've seen.

In a way everybody is right.  It's true that TPM doesn't interfere with
operating code - it interferes with the user controlling the way the code
operates.  For a remote machine to *know* that a TPM is doing what it
says, the user of the remote machine must be denied access (physically)
to the operating code.  I don't see any way around that physical
reality.  We can go on forever about the social implications (and I hope
we will :-)  but I don't see a flaw in my basic understanding.

Now, if the remote machine and I have predefined trust, then I can use
regular PKI and I don't need TCPA or a TPM.  It seems to me the
fundamental question is still who is in charge of what.

Patience, persistence, truth,
Dr. mike






Re: responding to claims about TCPA

2002-08-11 Thread David Wagner

AARG! Anonymous  wrote:
In fact, you are perfectly correct that Microsoft architectures would
make it easy at any time to implement DRL's or SNRL's.  They could do
that tomorrow!  They don't need TCPA.  So why blame TCPA for this feature?

The relevance should be obvious.  Without TCPA/Palladium, application
developers can try to build a Document Revocation List, but it will
be easily circumvented by anyone with a clue.  With TCPA/Palladium,
application developers could build a Document Revocation List that could
not be easily circumvented.

Whether or not you think any application developer would ever create such
a feature, I hope you can see how TCPA/Palladium increases the risks here.
It enables Document Revocation Lists that can't be bypassed.  That's a
new development not feasible in today's world.

To respond to your remark about bias: No, bringing up Document Revocation
Lists has nothing to do with bias.  It is only right to seek to understand
the risks in advance.  I don't understand why you seem to insinuate
that bringing up the topic of Document Revocation Lists is an indication
of bias.  I sincerely hope that I misunderstood you.




Seth on TCPA at Defcon/Usenix

2002-08-11 Thread AARG! Anonymous

Seth Schoen of the EFF has a good blog entry about Palladium and TCPA
at http://vitanuova.loyalty.org/2002-08-09.html.  He attended Lucky's
presentation at DEF CON and also sat on the TCPA/Palladium panel at
the USENIX Security Symposium.

Seth has a very balanced perspective on these issues compared to most
people in the community.  It makes me proud to be an EFF supporter
(in fact I happen to be wearing my EFF T-shirt right now).

His description of how the Document Revocation List could work is
interesting as well.  Basically you would have to connect to a server
every time you wanted to read a document, in order to download a key
to unlock it.  Then if someone decided that the document needed
to un-exist, they would arrange for the server no longer to download
that key, and the document would effectively be deleted, everywhere.
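Seth's key-server scheme can be sketched in a few lines (all names and the
key store are hypothetical, for illustration only): every open requires
fetching a per-document key, so dropping a key from the server effectively
deletes the document everywhere at once.

```python
import hashlib

# Hypothetical server-side key store: doc_id -> decryption key.
KEY_SERVER = {
    "doc-1": hashlib.sha256(b"key-for-doc-1").digest(),
    "doc-2": hashlib.sha256(b"key-for-doc-2").digest(),
}

def revoke(doc_id):
    """The document 'un-exists': the server simply stops serving its key."""
    KEY_SERVER.pop(doc_id, None)

def open_document(doc_id):
    """The reader must be online and fetch the key on every open."""
    key = KEY_SERVER.get(doc_id)
    if key is None:
        raise PermissionError(doc_id + " has been revoked")
    return key  # would be used to decrypt the locally stored ciphertext

key = open_document("doc-1")   # works while the server serves the key
revoke("doc-1")
# open_document("doc-1") now raises PermissionError -- everywhere, at once
```

Note the two costs Seth identifies fall straight out of the sketch: no
network, no key, no document; and revocation needs no access to any copy.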

I think this clearly would not be a feature that most people would accept
as an enforced property of their word processor.  You'd be unable to
read things unless you were online, for one thing.  And any document you
were relying on might be yanked away from you with no warning.  Such a
system would be so crippled that if Microsoft really did this for Word,
sales of vi would go through the roof.

It reminds me of an even better way for a word processor company to make
money: just scramble all your documents, then demand ONE MILLION DOLLARS
for the keys to decrypt them.  The money must be sent to a numbered
Swiss account, and the software checks with a server to find out when
the money has arrived.  Some of the proposals for what companies will
do with Palladium seem about as plausible as this one.

Seth draws an analogy with Acrobat, where the paying customers are
actually the publishers, the reader being given away for free.  So Adobe
does have incentives to put in a lot of DRM features that let authors
control publication and distribution.

But he doesn't follow his reasoning to its logical conclusion when dealing
with Microsoft Word.  That program is sold to end users - people who
create their own documents for the use of themselves and their associates.
The paying customers of Microsoft Word are exactly the ones who would
be screwed over royally by Seth's scheme.  So if we follow the money
as Seth in effect recommends, it becomes even more obvious that Microsoft
would never force Word users to be burdened with a DRL feature.

And furthermore, Seth's scheme doesn't rely on TCPA/Palladium.  At the
risk of aiding the fearmongers, I will explain that TCPA technology
actually allows for a much easier implementation, just as it does in so
many other areas.  There is no need for the server to download a key;
it only has to download an updated DRL, and the Word client software
could be trusted to delete anything that was revoked.  But the point
is, Seth's scheme would work just as well today, without TCPA existing.
As I quoted Ross Anderson saying earlier with regard to serial number
revocation lists, these features don't need TCPA technology.
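The list-based variant described above can be sketched the same way (again
with hypothetical names): the server distributes only an updated DRL, and
the client software, trusted not to cheat, deletes revoked documents
locally.

```python
# Hypothetical local document store on the user's machine.
local_store = {"doc-1": b"contents 1", "doc-2": b"contents 2"}

def apply_drl(store, revocation_list):
    """A trusted client deletes anything on the Document Revocation List.
    Only the list travels over the network -- no per-document key fetch,
    and no server round-trip on every open."""
    for doc_id in revocation_list:
        store.pop(doc_id, None)
    return store

apply_drl(local_store, ["doc-1"])
# local_store now holds only doc-2
```

The sketch also shows why this scheme needs no special hardware: any client
the user cannot modify would do, which is exactly the point being argued.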

So while I have some quibbles with Seth's analysis, on the whole it is
the most balanced that I have seen from someone who has no connection
with the designers (other than my own writing, of course).  A personal
gripe is that he referred to Lucky's critics, plural, when I feel
all alone out here.  I guess I'll have to start using the royal we.
But he redeemed himself by taking mild exception to Lucky's slide show,
which is a lot farther than anyone else has been willing to go in public.




Re: dangers of TCPA/palladium

2002-08-11 Thread Ben Laurie

Mike Rosing wrote:
Why exactly is this so much more of a threat than, say, flash BIOS
upgrades?  The BIOS has a lot more power over your machine than the
TPM does.
 
 
 The difference is fundamental: I can change every bit of flash in my BIOS.
 I can not change *anything* in the TPM.  *I* control my BIOS.  IF, and
 only IF, I can control the TPM will I trust it to extend my trust to
 others.  The purpose of TCPA as spec'ed is to remove my control and
 make the platform trusted to one entity.  That entity has the master
 key to the TPM.
 
 Now, if the spec says I can install my own key into the TPM, then yes,
 it is a very useful tool.  It would be fantastic in all the portables
 that have been stolen from the FBI for example.  Assuming they use a
 password at turn on, and the TPM is used to send data over the net,
 then they'd know where all their units are and know they weren't
 compromised (or how badly compromised anyway).
 
 But as spec'ed, it is very seriously flawed.

Although the outcome _may_ be like this, your understanding of the TPM 
is seriously flawed - it doesn't prevent you from running whatever you 
want, but what it does do is allow a remote machine to confirm what you 
have chosen to run.

It helps to argue from a correct starting point.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: dangers of TCPA/palladium

2002-08-11 Thread Ben Laurie

AARG!Anonymous wrote:
 Adam Back writes:
 
 
- Palladium is a proposed OS feature-set based on the TCPA hardware
(Microsoft)
 
 
 Actually there seem to be some hardware differences between TCPA and
 Palladium.  TCPA relies on a TPM, while Palladium uses some kind of
 new CPU mode.  Palladium also includes some secure memory, a concept
 which does not exist in TCPA.

This is correct. Palladium has ring -1, and memory that is only 
accessible to ring -1 (or I/O initiated by ring -1).

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread John Gilmore

 It reminds me of an even better way for a word processor company to make
 money: just scramble all your documents, then demand ONE MILLION DOLLARS
 for the keys to decrypt them.  The money must be sent to a numbered
 Swiss account, and the software checks with a server to find out when
 the money has arrived.  Some of the proposals for what companies will
 do with Palladium seem about as plausible as this one.

Isn't this how Windows XP and Office XP work?  They let you set up the
system and fill it with your data for a while -- then lock up and
won't let you access your locally stored data, until you put the
computer on the Internet and register it with Microsoft.  They
charge less than a million dollars to unhand your data, but otherwise
it looks to me like a very similar scheme.

There's a first-person report about how Office XP made the computers
donated for the 9/11 missing persons database useless after several
days of data entry -- so the data was abandoned, and re-entered into a
previous (non-DRM) Microsoft word processor.  The report came through
this very mailing list.  See:

  http://www.mail-archive.com/cryptography@wasabisystems.com/msg02134.html

This scenario of word processor vendors denying people access to their
own documents until they do something to benefit the vendor is not
just plausible -- it's happening here and now.

John




Re: Challenge to TCPA/Palladium detractors

2002-08-11 Thread Russell Nelson

AARG!Anonymous writes:
  I'd like the Palladium/TCPA critics to offer an alternative proposal
  for achieving the following technical goal:
  
Allow computers separated on the internet to cooperate and share data
and computations such that no one can get access to the data outside
the limitations and rules imposed by the applications.

Can't be done.  I don't have time to go into ALL the reasons.
Fortunately for me, any one reason is sufficient.  #1: it's all about
the economics.  You have failed to specify that the cost of breaking
into the data has to exceed the value of the data.  But even if you
did that, you'd have to assume that the data was never worth more than
that to *anyone*.  As soon as it was worth that, they could break into
the data, and data is, after all, just data.

Ignore economics at your peril.

-- 
-russ nelson  http://russnelson.com |
Crynwr sells support for free software  | PGPok | businesses persuade
521 Pleasant Valley Rd. | +1 315 268 1925 voice | governments coerce
Potsdam, NY 13676-3213  | +1 315 268 9201 FAX   |




Re: responding to claims about TCPA

2002-08-11 Thread AARG! Anonymous

AARG! wrote:
 I asked Eric Murray, who knows something about TCPA, what he thought
 of some of the more ridiculous claims in Ross Anderson's FAQ (like the
 SNRL), and he didn't respond.  I believe it is because he is unwilling
 to publicly take a position in opposition to such a famous and respected
 figure.

John Gilmore replied:

 Many of the people who know something about TCPA are constrained
 by NDA's with Intel.  Perhaps that is Eric's problem -- I don't know.

Maybe, but he could reply just based on public information.  Despite this
he was unable or unwilling to challenge Ross Anderson.


 One of the things I told them years ago was that they should draw
 clean lines between things that are designed to protect YOU, the
 computer owner, from third parties; versus things that are designed to
 protect THIRD PARTIES from you, the computer owner.  This is so
 consumers can accept the first category and reject the second, which,
 if well-informed, they will do.

I don't agree with this distinction.  If I use a smart card chip that
has a private key on it that won't come off, is that protecting me from
third parties, or vice versa?  If I run a TCPA-enhanced Gnutella that
keeps the RIAA from participating and easily finding out who is running
supernodes (see http://slashdot.org/article.pl?sid=02/08/09/2347245 for
the latest crackdown), I benefit, even though the system technically is
protecting the data from me.

I wrote earlier that if people were honest, trusted computing would not
be necessary, because they would keep their promises.  Trusted computing
allows people to prove to remote users that they will behave honestly.
How does that fit into your dichotomy?  Society has evolved a myriad
mechanisms to allow people to give strong evidence that they will keep
their word; without them, trade and commerce would be impossible.  By your
logic, these protect third parties from you, and hence should be rejected.
You would discard the economic foundation for our entire world.


 TCPA began in that protect third parties from the owner category,
 and is apparently still there today.  You won't find that out by
 reading Intel's modern public literature on TCPA, though; it doesn't
 admit to being designed for, or even useful for, DRM.  My guess is
 that they took my suggestion as marketing advice rather than as a
 design separation issue.  Pitch all your protect-third-party products
 as if they are protect-the-owner products was the opposite of what I
 suggested, but it's the course they (and the rest of the DRM industry)
 are on.  E.g. see the July 2002 TCPA faq at:

   http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf

   3. Is the real goal of TCPA to design a TPM to act as a DRM or
  Content Protection device? 
   No.  The TCPA wants to increase the trust ... [blah blah blah]

 I believe that No is a direct lie.

David Grawrock of Intel has an interesting slide presentation on
TCPA at http://www.intel.com/design/security/tcpa/slides/index.htm.
His slide 3 makes a good point: All 5 members had very different ideas
of what should and should not be added.  It's possible that some of
the differences in perspective and direction on TCPA are due to the
several participants wanting to move in different ways.  Some may have
been strictly focused on DRM; others may have had a more expansive
vision of how trust can benefit all kinds of distributed applications.
So it's not clear that you can speak of the real goal of TCPA, when
there are all these different groups with different ideas.

 Intel has removed the first
 public version 0.90 of the TCPA spec from their web site, but I have
 copies, and many of the examples in it mention DRM, e.g.:

   http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf  (still there)

 This TCPA white paper says that the goal is ubiquity.  Another way to
 say that is monopoly.

Nonsense.  The web is ubiquitous, but is not a monopoly.

 The idea is to force any other choices out of
 the market, except the ones that the movie & record companies want.
 The first scenario (PDF page 7) states: For example, before making
 content available to a subscriber, it is likely that a service
 provider will need to know that the remote platform is trustworthy.

That same language is in the Credible Interoperability document presently
on the web site at
http://www.trustedcomputing.org/docs/Credible_Interoperability_020702.pdf.
So I don't think there is necessarily any kind of a cover-up here.


   http://www.trustedpc.org/home/pdf/spec0818.pdf (gone now)

 Even this 200-page TCPA-0.90 specification, which is carefully written
 to be obfuscatory and misleading, leaks such gems as: These features
 encourage third parties to grant access to by the platform to
 information that would otherwise be denied to the platform (page 14).
 The 'protected store' feature...can hold and manipulate confidential
 data, and will allow the release or use of that data only in the
 presence of a particular

It won't happen here (was Re: TCPA/Palladium -- likely future implications)

2002-08-10 Thread Marcel Popescu

From: AARG! Anonymous [EMAIL PROTECTED]

 Think about it: this one innocuous little box holding the TPME key could
 ultimately be the root of trust for the entire world.  IMO we should
 spare no expense in guarding it and making sure it is used properly.
 With enough different interest groups keeping watch, we should be able
 to keep it from being used for anything other than its defined purpose.

Now I know the general opinion of AARG, and I can't say I much disagree. But
I want to comment on something else here, which I find to be a common trait
with US citizens: it can't happen here. The Chinese gov't can do anything
they like, because any citizen who would try to keep watch would find
himself shot. What basic law of the universe says that this can't happen in
the US? What exactly will prevent them, 10 years from now, to say
compelling state interests require that we get to do whatever we want with
the little box? You already have official gov't policy against the 1st
Amendment, from what I've read.

Mark




Re: responding to claims about TCPA

2002-08-10 Thread John Gilmore

 I asked Eric Murray, who knows something about TCPA, what he thought
 of some of the more ridiculous claims in Ross Anderson's FAQ (like the
 SNRL), and he didn't respond.  I believe it is because he is unwilling
 to publicly take a position in opposition to such a famous and respected
 figure.

Many of the people who know something about TCPA are constrained
by NDA's with Intel.  Perhaps that is Eric's problem -- I don't know.

(I have advised Intel about its security and privacy initiatives,
under a modified NDA, for a few years now.  Ross Anderson has also.
Dave Farber has also.  It was a win-win: I could hear about things
early enough to have a shot at convincing Intel to do the right things
according to my principles; they could get criticized privately rather
than publicly, if they actually corrected the criticized problems
before publicly announcing.  They consult me less than they used to,
probably because I told them too many things they didn't want to
hear.)

One of the things I told them years ago was that they should draw
clean lines between things that are designed to protect YOU, the
computer owner, from third parties; versus things that are designed to
protect THIRD PARTIES from you, the computer owner.  This is so
consumers can accept the first category and reject the second, which,
if well-informed, they will do.  If it's all a mishmash, then
consumers will have to reject all of it, and Intel can't even improve
the security of their machines FOR THE OWNER, because of their history
of security projects that work against the buyer's interest, such as
the Pentium serial number and HDCP.

TCPA began in that protect third parties from the owner category,
and is apparently still there today.  You won't find that out by
reading Intel's modern public literature on TCPA, though; it doesn't
admit to being designed for, or even useful for, DRM.  My guess is
that they took my suggestion as marketing advice rather than as a
design separation issue.  Pitch all your protect-third-party products
as if they are protect-the-owner products was the opposite of what I
suggested, but it's the course they (and the rest of the DRM industry)
are on.  E.g. see the July 2002 TCPA faq at:

  http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf

  3. Is the real goal of TCPA to design a TPM to act as a DRM or
 Content Protection device? 
  No.  The TCPA wants to increase the trust ... [blah blah blah]

I believe that No is a direct lie.  Intel has removed the first
public version 0.90 of the TCPA spec from their web site, but I have
copies, and many of the examples in it mention DRM, e.g.:

  http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf  (still there)

This TCPA white paper says that the goal is ubiquity.  Another way to
say that is monopoly.  The idea is to force any other choices out of
the market, except the ones that the movie & record companies want.
The first scenario (PDF page 7) states: For example, before making
content available to a subscriber, it is likely that a service
provider will need to know that the remote platform is trustworthy.
  
  http://www.trustedpc.org/home/pdf/spec0818.pdf (gone now)

Even this 200-page TCPA-0.90 specification, which is carefully written
to be obfuscatory and misleading, leaks such gems as: These features
encourage third parties to grant access to by the platform to
information that would otherwise be denied to the platform (page 14).
The 'protected store' feature...can hold and manipulate confidential
data, and will allow the release or use of that data only in the
presence of a particular combination of access rights and software
environment.  ... Applications that might benefit include ... delivery
of digital content (such as movies and songs).  (page 15).

Of course, they can't help writing in the DRM mindset regardless of
their intent to confuse us.  In that July 2002 FAQ again:

  9. Does TCPA certify applications and OS's that utilize TPMs? 
  
  No.  The TCPA has no plans to create a certifying authority to
  certify OS's or applications as trusted.  The trust model the TCPA
  promotes for the PC is: 1) the owner runs whatever OS or
  applications they want; 2) The TPM assures reliable reporting of the
  state of the platform; and 3) the two parties engaged in the
  transaction determine if the other platform is trusted for the
  intended transaction.

The transaction?  What transaction?  They were talking about the
owner getting reliable reporting on the security of their applications
and OS's and -- uh -- oh yeah, buying music or video over the Internet.

Part of their misleading technique has apparently been to present no
clear layman's explanations of the actual workings of the technology.
There's a huge gap between the appealing marketing sound bites -- or
FAQ lies -- and the deliberately dry and uneducational 400-page
technical specs.  My own judgement is that this is probably
deliberate, since if the public had an accurate 20-page

Re: Challenge to David Wagner on TCPA

2002-08-10 Thread D.Popkin

-BEGIN PGP SIGNED MESSAGE-

AARG! Anonymous [EMAIL PROTECTED] writes:

 Lucky Green wrote:
  Ray wrote:
   If I buy a lock I expect that by demonstrating ownership I 
   can get a replacement key or have a locksmith legally open it.

  It appears the days when this was true are waning. At least in the PC
  platform domain.

 We have had other systems which work like this for a long while.
 Many consumer devices are sealed such that if you open them you void
 the warranty.  This is to your advantage as a consumer; ...

There is exactly one person in the world qualified to decide what's to
the advantage of that consumer, and it's not AARG! Anonymous.

-BEGIN PGP SIGNATURE-
Version: 2.6.3ia
Charset: noconv

iQBVAwUBPVRO0PPsjZpmLV0BAQEwrQH/eXqkJVmXYmqNtweg6246KMXmCGekK/h6
HNmnd65WeR2A84pJdJFb8jZ2CX6bJ+XrboaDv8klJCo21xTkFxWIuA==
=DL2o
-END PGP SIGNATURE-




Re: Challenge to TCPA/Palladium detractors

2002-08-10 Thread R. Hirschfeld

 Date: Fri, 9 Aug 2002 19:30:09 -0700
 From: AARG!Anonymous [EMAIL PROTECTED]

 Re the debate over whether compilers reliably produce identical object
 (executable) files:
 
 The measurement and hashing in TCPA/Palladium will probably not be done
 on the file itself, but on the executable content that is loaded into
 memory.  For Palladium it is just the part of the program called the
 trusted agent.  So file headers with dates, compiler version numbers,
 etc., will not be part of the data which is hashed.
 
 The only thing that would really break the hash would be changes to the
 compiler code generator that cause it to create different executable
 output for the same input.  This might happen between versions, but
 probably most widely used compilers are relatively stable in that
 respect these days.  Specifying the compiler version and build flags
 should provide good reliability for having the executable content hash
 the same way for everyone.

A trivial observation: this cannot be true across hardware platforms.
TCPA claims to be platform and OS agnostic, but Palladium does not.
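The measurement step quoted above (hash the loaded executable content, not
the file with its volatile headers) can be illustrated with a toy image
layout -- this is not the real TCPA/Palladium format, just the idea:

```python
import hashlib

HEADER_LEN = 64  # hypothetical fixed-size header: build date, compiler version

def measure(image: bytes) -> str:
    """Hash only the executable content that gets loaded, not the header."""
    return hashlib.sha256(image[HEADER_LEN:]).hexdigest()

code = b"\x90" * 128  # the 'trusted agent' code that is actually loaded
build_a = b"built 2002-08-09 gcc 2.96".ljust(HEADER_LEN, b"\x00") + code
build_b = b"built 2002-08-10 gcc 2.96".ljust(HEADER_LEN, b"\x00") + code

# Whole-file hashes differ (the dates differ), but the measured content
# -- what TCPA/Palladium would attest to -- matches across both builds.
assert hashlib.sha256(build_a).digest() != hashlib.sha256(build_b).digest()
assert measure(build_a) == measure(build_b)
```

This is also why Hirschfeld's observation bites: identical loaded content
across different CPU architectures is exactly what you cannot get.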




Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread Eugen Leitl

On Wed, 7 Aug 2002, Matt Crawford wrote:

 Unless the application author can predict the exact output of the
 compilers, he can't issue a signature on the object code.  The

Same version of compiler on same source using same build produces 
identical binaries.

 compilers then have to be inside the trusted base, checking a
 signature on the source code and reflecting it somehow through a
 signature they create for the object code.

You have the source, compile it using the official compiler and the
official build options, and record the blob. Entity X claims it runs the
same system that it gave you the source for. You can't sign it, but you
can verify the signed blob is the same.

The blob can still be trojaned, but you can disassemble and debug it.




Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread Eugen Leitl

On Fri, 9 Aug 2002, David Howe wrote:

 It doesn't though - that is the point. I am not sure if it is simply
 that there are timestamps in the final executable, but Visual C (to give
 a common example, as that is what the windows PGP builds compile with)
 will not give an identical binary, even if you hit rebuild all twice
 in close succession and compare the two outputs, nothing having changed.

I've just verified this also occurs on OpenSSL under RH 7.3 (gcc --version
2.96). I haven't done a binary diff, but I'm also suspecting a time stamp.  
Can anyone shed some light on this?
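The effect David describes is easy to reproduce with any toolchain that
stamps build metadata into its output; a toy "compiler" (purely
illustrative, not Visual C's actual format) makes it obvious:

```python
import hashlib

def toy_compile(source: bytes, timestamp: float) -> bytes:
    """A toy compiler that, like the ones discussed here, embeds a
    build timestamp in its output."""
    return ("built@%.0f|" % timestamp).encode() + source

src = b"int main(void) { return 0; }"
out1 = toy_compile(src, timestamp=1028851200.0)  # first "rebuild all"
out2 = toy_compile(src, timestamp=1028851207.0)  # seconds later

# Nothing in the source changed, yet the binaries -- and their hashes --
# differ, so a naive whole-file hash comparison fails.
assert out1 != out2
assert hashlib.sha256(out1).digest() != hashlib.sha256(out2).digest()
```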




Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread Ken Brown

James A. Donald wrote:
 
 --
 On Wed, 7 Aug 2002, Matt Crawford wrote:
   Unless the application author can predict the exact output of
   the compilers, he can't issue a signature on the object code.
   The
 
 On 9 Aug 2002 at 10:48, Eugen Leitl wrote:
  Same version of compiler on same source using same build
  produces identical binaries.
 
 This has not been my experience.

Nor anyone else's

If only because the exact image you get depends on a hell of a lot of
programs & libraries. Does anyone expect /Microsoft/ of all software
suppliers to provide consistent versioning and reproducible or
predictable software environments? These are the people who brought us
DLL Hell. These are the people who fell into the MDAC versioning
fiasco. 

Ken




RE: Challenge to TCPA/Palladium detractors

2002-08-09 Thread Sam Simpson

I'm not surprised that most people couldn't produce matching PGP
executables - most compilers (irrespective of compiler optimisation
options etc) include a timestamp in the executable.

Regards,

Sam Simpson
[EMAIL PROTECTED]
http://www.samsimpson.com/
Mob:  +44 (0) 7866 726060
Home Office:  +44 (0) 1438 229390
Fax:  +44 (0) 1438 726069

On Fri, 9 Aug 2002, Lucky Green wrote:

 Anonymous wrote:
  Matt Crawford replied:
   Unless the application author can predict the exact output of the
   compilers, he can't issue a signature on the object code.  The
   compilers then have to be inside the trusted base, checking a
   signature on the source code and reflecting it somehow through a
   signature they create for the object code.
 
  It's likely that only a limited number of compiler
  configurations would be in common use, and signatures on the
  executables produced by each of those could be provided.
  Then all the app writer has to do is to tell people, get
  compiler version so-and-so and compile with that, and your
  object will match the hash my app looks for. DEI

 The above view may be overly optimistic. IIRC, nobody outside PGP was
 ever able to compile a PGP binary from source that matched the hash of
 the binaries built by PGP.

 --Lucky Green


 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]




TCPA/Palladium -- likely future implications (Re: dangers of TCPA/palladium)

2002-08-09 Thread Adam Back

On Thu, Aug 08, 2002 at 09:15:33PM -0700, Seth David Schoen wrote:
 Back in the Clipper days [...] how do we know that this
 tamper-resistant chip produced by Mykotronix even implements the
 Clipper spec correctly?.

The picture is related but has some extra wrinkles with the
TCPA/Palladium attestable donglization of CPUs.

- It is always the case that targetted people can have hardware
attacks perpetrated against them.  (Keyboard sniffers placed during
court authorised break-in as FBI has used in mob case of PGP using
Mafiosa [1]).

- In the clipper case people didn't need to worry much if the clipper
chip had malicious deviations from spec, because Clipper had an openly
stated explicit purpose to implement a government backdoor -- there's
no need for NSA to backdoor the explicit backdoor.

But in the TCPA/Palladium case however the hardware tampering risk you
identify is as you say relevant:

- It's difficult for the user to verify hardware.  

- Also: it wouldn't be that hard to manufacture plausibly deniable
implementation mistakes that could equate to a backdoor -- eg the
random number generators used to generate the TPM/SCP private device
keys.
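The RNG worry is concrete: if the generator behind the TPM/SCP device keys
secretly draws from a small seed space, whoever knows the deviation can
regenerate every device key by exhaustion. A toy model of the attack (not
any real TPM key-derivation algorithm):

```python
import hashlib
import random

SEED_BITS = 16  # a "plausibly deniable" mistake: far too little entropy

def generate_device_key(seed: int) -> bytes:
    # Stand-in for deriving a TPM private key from its RNG output.
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

# A device in the field generates its key from the weakened RNG...
device_key = generate_device_key(random.getrandbits(SEED_BITS))

# ...and an attacker who knows the flaw recovers it by trying every seed.
recovered = next(
    generate_device_key(s)
    for s in range(2 ** SEED_BITS)
    if generate_device_key(s) == device_key
)
assert recovered == device_key
```

At 16 bits the search finishes in a blink; the point is that the output
still looks random to anyone inspecting the device from outside.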

However, beyond that there is an even softer target for would-be
backdoorers:

- the TCPA/Palladium hardware manufacturers' endorsement CA keys.

these are the keys to the virtual kingdom -- the virtual kingdom formed
by the closed space within which attested applications and software
agents run.


So specifically let's look at the questions arising:

1. What could a hostile entity(*) do with a copy of a selection of
hardware manufacturer endorsement CA private keys?

( (*) where the hostile entity candidates would for example be
secret service agencies, law enforcement or homeland security
agencies in western countries, RIAA/MPAA in pursuit of their desire
to jam and DoS peer-to-peer file sharing networks, the Chinese
government, the Taiwanese government (they make lots of equipment,
right?) and so on).

a. Who needs to worry -- who will be targetted?

Who needs to worry about this depends on how overt third-party
ownership of these keys is, and hence the pool of people who would
likely be targetted.  

If it's very covert, it would only be used plausibly deniably and only
for Nat Sec / Homeland Security purposes.  If it becomes overt over
time -- a publicly acknowledged, but supposedly court controlled
affair like Clipper, or even more widely desired by a wide-range of
entities for example: keys made available to RIAA / MPAA so they can
do the hacking they have been pushing for -- well then we all need to
worry.


To analyse the answer to question 1, we first need to think about
question 2:

2. What kinds of TCPA/Palladium integrity depending trusted
applications are likely to be built?

Given the powerful (though balance of control changing) new remotely
attestable security features provided by TCPA/Palladium, all kinds of
remote services become possible, for example (though all to the extent
of hardware tamper-resistance and belief that your attacker doesn't
have access to a hardware endorsement CA private key):

- general Application Service Providers (ASPs) that you don't have to
trust to read your data

- less traceable peer-to-peer applications

- DRM applications that make a general purpose computer secure against
BORA (Break Once Run Anywhere), though of course not secure against
ROCA (Rip Once Copy Everywhere) -- which will surely continue to
happen with ripping shifting to hardware hackers.

- general purpose unreadable sandboxes to run general purpose
CPU-for-rent computing farms for hire, where the sender knows you
can't read his code, you can't read his input data, or his output
data, or tamper with the computation.

- file-sharing while robustly hiding knowledge and traceability of
content even to the node serving it -- previously a research question,
now an easy coding problem with efficient

- anonymous remailers where you have more assurance that a given node
is not logging and analysing the traffic being mixed by it


But of course all of these distributed applications, positive and
negative (depending on your view point), are limited in their
assurance of their non-cryptographically assured aspects:

- to the tamper resistance of the device

- to the extent of the users confidence that an entity hostile to them
doesn't have the endorsement CA's private key for the respective
remote servers implementing the network application they are relying
on


and a follow-on question to question 2:

3. Will any software companies still aim for cryptographic assurance?

(cryptographic assurance means you don't need to trust someone not to
reverse engineer the application -- ie you can't read the data because
it is encrypted with a key derived from a password that is only stored
in the user's head).
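The parenthetical definition can be made concrete (a toy sketch only --
the XOR keystream stands in for a real authenticated cipher): the key
exists solely as a function of the password, so no amount of reverse
engineering the machine yields the plaintext.

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # The key is derived from a password that lives only in the user's head.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR against a SHA-256-derived keystream block.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

salt = b"fixed-demo-salt"
ct = xor_stream(derive_key("hunter2", salt), b"secret data")

# The right password recovers the data; nothing stored on the machine does.
assert xor_stream(derive_key("hunter2", salt), ct) == b"secret data"
```

This is the assurance level that needs no trusted hardware at all, which
is why the question of whether vendors will still aim for it matters.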

The extended platform allows you to build new classes of applications
which aren't currently buildable to cryptographic levels

Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread James A. Donald

--
On 9 Aug 2002 at 17:15, AARG! Anonymous wrote:
 to understand it you need a true picture of TCPA rather than the 
 false one which so many cypherpunks have been promoting.

As TCPA is currently vaporware, projections of what it will be, 
and how it will be used are judgments, and are not capable of 
being true or false, though they can be plausible or implausible.

Even with the best will in the world, and I do not think the 
people behind this have the best will in the world, there is an 
inherent conflict between tamper resistance and general purpose 
programmability.  To prevent me from getting at the bits as they 
are sent to my sound card or my video card, the entire computer, 
not just the dongle, has to be somewhat tamper resistant, which is 
going to make the entire computer somewhat less general purpose 
and programmable, thus less useful.

The people behind TCPA might want to do something more evil than 
you say they want to do.  Even if they want to do only what you say 
they want to do, they might be prevented by law enforcement, which 
wants something considerably more far-reaching and evil; and if they 
want to do it, and law enforcement refrains from reaching out and 
taking hold of their work, they still may be unable to do it for 
technical reasons. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 D7ZUyyAS+7CybaH0GT3tHg1AkzcF/LVYQwXbtqgP
 2HBjGwLqIOW1MEoFDnzCH6heRfW1MNGv1jXMIvtwb




Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread Mike Rosing

On Fri, 9 Aug 2002, AARG! Anonymous wrote:

 : Allow computers separated on the internet to cooperate and share data
 : and computations such that no one can get access to the data outside
 : the limitations and rules imposed by the applications.

 It seems to me that my definition is far more useful and appropriate in
 really understanding what TCPA/Palladium are all about.  Adam, what do
 you think?

Just because you can string words together and form a definition doesn't
make it realizable.  Once data is in the clear it can be copied, and no
rules can change that.  Either the data is available to the user, and
they can copy it - or the data is not available to the user, and there's
nothing they can do when their machine does somebody else's calculations.

 I have a couple of suggestions.  One early application for TCPA is in
 closed corporate networks.  In that case the company usually buys all
 the computers and prepares them before giving them to the employees.
 At that time, the company could read out the TPM public key and sign
 it with the corporate key.  Then they could use that cert rather than
 the TPME cert.  This would protect the company's sensitive data against
 eavesdroppers who manage to virtualize their hardware.

And guess what?  I can buy that today!  I don't need either TCPA or
Palladium.  So why do we need TCPA?

 Think about it: this one innocuous little box holding the TPME key could
 ultimately be the root of trust for the entire world.  IMO we should
 spare no expense in guarding it and making sure it is used properly.
 With enough different interest groups keeping watch, we should be able
 to keep it from being used for anything other than its defined purpose.

Man, I want the stuff you are smoking!  One attack point is the root of
trust for the whole world!!???!!!  Take another hit dude, and make sure
you see lots of colors too.

Patience, persistence, truth,
Dr. mike




TCPA ad nauseum

2002-08-09 Thread Mike Rosing

On Fri, 9 Aug 2002, AARG! Anonymous wrote:

 Of course his analysis is spoiled by an underlying paranoia.  So let me
 ask just one question.  How exactly is subversion of the TPM a greater
 threat than subversion of your PC hardware today?  How do you know that
 Intel or AMD don't already have back doors in their processors that
 the NSA and other parties can exploit?  Or that Microsoft doesn't have
 similar backdoors in its OS?  And similarly for all the other software
 and hardware components that make up a PC today?

 In other words, is this really a new threat?  Or are you unfairly blaming
 TCPA for a problem which has always existed and always will exist?

The difference is that *anyone* can see what goes on inside an Intel or
AMD processor.  Only the key holder of the TPM can see inside the
protected code space.  You can't put back doors into the code now
because the code is visible to all users.  The purpose of crypto is to
hide information even though the attacker can see all the machinery
work.  If you don't want to have the machinery visible, then use a
sealed system (like a smart card).

Patience, persistence, truth,
Dr. mike




Re: dangers of TCPA/palladium

2002-08-08 Thread R. Hirschfeld

 Date: Mon, 5 Aug 2002 16:25:26 -0700
 From: AARG! Anonymous [EMAIL PROTECTED]

 The only way that TCPA will become as popular as you fear is if it really
 solves problems for people.  Otherwise nobody will pay the extra $25 to
 put it in their machine.

Although I support the vote-with-your-wallet paradigm, this analysis
seems overly simplistic to me.  Macrovision doesn't solve problems for
most VCR purchasers, but they pay for it anyway.  They have no choice.

In some cases people are required to buy and use something that they
might not otherwise be inclined to pay for, e.g., catalytic converters
in automobiles (which also use palladium).  It doesn't seem reasonable
to similarly require TCPA in computers, but legislators might think
(or be lobbied) otherwise.

If the fears that some people have expressed prove justified and TCPA
becomes primarily a means to enforce draconian copyright restrictions,
then people may well choose to pay for it just to regain pre-TCPA
functionality.  In that case, the problems it solves for them are the
same ones it causes!




Re: Challenge to TCPA/Palladium detractors

2002-08-08 Thread R. Hirschfeld

 Date: Wed, 7 Aug 2002 12:50:29 -0700
 From: AARG! Anonymous [EMAIL PROTECTED]

 I'd like the Palladium/TCPA critics to offer an alternative proposal
 for achieving the following technical goal:
 
   Allow computers separated on the internet to cooperate and share data
   and computations such that no one can get access to the data outside
   the limitations and rules imposed by the applications.

The model and the goal are a bit different, but how about secure
multi-party computation, as introduced by Chaum, Crepeau, and Damgard
in 1988 and subsequently refined by others?
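The simplest building block of such protocols is additive secret sharing,
sketched below.  This is a toy illustration of the idea, not the
Chaum/Crepeau/Damgard construction itself, and a real deployment would draw
shares from a cryptographic RNG (e.g. `secrets`) rather than `random`.

```python
import random

PRIME = 2**61 - 1  # work modulo a prime so shares are uniform

def share(secret: int, n: int) -> list:
    # Split a secret into n additive shares; any n-1 of them are
    # uniformly random and reveal nothing about the secret.
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def add_shared(a: list, b: list) -> list:
    # Each party adds the two shares it holds, locally -- no party
    # ever sees either input in the clear.
    return [(x + y) % PRIME for x, y in zip(a, b)]

def reconstruct(shares: list) -> int:
    return sum(shares) % PRIME

a_shares = share(20, 3)
b_shares = share(22, 3)
total = reconstruct(add_shared(a_shares, b_shares))
print(total)  # 42
```

The parties jointly compute a sum without any of them learning the inputs,
which is exactly the "rules imposed by the application" property, achieved
with mathematics rather than tamper-resistant hardware.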




Re: Challenge to TCPA/Palladium detractors

2002-08-08 Thread Matt Crawford

 I'd like the Palladium/TCPA critics to offer an alternative proposal
 for achieving the following technical goal:
   Allow computers separated on the internet to cooperate and share data
   and computations such that no one can get access to the data outside
   the limitations and rules imposed by the applications.
 [...]
 You could even have each participant compile the program himself,
 but still each app can recognize the others on the network and
 cooperate with them.

Unless the application author can predict the exact output of the
compilers, he can't issue a signature on the object code.  The
compilers then have to be inside the trusted base, checking a
signature on the source code and reflecting it somehow through a
signature they create for the object code.




Re: Challenge to TCPA/Palladium detractors

2002-08-08 Thread AARG! Anonymous

Anon wrote:
 You could even have each participant compile the program himself,
 but still each app can recognize the others on the network and
 cooperate with them.

Matt Crawford replied:
 Unless the application author can predict the exact output of the
 compilers, he can't issue a signature on the object code.  The
 compilers then have to be inside the trusted base, checking a
 signature on the source code and reflecting it somehow through a
 signature they create for the object code.

It's likely that only a limited number of compiler configurations would
be in common use, and signatures on the executables produced by each of
those could be provided.  Then all the app writer has to do is tell
people: get compiler version so-and-so and compile with that, and your
object will match the hash my app looks for.
DEI
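The hash-matching step proposed above -- publish one digest per blessed
compiler configuration, then let each participant check its own build --
could be sketched as follows.  This is a toy: the written bytes stand in
for real compiler output, and `sha256_file` is my own helper, but it shows
why Matt Crawford's point matters -- the scheme only works if two builds
are byte-identical.

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    # Hash a compiled object so peers can compare it to a published digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate two participants producing byte-identical object code with
# the same compiler version and flags (fake bytes stand in for a build).
with tempfile.TemporaryDirectory() as d:
    p1, p2 = os.path.join(d, "a.o"), os.path.join(d, "b.o")
    for p in (p1, p2):
        with open(p, "wb") as f:
            f.write(b"\x7fELF fake object code")
    hashes_match = sha256_file(p1) == sha256_file(p2)

print(hashes_match)  # True
```

Any nondeterminism in the toolchain (timestamps, paths, ordering) breaks
the equality, which is why the compiler configuration must be pinned so
tightly.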



