Re: Give cheese to france?
> > But let's cut to the chase. Assume that all private grocery
> > store owners want to exclude people from their stores. Now
> > assume that 100% of them agree that effective Tuesday, only
> > those people who have a receipt for a $100 or more donation to
> > George W Bush (or Hillary Clinton, whatever) may enter their
> > property to shop for groceries.
> >
> > Their right? Why not?
>
> Yes, of course it is their "right."
>
> But these silly "lifeboat ethics" debates were tiresome more than 30
> years ago, argued in person. Typing answers to them is even more
> tiresome.

That's not what it was.

> Read some of the sources. Few of you social democrats here have done
> so. Maybe you could re-read Locke's first and second treatises. Can't
> hurt.
>
> Which is OK, as it's your life. But you don't belong on this list if
> you have not.

I assume if I refuse to leave, I can expect you to shoot me?

I take it you favor the bearing of arms by citizens. I do too, severely. But I submit for your consideration that 10,000 screaming Sarah Bradys can't damage the too-tentative support of those rights as effectively as one gun-nut loon who advocates shooting unarmed, non-violent soccer moms at the mall who refuse to be expelled on trespassing grounds due to the war protests printed on their t-shirts.

> --Tim May
> "That government is best which governs not at all." --Henry David Thoreau
Re: [IP] Open Source TCPA driver and white papers (fwd)
Mike Rosing wrote:

> Thanks Eugen, It looks like the IBM TPM chip is only a key store
> read/write device. It has no code space for the kind of security
> discussed in the TCPA. The user still controls the machine and can
> still monitor who reads/writes the chip (using a pci bus logger for
> example). There is a lot of emphasis on TPM != Palladium, and TPM !=
> DRM. TPM can not control the machine, and for DRM to work the way
> RIAA wants, TPM won't meet their needs. TPM looks pretty useful as it
> sits for real practical security tho, so I can see why IBM wants
> those !='s to be loud and clear.

Note that while Safford downplays remote attestation in the rebuttal paper, the TCPA specs include remote attestation, which seems on the face of it mostly a DRM-enabling feature. So I would say that Ross Anderson, Lucky and other detractors have it right despite this attempted rebuttal. It is true that the secure boot and key storage features are largely user-beneficial.

He says there is currently no CA, but it is unclear if this is the "privacy CA" or the "endorsement CA". In any case it may just be that in early revisions of the software they haven't implemented this feature yet. He also mentions "no one asked for it" (the privacy CA to issue certificates for use with the remote attestation feature, one presumes). He says you can turn off the endorsement feature.

The main features of TCPA are:

- key storage
- secure boot
- sealing
- remote attestation

The first three are user-focused features, and the last is DRM-focused. Sealing also interacts with remote attestation, in that it frustrates software-only (as opposed to hardware-hacking) attempts to later bypass restrictions imposed on download via remote attestation.

Palladium is more flexible and secure in what it can enforce because of its ring -1 design: it offers a smaller attack surface (the TOR) instead of the whole kernel and all device drivers, as with TCPA.
Safford also argues that it's not fair to criticize TCPA based on DRM-friendly features because the technology is neutral and anything can be used for good or bad (whatever your point of view). However, I'd argue that remote attestation as designed has no really plausible non-DRM use and could easily be dropped without loss of user functionality.

There are other applications for remote attestation -- for example, a VPN server trying to assure itself of the security of client machines. However, these types of applications can still be provided in ways that are useless for DRM -- e.g. retaining remote attestation but allowing the user, via a user-present test, to put the device in a "debug mode" where the bootstrap hashes don't match what is loaded. This kind of thing would be handy for debugging anyway, and it does not lose the user-security value of remote attestation if it is only configurable via a user-present test. TCPA doesn't provide a user-present test (a secure path to keyboard and screen, as Palladium does), but there is a TCPA BIOS, and presumably that could hold such a flag and is (one hopes!) already designed to not be software-changeable.

Similar arguments apply to the Palladium remote attestation function. MS has also made attempts to downplay the DRM-centric role of Palladium.
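The debug-mode idea above can be sketched in a few lines. This is purely illustrative Python; the class and method names are invented and correspond to nothing in the TCPA spec. It shows why a user-present flag neuters attestation for DRM purposes while leaving it usable for checks the user himself wants:

```python
import hashlib

class ToyTPM:
    """Hypothetical sketch of the proposal: an attestation report that a
    user-present flag can decouple from the actually-loaded code."""

    def __init__(self, debug_mode: bool = False):
        # In the proposal this flag is only settable with the user
        # physically present (e.g. a BIOS setting), never by software.
        self.debug_mode = debug_mode

    def attest(self, loaded_image: bytes, claimed_image: bytes) -> str:
        if self.debug_mode:
            # Report the hash of whatever the user claims, not of what
            # actually loaded: remote parties can no longer rely on the
            # report to enforce restrictions on the user's machine.
            return hashlib.sha256(claimed_image).hexdigest()
        # Normal mode: report the hash of what really loaded.
        return hashlib.sha256(loaded_image).hexdigest()

strict = ToyTPM(debug_mode=False)
debug = ToyTPM(debug_mode=True)
real, claimed = b"modified stack", b"vendor-approved stack"
# strict reports the truth; debug reports the user's claim instead.
```

A remote DRM server can no longer distinguish a debug-mode machine from a compliant one, which is exactly what makes the feature DRM-useless while remaining user-beneficial.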
Re: Supremes and thieves.
On Mon, 20 Jan 2003 15:34:09 +0800, you wrote:

> None of this is relevant to individuals copying works for scholarship
> or research. "Fair Use" still applies.
>
> Matthew X wrote:
>
> > We learned as much on Wednesday when the U.S. Supreme Court ruled
> > that Congress can repeatedly extend copyright terms, as it did most
> > recently in 1998 when it added 20 years to the terms for new and
> > existing works.
> >
> > He wanted to publish on the Internet a number of books that should
> > have been in the public domain by now. The people who still control
> > most older works have demonstrated little or no interest in making
> > them available -- and our heritage dwindles by the day.
>
> How can it "dwindle?" The public domain can only increase or hold
> steady. All this ruling does is damp the rate of increase.
>
> Marc de Piolenc

It is like Medicare, and taxes on the rich. The absence of an increase is called a "cut", or "dwindling". See "Doublespeak".

More seriously, the public domain becomes stagnant and dwindles due to a substantial reduction in new additions after the extension of copyrights. It becomes stagnant and dwindles, like a library that is not adding books and no longer receives magazines.

~~~
Re: Television
Re- which software does big letters, I can just say that I am appalled by the ignorance. It's the standard unix "banner" program, some 20 years old.

[large ASCII-art "banner" output]
Re: Television
On Wed, 08 Jan 2003 10:01:22 -0500, you wrote:

> WOW!
>
> While I may agree that Tim May seems to like anarchy as long as he's
> in charge of it, he does come up with some truly destabilising and
> dangerous ideas every now and then.
>
> Like his alter ego Jim Choate, there's some real signal buried under
> that noise, so at least token measures of respect every now and then
> are due.

I've never come across a Tim May post that I thought wasn't worth the time it took to read it. They are all either amusing, informative, or provocative, or some combination of those. I like that. I can't say that about many other posters.
Re: 60 years to rights restoration
Major Variola (ret) feared: > None have yet commented that in 60 years, there will be no one left that > remembers > what things were like. Will people really just wimp out to this? Do you really think all those militia people will just doze on? Maybe people need to start asking themselves, "What would Timmy do?" Remember this -- it matters not how many F16s and Stealth Bombers the fedz have, and it doesn't really matter how many feebs they have, or snitches, or what sort of TIA they employ -- against individuals, or small 3 person cells, they have no chance. If one person went out and started killing cops with a silenced .22, back of the head shots, he could easily kill 100 or more, maybe a 1000 without getting caught. If a 1000 rise up ... And every one that rises up will inspire a thousand more.
the wrong poem
The saddest thing here is that this gets reported without any comment. Snuffing journalists seems far more cost effective than offing pigs. http://www.startribune.com/stories/1576/3443476.html .. Baker discounted claims by federal authorities that Maali had financially supported terrorist groups when he made donations to Palestinian charities, and that an essay and poems he had written showed sympathy for suicide bombers in Israel. ..
buying gold
I decided to look into these DMT Rands that everyone has been yammering about. I'm not terribly surprised to see that they are a product of the Laissez Faire City grifters. No thanks.

This little investigation did spark my interest in acquiring gold, however. Do readers of this list have suggestions about what type of bullion to obtain? How about referrals to reputable dealers? What is the best type of gold to acquire, and where should one get it?
Re: Random Privacy
Greg Broiles wrote about randomizing survey answers:

> That doesn't sound like a solution to me - they haven't provided
> anything to motivate people to answer honestly, nor do they address
> the basic problem, which is relying on the good will and good
> behavior of the marketers - if a website visitor is unwilling to
> trust a privacy policy which says "We'll never use this data to annoy
> or harm you", they're likely to be unimpressed with a privacy policy
> which says "We'll use fancy math tricks to hide the information you
> give us from ourselves."
>
> That's not going to change unless they move the randomizing behavior
> off of the marketer's machine and onto the visitor's machine,
> allowing the visitor to observe and verify the correct operation of
> the privacy technology .. which is about as likely as a real audit of
> security-sensitive source code, where that likelihood is tiny now and
> shrinking rapidly the closer we get to the TCPA/Palladium nirvana.

On the contrary, TCPA/Palladium can solve exactly this problem. It allows the marketers to *prove* that they are running a software package that will randomize the data before storing it. And because Palladium works in opposition to their (narrowly defined) interests, they can't defraud the user by claiming to randomize the data while actually storing it for marketing purposes.

Ironically, those who like to say that Palladium "gives away root on your computer" would have to say in this example that the marketers are giving away root to private individuals. In answering their survey questions, you in effect have root privileges on the surveyor's computers, by this simplistic analysis. This further illustrates how misleading is this characterization of Palladium technology in terms of root privileges.
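The "fancy math tricks" at issue amount to randomized response, a decades-old survey technique. A minimal Python sketch (all function names invented for illustration) shows why an individual answer becomes deniable while aggregate statistics survive:

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise
    report a coin flip. Any single response is plausibly deniable."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_fraction(responses, p_truth: float = 0.75) -> float:
    """Invert the noise: the observed 'yes' fraction f satisfies
    f = p_truth * t + (1 - p_truth) * 0.5, so solve for t."""
    f = sum(responses) / len(responses)
    return (f - (1 - p_truth) * 0.5) / p_truth

random.seed(1)
# 10,000 respondents, of whom exactly 30% would truthfully answer "yes".
truths = [i % 10 < 3 for i in range(10000)]
noisy = [randomized_response(t) for t in truths]
est = estimate_true_fraction(noisy)
# est converges on 0.30 even though no single answer can be trusted.
```

Whether the randomization runs on the visitor's machine (as Greg suggests) or on an attested server (as argued above), the statistical mechanics are the same; the dispute is only over who gets to verify that this code, and not a logging shim, is what actually runs.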
RE: Cryptogram: Palladium Only for DRM
Lucky Green wrote:

> AARG! wrote:
> > In addition, I have argued that trusted computing in general
> > will work very well with open source software. It may even
> > be possible to allow the user to build the executable himself
> > using a standard compilation environment.
>
> What AARG! is failing to mention is that Microsoft holds that
> Palladium, and in particular Trusted Operating Root ("nub")
> implementations, are subject to Microsoft's DRM-OS patent. Absent a
> patent license from Microsoft, any individual developer, open source
> software development effort, and indeed any potential competitor of
> Microsoft that wishes to create a Palladium-like TOR would do so in
> violation of Microsoft's patent. U.S. patent law takes a dim view of
> such illegal infringers: willful infringers, in particular infringers
> that generate a profit from their creation of a non-Microsoft version
> of a TOR, face the risk of a court ordering such infringers to pay
> treble damages.

That's too bad. Trusted computing is a very interesting technology with many beneficial uses. It is a shame that Microsoft has a patent on this and will be enforcing it, which will reduce the number of competing implementations. Of course, those like Lucky who believe that trusted computing technology is evil incarnate are presumably rejoicing at this news. Microsoft's patent will limit the application of this technology. And the really crazy people are the ones who say that Palladium is evil, but Microsoft is being unfair in not licensing their patent widely!

> As of this moment, Microsoft has not provided the open source
> community with a world-wide, royalty-free, irrevocable patent license
> to the totality of Microsoft's patents utilized in Palladium's TOR.
> Since open source efforts therefore remain legally prohibited from
> creating non-Microsoft TORs, AARG!'s lauding of synergies between
> Palladium and open source software development appears premature.
Well, I was actually referring to open source applications, not the OS. Palladium-aware apps that are available in source form can be easily verified to make sure that they aren't doing anything illicit. Since the behavior of the application is relatively opaque while it is protected by Palladium technology, the availability of source serves as an appropriate balance. But it does appear that Microsoft plans to make the source to the TOR available in some form for review, so apparently they too see the synergy between open (or at least published) source and trusted computing.

> > [1] A message from Microsoft's Peter Biddle on 5 Aug 2002;
> > unfortunately the cryptography archive is missing this day's
> > messages. "The memory isn't encrypted, nor are the apps nor
> > the TOR when they are on the hard drive. Encrypting the apps
> > wouldn't make them more secure, so they aren't encrypted."
>
> In the interest of clarity, it probably should be mentioned that any
> claims Microsoft may make stating that Microsoft will not encrypt
> their software or software components when used with Palladium of
> course only apply to Microsoft and not to the countless other
> software vendors creating applications for the Windows platform.

UNLESS Microsoft means that the architecture is such that it does not support encrypting applications! The wording of the statement above seems stronger than just "we don't plan on encrypting our apps at this time."

There are a couple of reasons to believe that this might be true. First, it is understood that Palladium hashes the secure portions of the applications that run. This hash is used to decrypt data and for reporting to remote servers what software is running. It seems likely that the hash is computed when the program is loaded. So the probable API is something like "load this file into secure memory, hash it and begin executing it."
With that architecture, it would not work to do as some have proposed: the program loads data into secure memory, decrypts it, and jumps to it. The hash would change depending on the data, and the program would no longer be running what it was supposed to. This would actually undercut the Palladium security guarantees; the program would no longer be running code with a known hash.

Second, the Microsoft Palladium white paper at http://www.microsoft.com/presspass/features/2002/jul02/0724palladiumwp.asp describes the secure memory as "trusted execution space". This suggests that this memory is designed for execution, not for holding data. The wording hints at an architectural separation between code and data when in the trusted mode.

> Lastly, since I have seen this error in a number of articles, it
> seems worth mentioning that Microsoft stated explicitly that
> increasing the security of DRM schemes protecting digital
> entertainment content, but not executable code, formed the impetus
> to the Palladium effort.

Further reason to believe that Palladium's architecture may not support the encryption of applications.
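The load-hash-execute argument above can be made concrete with a toy model. Everything here is hypothetical (the real Palladium API was never published in this form); the point is only that the code identity is fixed at load time, so decrypt-and-jump schemes change the identity:

```python
import hashlib

class SecureLoader:
    """Toy model of a 'load into secure memory, hash it, execute it' API.
    The name and interface are invented for illustration."""

    def load(self, image: bytes) -> str:
        # The hash taken here is the program's identity for the lifetime
        # of the process: sealed data and remote attestation reports are
        # both bound to this value.
        self.code_identity = hashlib.sha256(image).hexdigest()
        return self.code_identity

loader = SecureLoader()
id_clean = loader.load(b"trusted agent v1")
id_patched = loader.load(b"trusted agent v1" + b" + decrypted payload")
# Any change to the loaded bytes - including data decrypted and jumped
# to after load - yields a different identity, so the patched image can
# no longer unseal secrets or attest as the original program.
```

This is why the proposed "decrypt data into secure memory and jump to it" trick is self-defeating under such an architecture: the resulting identity no longer matches the one the sealed data and remote servers expect.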
Re: Cryptogram: Palladium Only for DRM
Niels Ferguson wrote:

> At 16:04 16/09/02 -0700, AARG! Anonymous wrote:
> > Nothing done purely in software will be as effective as what can be
> > done when you have secure hardware as the foundation. I discuss
> > this in more detail below.
>
> But I am not suggesting to do it purely in software. Read the Intel
> manuals for their CPUs. There are loads of CPU features for process
> separation, securing the operating system, etc. The hardware is all
> there!
>
> Maybe I have to explain the obvious. On boot you boot first to a
> secure kernel, much like the Pd kernel but running on the main CPU.
> This kernel then creates a virtual machine to run the existing
> Windows in, much like VMware does. The virus is caught inside this
> virtual machine. All you need to do is make sure the virtual machine
> cannot write to the part of the disk that contains your security
> kernel.

Thanks for the explanation. Essentially you can create a virtualized Palladium, where you emulate the functionality of the secure hardware. The kernel normally has access to all of memory, for example, but you can virtualize the MMU as VMware does, so that some memory is inaccessible even to the kernel, while the kernel can still run pretty much the same. Similarly, your virtualizing software could compute a hash of code that loads into this secure area, and even mimic the Palladium functionality to seal and unseal data based on that hash. All this would be done at a level inaccessible to ordinary Windows code, so it would be basically as secure as Palladium is with hardware.

The one thing that you don't get with this method is secure attestation. There is no way your software can prove to a remote system that it is running a particular piece of code, as is possible with Pd hardware. However, I believe you see this as not a security problem, since in your view the only use for such functionality is DRM.
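The seal/unseal-based-on-hash functionality mentioned above reduces to a simple key derivation, whether the platform secret lives in hardware or in a virtualizing layer below the kernel. A minimal sketch, with invented names, assuming HMAC as the derivation function:

```python
import hashlib
import hmac

def derive_sealing_key(platform_secret: bytes, code_hash: bytes) -> bytes:
    """Toy sketch of sealing: the key protecting stored secrets is
    derived from both a platform secret (held by hardware, or here by
    the virtualizing layer) and the hash of the running code. A
    different program - even on the same machine - derives a
    different key and so cannot read another program's sealed data."""
    return hmac.new(platform_secret, code_hash, hashlib.sha256).digest()

# The secret never leaves the layer below the (virtualized) kernel.
secret = b"machine-secret-held-below-the-kernel"
key_v1 = derive_sealing_key(secret, hashlib.sha256(b"app v1").digest())
key_v2 = derive_sealing_key(secret, hashlib.sha256(b"app v2").digest())
# key_v1 != key_v2: data sealed by one program is opaque to another,
# and to a tampered version of the same program.
```

Note that nothing in this construction requires remote attestation; it provides local protection only, which is consistent with the point that attestation is the one piece a virtualized Palladium cannot supply.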
I do think there are some issues with this approach to creating a secure system, even independent of the attestation issue. One is performance. According to a presentation by the VMware chief scientist [1], VMware sees slowdowns of from 8 to 30 percent on CPU-bound processes, with graphics-intensive applications even worse, perhaps a factor of 2 slower. Maybe Windows could do better than this, but users aren't going to be happy about giving up 10 percent or more of their CPU performance.

Also, Palladium hardware provides protection against DMA devices: "Even PCI DMA can't read or write memory which has been reserved to a nub's or TA's use (including the nub's or TA's code). This memory is completely inaccessible and can only be accessed indirectly through API calls. The chipset on the motherboard is modified to enforce this sort of restriction." [2] It's conceivable that without this hardware protection, a virus could exploit a security flaw in an external device and get access to the secure memory provided by a virtualized Palladium.

But these are not necessarily major problems. Generally I now agree with your comments, and those of others, that the security benefits of Palladium - except for secure remote attestation - can be provided using existing and standard PC hardware, and that the software changes necessary are much like what would be necessary for the current Palladium design, plus the work to provide VMware-type functionality. However, that still leaves the issue of remote attestation...

> Who are you protecting against? If the system protects the interests
> of the user, then you don't need to protect the system from the user.
> The security chip is only useful if you try to take control away from
> the user.

This is a simplistic view. There are many situations in which it is in the interests of the user to be able to prove to third parties that he is unable to commit certain actions.
A simple example is possession of a third-party cryptographic certificate. The only reason that is valuable is because the user can't create it himself. Any time someone shows a cert, they are giving up some control in order to get something. They can't modify that certificate without rendering it useless. They are limited in what they can do with it. But it is these very limitations that make the cert valuable.

But let me cut to the chase and provide some examples where remote attestation, allowing the user to prove that he is running a particular program and that it is unmolested, is useful. These will hopefully encourage you to modify your belief that "The 'secure chip' in Pd is only needed for DRM. All other claimed benefits of Pd can be achieved using existing hardware. To me this is an objectively verifiable truth." I don't think any of these examples could be solved with software plus existing hardware alone.

The first example is a secure online game client. Ra
Palladium block diagram
Here is a functional block diagram of the Palladium software, based on a recent presentation by Microsoft. My notes were a bit sketchy as I rushed to copy down this slide, so there may be some slight errors. But this is basically what was shown. (Use a monospace font to see it properly.)

             Normal Mode        |      Trusted Mode
                                |
          +------------+        |     +------------+
          |    App     |        |     |   Agent    |
  USER    |  [PdLib]   |---o  o-|-----|  [PdLib]   |
          +-----+------+        |     +-----+------+
                |   Nubsys.exe  |           |
  --------------+---------------|-----------+----------
          +-----+------------+  |   +-------+--------+
  KERNEL  | Main OS | NubMgr |--o o-|     Secure     |
          |   HAL   |  .sys  |  |   |   Executive    |
          | Drivers |        |  |   |    (Nexus)     |
          +------------------+  |   +----------------+

The idea is that initially only the left half exists. To launch Palladium the user runs the Nubsys.exe program. This goes into kernel mode and loads the NubMgr.sys module, which initiates trusted mode and launches the secure executive or "nexus". (This is what is also sometimes called the Nub or the TOR.)

When a Palladium-aware app is launched in user mode, it is linked with a PdLib, and it requests that the Nexus load the corresponding Trusted Agent. The Agent runs trusted in user mode, and has its own PdLib which lets it make system calls into the Nexus. The Trusted Agent and the application then communicate back and forth across the trusted/normal mode boundary.
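The launch sequence just described can be modeled in a few lines. This is an illustrative sketch only: the names mirror the diagram (Nubsys.exe, NubMgr.sys, Nexus, Trusted Agent), but the API is entirely invented since Microsoft has published no such interface:

```python
class Agent:
    """A Trusted Agent: user-mode code running in trusted mode,
    talking to the Nexus through its own PdLib."""
    def __init__(self, name: str, code: bytes, nexus: "Nexus"):
        self.name, self.code, self.nexus = name, code, nexus

class Nexus:
    """The secure executive ('nub'/TOR) running in trusted mode."""
    def __init__(self):
        self.agents = {}

    def load_agent(self, name: str, code: bytes) -> Agent:
        # A Palladium-aware app asks (via its PdLib) for its
        # corresponding Trusted Agent to be loaded.
        agent = Agent(name, code, nexus=self)
        self.agents[name] = agent
        return agent

def nubsys_launch() -> Nexus:
    # Models Nubsys.exe entering kernel mode, loading NubMgr.sys,
    # initiating trusted mode, and launching the Nexus.
    return Nexus()

nexus = nubsys_launch()                  # the left half boots the right
agent = nexus.load_agent("app-agent", b"agent code")
# The normal-mode app and its Agent now exchange messages across the
# trusted/normal boundary via their respective PdLibs.
```

The key structural point the sketch preserves is the asymmetry: the normal-mode side can request services, but only the Nexus decides what runs in trusted mode.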
New Palladium FAQ available
Microsoft has apparently just made available a new FAQ on its controversial Palladium technology at http://www.microsoft.com/PressPass/features/2002/aug02/0821PalladiumFAQ.asp. Samples:

> Q: I've heard that "Palladium" will force people to run only
> Microsoft-approved software.
>
> A: "Palladium" can't do that. "Palladium's" security chip (the SSC)
> and other features are not involved in the boot process of the OS or
> in the OS's decision to load an application that doesn't use a
> "Palladium" feature and execute it. Because "Palladium" is not
> involved in the boot process, it cannot block an OS, or drivers or
> any non-"Palladium" PC application from running. Only the user
> decides what "Palladium" applications get to run. Anyone can write
> an application to take advantage of "Palladium" APIs without
> notifying Microsoft (or anyone else) or getting its (or anyone
> else's) approval.

> Q: Some people have claimed that "Palladium" will enable Microsoft
> or other parties to detect and remotely delete unlicensed software
> from my PC. Is this true?
>
> A: No. As stated above, the function of "Palladium" is to make
> digitally signed statements about code identity and hide secrets
> from other "Palladium" applications and regular Windows kernel- and
> user-mode spaces. "Palladium" doesn't have any features that make it
> easier for an application to detect or delete files.

Hopefully Microsoft will continue to release information about Palladium. That should help to bring some of the more outrageous rumors under control.
Cryptographic privacy protection in TCPA
Here are some more thoughts on how cryptography could be used to enhance user privacy in a system like TCPA. Even if the TCPA group is not receptive to these proposals, it would be useful to have an understanding of the security issues. And the same issues arise in many other kinds of systems which use certificates with some degree of anonymity, so the discussion is relevant even beyond TCPA. The basic requirement is that users have a certificate on a long-term key which proves they are part of the system, but they don't want to show that cert or that key for most of their interactions, due to privacy concerns. They want to have their identity protected, while still being able to prove that they do have the appropriate cert. In the case of TCPA the key is locked into the TPM chip, the "endorsement key"; and the cert is called the "endorsement certificate", expected to be issued by the chip manufacturer. Let us call the originating cert issuer the CA in this document, and the long-term cert the "permanent certificate". A secondary requirement is for some kind of revocation in the case of misuse. For TCPA this would mean cracking the TPM and extracting its key. I can see two situations where this might lead to revocation. The first is a "global" crack, where the extracted TPM key is published on the net, so that everyone can falsely claim to be part of the TCPA system. That's a pretty obvious case where the key must be revoked for the system to have any integrity at all. The second case is a "local" crack, where a user has extracted his TPM key but keeps it secret, using it to cheat the TCPA protocols. This would be much harder to detect, and perhaps equally significantly, much harder to prove. Nevertheless, some way of responding to this situation is a desirable security feature. The TCPA solution is to use one or more Privacy CAs. 
You supply your permanent cert and a new short-term "identity" key; the Privacy CA validates the cert and then signs your key, giving you a new cert on the identity key. For routine use on the net, you show your identity cert and use your identity key; your permanent key and cert are never shown except to the Privacy CA.

This means that the Privacy CA has the power to revoke your anonymity; and worse, he (or more precisely, his key) has the power to create bogus identities. On the plus side, the Privacy CA can check a revocation list and not issue a new identity cert if the permanent key has been revoked. And if someone has done a local crack and the evidence is strong enough, the Privacy CA can revoke his anonymity and allow his permanent key to be revoked.

Let us now consider some cryptographic alternatives. The first is to use Chaum blinding for the Privacy CA interaction. As before, the user supplies his permanent cert to prove that he is a legitimate part of the system, but instead of providing an identity key to be certified, he supplies it in blinded form. The Privacy CA signs this blinded key, the user strips the blinding, and he is left with a cert from the Privacy CA on his identity key. He uses this as in the previous example, showing his privacy cert and using his privacy key.

In this system, the Privacy CA no longer has the power to revoke your anonymity, because he only saw a blinded version of your identity key. However, the Privacy CA retains the power to create bogus identities, so the security risk is still there. If there has been a global crack, and a permanent key has been revoked, the Privacy CA can check the revocation list and prevent that user from acquiring new identities, so revocation works for global cracks. However, for local cracks, where there is suspicious behavior, there is no way to track down the permanent key associated with the cheater. All his interactions are done with an identity key which is unlinkable.
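The Chaum blinding step is worth seeing concretely. Below is the textbook RSA blind signature, in Python with deliberately tiny, insecure parameters (real use needs 1024-bit or larger primes); the mapping to TCPA terms (the message standing in for a hash of the identity key) is my own illustration:

```python
from math import gcd

# Toy Privacy CA keypair. p and q are small primes for illustration only.
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # CA's private signing exponent

def blind(m: int, r: int) -> int:
    # User blinds the message with a random r coprime to n.
    return (m * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    # The Privacy CA signs without ever seeing m itself.
    return pow(blinded, d, n)

def unblind(sig_blinded: int, r: int) -> int:
    # User strips the blinding factor, recovering m^d mod n.
    return (sig_blinded * pow(r, -1, n)) % n

m = 123456789                    # stand-in for a hash of the identity key
r = 987654321                    # user's secret blinding factor
assert gcd(r, n) == 1
sig = unblind(sign(blind(m, r)), r)
assert pow(sig, e, n) == m       # a valid CA signature, yet the CA never saw m
```

Note what the math buys: the CA's view (`blind(m, r)`) is statistically unrelated to `m`, which is exactly why the CA loses the power to revoke anonymity here, and exactly why local cracks become untraceable as argued above.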
So there is no way to respond to local cracks and revoke the keys. Actually, in this system the Privacy CA is not really protecting anyone's privacy, because it doesn't see any identities. There is no need for multiple Privacy CAs and it would make more sense to merge the Privacy CA and the original CA that issues the permanent certs. That way there would be only one agency with the power to forge keys, which would improve accountability and auditability. One problem with revocation in both of these systems, especially the one with Chaum blinding, is that existing identity certs (from before the fraud was detected) may still be usable. It is probably necessary to have identity certs be valid for only a limited time so that users with revoked keys are not able to continue to use their old identity certs. Brands credentials provide a more flexible and powerful approach than Chaum blinding which can potentially provide improvements. The basic setup is the same: users would go to a Privacy CA and show their permanent cert, getting a new cert on an identity key which they would use on the net. The difference i
Re: Cryptographic privacy protection in TCPA
Dr. Mike wrote, patiently, persistently and truthfully:

> On Fri, 16 Aug 2002, AARG! Anonymous wrote:
>
> > Here are some more thoughts on how cryptography could be used to
> > enhance user privacy in a system like TCPA. Even if the TCPA group
> > is not receptive to these proposals, it would be useful to have an
> > understanding of the security issues. And the same issues arise in
> > many other kinds of systems which use certificates with some degree
> > of anonymity, so the discussion is relevant even beyond TCPA.
>
> OK, I'm going to discuss it from a philosophical perspective.
> i.e. I'm just having fun with this.

Fine, but let me put this into perspective. First, although the discussion is in terms of a centralized issuer, the same issues arise if there are multiple issuers, even in a web-of-trust situation. So don't get fixated on the fact that my analysis assumed a single issuer - that was just for simplicity in what was already a very long message.

The abstract problem to be solved is this: given that there is some property which is being asserted via cryptographic certificates (credentials), we want to be able to show possession of that property in an anonymous way. In TCPA the property is "being a valid TPM". Another example would be a credit rating agency who can give out a "good credit risk" credential. You want to be able to show it anonymously in some cases. Yet another case would be a state drivers license agency which gives out an "over age 21" credential, again where you want to be able to show it anonymously.

This is actually one of the oldest problems which proponents of cryptographic anonymity attempted to address, going back to David Chaum's seminal work. TCPA could represent the first wide-scale example of cryptographic credentials being shown anonymously. That in itself ought to be of interest to cypherpunks.
Unfortunately TCPA is not going for full cryptographic protection of anonymity, but is relying on Trusted Third Parties in the form of Privacy CAs. My analysis suggests that although there are a number of solutions in the cryptographic literature, none of them are ideal in this case. Unless we can come up with a really strong solution that satisfies all the security properties, it is going to be hard to make a case that the use of TTPs is a mistake.

> I don't like the idea that users *must* have a "certificate". Why
> can't each person develop their own personal levels of trust and
> associate it with their own public key? Using multiple channels,
> people can prove their key is their word. If any company wants to
> associate a certificate with a customer, that can have lots of
> meanings to lots of other people. I don't see the usefulness of a
> "permanent certificate". Human interaction over electronic media has
> to deal with monkeys, because that's what humans are :-)

A certificate is a standardized and unforgeable statement that some person or key has a particular property, that's all. The kind of system you are talking about, of personal knowledge and trust, can't really be generalized to an international economy.

> > Actually, in this system the Privacy CA is not really protecting
> > anyone's privacy, because it doesn't see any identities. There is
> > no need for multiple Privacy CAs and it would make more sense to
> > merge the Privacy CA and the original CA that issues the permanent
> > certs. That way there would be only one agency with the power to
> > forge keys, which would improve accountability and auditability.
>
> I really, REALLY, *REALLY*, don't like the idea of one entity having
> the ability to create or destroy any person's ability to use their
> computer at whim. You are suggesting that one person (or small group)
> has the power to create (or not) and revoke (or not!) any and all
> TPMs!
> I don't know how to describe my astoundment at the lack of comprehension
> of history.

Whoever makes a statement about a property should have the power to revoke it. I am astounded that you think this is a radical notion.

If one or a few entities become widely trusted to make and revoke statements that people care about, it is because they have earned that trust. If the NY Times says something is true, people tend to believe it. If Intel says that such-and-such a key is in a valid TPM, people may choose to believe this based on Intel's reputation. If Intel later determines that the key has been published on the net and so can no longer be presumed to be a TPM key, it revokes its statement.

This does not mean that Intel would destroy any person's ability to use their computer on a whim. First, having the TPM cert revoked would not destroy your ability to use
Re: TCPA not virtualizable during ownership change
Basically I agree with Adam's analysis. At this point I think he understands the spec as well as I do. He has a good point about the Privacy CA key being another security weakness that could break the whole system. It would be good to consider how exactly that problem could be eliminated using more sophisticated crypto.

Keep in mind that there is a need to be able to revoke Endorsement Certificates if it is somehow discovered that a TPM has been cracked or is bogus. I'm not sure that would be possible with straight Chaum blinding or Brands credentials. I would perhaps look at Group Signature schemes; there is one with efficient revocation being presented at Crypto 02. These involve a TTP, but he can't forge credentials, just link identity keys to endorsement keys (in TCPA terms). Any system which allows for revocation must have such linkability, right?

As for Joe Ashwood's analysis, I think he is getting confused between the endorsement key, endorsement certificate, and endorsement credentials. The first is the key pair created on the TPM. The terms PUBEK and PRIVEK are used to refer to the public and private parts of the endorsement key. The endorsement certificate is an X.509 certificate issued on the endorsement key by the manufacturer. The manufacturer is also called the TPM Entity or TPME. The endorsement credential is the same as the endorsement certificate, but considered as an abstract data structure rather than as a specific embodiment.

The PRIVEK never leaves the chip. The PUBEK does, but it is considered sensitive because it is a de facto unique identifier for the system, like the Intel processor serial number which caused such controversy a few years ago. The endorsement certificate holds the PUBEK value (in the SubjectPublicKeyInfo field) and so is equally a de facto unique identifier, hence it is also not too widely shown.
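As an illustration of the Chaum-style blinding mentioned above, here is a toy RSA blind signature in Python. This is a sketch only: the parameters are tiny and insecure, the values are invented, and the mapping onto TCPA's Privacy CA is my own assumption, not anything from the spec.

```python
# Toy RSA blind signature in the style of Chaum (illustrative only:
# tiny insecure parameters, no padding).  The signer certifies a
# blinded message, so it never sees what it signed -- the kind of
# scheme a Privacy CA could use, at the cost of losing the
# linkability that revocation requires.

n, e, d = 3233, 17, 413      # signer's RSA key: n = 61 * 53, e*d = 1 mod lcm(60, 52)
m = 1234                     # the credential value to be certified
r = 7                        # user's blinding factor, gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # user blinds m before sending it
blind_sig = pow(blinded, d, n)          # signer signs without seeing m
sig = (blind_sig * pow(r, -1, n)) % n   # user strips the blinding factor

assert pow(sig, e, n) == m              # anyone can verify sig against m
```

The point of the exercise: the signer's transcript (`blinded`, `blind_sig`) is unlinkable to the final (`m`, `sig`) pair, which is exactly why straight blinding frustrates revocation.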
Re: Overcoming the potential downside of TCPA
Joe Ashwood writes:

> Actually that does nothing to stop it. Because of the construction of TCPA,
> the private keys are registered _after_ the owner receives the computer,
> this is the window of opportunity against that as well.

Actually, this is not true for the endorsement key, PUBEK/PRIVEK, which is the "main" TPM key, the one which gets certified by the "TPM Entity". That key is generated only once on a TPM, before ownership, and must exist before anyone can take ownership. For reference, see section 9.2: "The first call to TPM_CreateEndorsementKeyPair generates the endorsement key pair. After a successful completion of TPM_CreateEndorsementKeyPair all subsequent calls return TCPA_FAIL." Also section 9.2.1 shows that no ownership proof is necessary for this step, which is because there is no owner at that time. Then look at section 5.11.1, on taking ownership: "user must encrypt the values using the PUBEK." So the PUBEK must exist before anyone can take ownership.

> The worst case for
> cost of this is to purchase an additional motherboard (IIRC Fry's has them
> as low as $50), giving the ability to present a purchase. The
> virtual-private key is then created, and registered using the credentials
> borrowed from the second motherboard. Since TCPA doesn't allow for direct
> remote queries against the hardware, the virtual system will actually have
> first shot at the incoming data. That's the worst case.

I don't quite follow what you are proposing here, but by the time you purchase a board with a TPM chip on it, it will have already generated its PUBEK and had it certified. So you should not be able to transfer a credential of this type from one board to another one.

> The expected case;
> you pay a small registration fee claiming that you "accidentally" wiped your
> TCPA. The best case, you claim you "accidentally" wiped your TCPA, they
> charge you nothing to remove the record of your old TCPA, and replace it
> with your new (virtualized) TCPA.
> So at worst this will cost $50. Once
> you've got a virtual setup, that virtual setup (with all its associated
> purchased rights) can be replicated across an unlimited number of computers.
>
> The important part for this, is that TCPA has no key until it has an owner,
> and the owner can wipe the TCPA at any time. From what I can tell this was
> designed for resale of components, but is perfectly suitable as a point of
> attack.

Actually I don't see a function that will let the owner wipe the PUBEK. He can wipe the rest of the TPM, but that field appears to be set once, retained forever. For example, section 8.10: "Clear is the process of returning the TPM to factory defaults." But a couple of paragraphs later: "All TPM volatile and non-volatile data is set to default value except the endorsement key pair."

So I don't think your fraud will work. Users will not wipe their endorsement keys, accidentally or otherwise. If a chip is badly enough damaged that the PUBEK is lost, you will need a hardware replacement, as I read the spec.

Keep in mind that I only started learning this stuff a few weeks ago, so I am not an expert, but this is how it looks to me.
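The spec passages quoted above amount to a small state machine. A hedged sketch of those rules follows; the class, method names, and return values are invented for illustration, and only the behavior is taken from sections 9.2, 5.11.1 and 8.10.

```python
# Hypothetical model of the TCPA endorsement-key rules discussed above.
# All names here are invented; only the rules come from the spec text:
#   9.2    -- endorsement key pair is generated exactly once
#   5.11.1 -- taking ownership requires the PUBEK to already exist
#   8.10   -- Clear resets everything *except* the endorsement key pair

class ToyTPM:
    def __init__(self):
        self.endorsement_key = None   # (PUBEK, PRIVEK), set exactly once
        self.owner = None
        self.volatile_state = {}

    def create_endorsement_key_pair(self):
        # Spec 9.2: only the first call succeeds; later calls TCPA_FAIL.
        if self.endorsement_key is not None:
            raise RuntimeError("TCPA_FAIL: endorsement key already exists")
        self.endorsement_key = ("PUBEK", "PRIVEK")
        return "PUBEK"                # the PRIVEK never leaves the chip

    def take_ownership(self, secret_encrypted_to_pubek):
        # Spec 5.11.1: no PUBEK, no ownership.
        if self.endorsement_key is None:
            raise RuntimeError("TCPA_FAIL: no endorsement key yet")
        self.owner = secret_encrypted_to_pubek

    def clear(self):
        # Spec 8.10: back to factory defaults, endorsement key excepted.
        self.owner = None
        self.volatile_state = {}

tpm = ToyTPM()
pubek = tpm.create_endorsement_key_pair()
tpm.take_ownership("secret-encrypted-to-" + pubek)
tpm.clear()
assert tpm.endorsement_key is not None   # survives Clear: the "wipe" fraud fails
```

Run through the model and the argument in the post falls out directly: an owner can Clear the chip all day long, but the endorsement key pair, and hence the certified identity, persists.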
TCPA hack delay appeal
It seems that there is a (rather brilliant) way to bypass TCPA (as spec'ed). I learned about it from two separate sources; it looks like two independent, slightly different hacks based on the same protocol flaw. Undoubtedly, more people will figure this out. It seems wise to suppress the urge and craving for fame and NOT to publish the findings at this time. Let them build the thing into a zillion chips first. If you must, post the encrypted, time-stamped solution identifying you as the author, but do not release the key before TCPA is in many, many PCs.
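The "post the encrypted time-stamped solution" idea is a standard commit-now-reveal-later scheme. Here is a minimal hash-commitment sketch; the function names are mine, and a real deployment would also need a trustworthy timestamp on the published digest (e.g. by posting it somewhere widely archived).

```python
# Commit/reveal sketch of "post now, identify yourself later":
# publish a hash commitment today, release the document and nonce
# only after the system has shipped.  Hypothetical helper names.
import hashlib, os

def commit(document: bytes):
    nonce = os.urandom(32)                       # hides low-entropy documents
    digest = hashlib.sha256(nonce + document).digest()
    return digest, nonce                         # publish digest, keep nonce secret

def verify(digest: bytes, nonce: bytes, document: bytes) -> bool:
    return hashlib.sha256(nonce + document).digest() == digest

finding = b"description of the TCPA protocol flaw"
published, nonce = commit(finding)
# ... much later, after TCPA is in many, many PCs ...
assert verify(published, nonce, finding)          # proves authorship and date
assert not verify(published, nonce, b"a different claim")
```

The binding property of the hash is what establishes priority: you cannot later open the commitment to a different finding than the one you committed to.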
Re: Challenge to David Wagner on TCPA
Brian LaMacchia writes:

> So the complexity isn't in how the keys get initialized on the SCP (hey, it
> could be some crazy little hobbit named Mel who runs around to every machine
> and puts them in with a magic wand). The complexity is in the keying
> infrastructure and the set of signed statements (certificates, for lack of a
> better word) that convey information about how the keys were generated &
> stored. Those statements need to be able to represent to other applications
> what protocols were followed and precautions taken to protect the private
> key. Assuming that there's something like a cert chain here, the root of
> this chain could be an OEM, an IHV, a user, a federal agency, your company,
> etc. Whatever that root is, the application that's going to divulge secrets
> to the SCP needs to be convinced that the key can be trusted (in the
> security sense) not to divulge data encrypted to it to third parties.
> Palladium needs to look at the hardware certificates and reliably tell
> (under user control) what they are. Anyone can decide if they trust the
> system based on the information given; Palladium simply guarantees that it
> won't tell anyone your secrets without your explicit request.

This makes a lot of sense, especially for "closed" systems like business LANs and WANs where there is a reasonable centralized authority who can validate the security of the SCP keys. I suggested some time back that since most large businesses receive and configure their computers in the IT department before making them available to employees, that would be a time that they could issue private certs on the embedded SCP keys. The employees' computers could then be configured to use these private certs for their business computing.

However, the larger vision of trusted computing leverages the global internet and turns it into what is potentially a giant distributed computer.
For this to work, for total strangers on the net to have trust in the integrity of applications on each others' machines, will require some kind of centralized trust infrastructure. It may possibly be multi-rooted but you will probably not be able to get away from this requirement. The main problem, it seems to me, is that validating the integrity of the SCP keys cannot be done remotely. You really need physical access to the SCP to be able to know what key is inside it. And even that is not enough, if it is possible that the private key may also exist outside, perhaps because the SCP was initialized by loading an externally generated public/private key pair. You not only need physical access, you have to be there when the SCP is initialized. In practice it seems that only the SCP manufacturer, or at best the OEM who (re) initializes the SCP before installing it on the motherboard, will be in a position to issue certificates. No other central authorities will have physical access to the chips on a near-universal scale at the time of their creation and installation, which is necessary to allow them to issue meaningful certs. At least with the PGP "web of trust" people could in principle validate their keys over the phone, and even then most PGP users never got anyone to sign their keys. An effective web of trust seems much more difficult to achieve with Palladium, except possibly in small groups that already trust each other anyway. If we do end up with only a few trusted root keys, most internet-scale trusted computing software is going to have those roots built in. Those keys will be extremely valuable, potentially even more so than Verisign's root keys, because trusted computing is actually a far more powerful technology than the trivial things done today with PKI. I hope the Palladium designers give serious thought to the issue of how those trusted root keys can be protected appropriately. It's not going to be enough to say "it's not our problem". 
For trusted computing to reach its potential, security has to be engineered into the system from the beginning - and that security must start at the root!
TCPA and Open Source
One of the many charges which has been tossed at TCPA is that it will harm free software. Here is what Ross Anderson writes in the TCPA FAQ at http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html (question 18):

> TCPA will undermine the General Public License (GPL), under which
> many free and open source software products are distributed
>
> At least two companies have started work on a TCPA-enhanced version of
> GNU/linux. This will involve tidying up the code and removing a number
> of features. To get a certificate from the TCPA consortium, the sponsor
> will then have to submit the pruned code to an evaluation lab, together
> with a mass of documentation showing why various known attacks on the code
> don't work.

First we have to deal with this certificate business. Most readers probably assume that you need this cert to use the TCPA system, and even that you would not be able to boot into this Linux OS without such a cert. This is part of the longstanding claim that TCPA will only boot signed code. I have refuted this claim many times, and asked for those who disagree to point to where in the spec it says this, without anyone doing so. I can only hope that interested readers may be beginning to believe my claim, since if it were false, somebody would have pointed to chapter and verse in the TCPA spec just to shut me up about it, if for no better reason.

However, Ross is actually right that TCPA does support a concept for a certificate that signs code. It's called a Validation Certificate. The system can hold a number of these VCs, which represent the "presumed correct" results of the measurement (hashing) process on various software and hardware components. In the case of OS code, then, there could be VCs representing specific OSs which could boot. The point is that while this is a form of signed code, it's not something which gives the TPM control over what OS can boot.
Instead, the VCs are used to report to third party challengers (on remote systems) what the system configuration of this system is "supposed" to be, along with what it actually is. It's up to the remote challenger to decide if he trusts the issuer of the VC, and if so, he will want to see that the actual measurement (i.e. the hash of the OS) matches the value in the VC. So what Ross says above could potentially be true, if and when TCPA compliant operating systems begin to be developed. Assuming that there will be some consortium which will issue VC's for operating systems, and assuming that third parties will typically trust that consortium and only that one, then you will need to get a VC from that group in order to effectively participate in the TCPA network. This doesn't mean that your PC won't boot the OS without such a cert; it just means that if most people choose to trust the cert issuer, then you will need to get a cert from them to get other people to trust your OS. It's much like the power Verisign has today with X.509; most people's software trusts certs from Verisign, so in practice you pretty much need to get a cert from them to participate in the X.509 PKI. So does this mean that Ross is right, that free software is doomed under TCPA? No, for several reasons, not least being a big mistake he makes: > (The evaluation is at level E3 - expensive enough to keep out > the free software community, yet lax enough for most commercial software > vendors to have a chance to get their lousy code through.) Although the > modified program will be covered by the GPL, and the source code will > be free to everyone, it will not make full use of the TCPA features > unless you have a certificate for it that is specific to the Fritz chip > on your own machine. That is what will cost you money (if not at first, > then eventually). The big mistake is the belief that the cert is specific to the "Fritz" chip (Ross's cute name for the TPM). 
Actually the VC data structure is not specific to any one PC. It is intentionally designed not to have any identifying information in it that will represent a particular system. This is because the VC cert has to be shown to remote third parties in order to get them to trust the local system, and TCPA tries very hard to protect user privacy (believe it or not!). If the VC had computer-identifying information in it, then it would be a linkable identifier for all TCPA interactions on the net, which would defeat all of the work TCPA does with Privacy CAs and whatnot to try to protect user privacy. If you understand this, you will see that the whole TCPA concept requires VC's not to be machine specific. People always complain when I point to the spec, as if the use of facts were somehow unfair in this dispute. But if you are willing, you can look at section 9.5.4 of http://www.trustedcomputing.org/docs/main%20v1_1b.pdf, which is the data structure for the validation certificate. It is an X.509 attribute certificate, which is a type of cert that would normally be expected to point back
Another application for trusted computing
I thought of another interesting application for trusted computing systems: mobile agents. These are pieces of software which get transferred from computer to computer, running on each system, communicating with the local system and other visiting agents, before migrating elsewhere.

This was a hot technology a couple of years ago, but it never really went anywhere (so to speak). Part of the reason was that there wasn't that much functionality for agents which couldn't be done better in other ways. But a big part of it was problems with security.

One issue was protecting the host from malicious agents, and much work was done in that direction. This was one of the early selling points of Java, and other sandbox systems were developed as well. Likewise the E language is designed to solve this problem.

But the much harder problem was protecting the agent from malicious hosts. Once an agent transferred into a host machine, it was essentially at the mercy of that system. The host could lie to the agent, and even manipulate its memory and program, to make it do anything it desired. Without the ability to maintain its own integrity, the agent was relatively useless in many ecommerce applications.

Various techniques were suggested to partially address this, such as splitting the agent functionality among multiple agents which would run on different machines, or using cryptographic methods for computing with encrypted instances and the like. But these were inherently so inefficient that any advantages mobile agents might have had were eliminated compared to such things as web services.

Ideally you'd like your agent to truly be autonomous, with its own data, its own code, all protected from the host and other agents. It could even carry a store of electronic cash which it could use to fund its activities on the host machine. It could remember its interactions on earlier machines in an incorruptible way.
And you'd like it to run efficiently, without the enormous overheads of the cryptographic techniques. Superficially such a capability seems impossible. Agents can't have that kind of autonomy. But trusted computing can change this. It can give agents good protection as they move through the net.

Imagine that host computers run a special program, an Agent Virtual Machine or AVM. This program runs the agents in their object language, and it respects each agent's code and data. It does not corrupt the agents, it does not manipulate or copy their memory without authorization from the agent itself. It allows the agents to act in the autonomous fashion we would desire.

Without trusted computing, the problem of course is that there is no way to be sure that a potential host is running a legitimate version of the AVM. It could have a hacked AVM that would allow it to steal cash from the agents, change their memory, and worse. This is where trusted computing can solve the problem. It allows agents to verify that a remote system is running a legitimate AVM before transferring over. Hacked AVMs will have a different hash, and this will be detected via the trusted computing mechanisms. Knowing that the remote machine is running a correct implementation of the AVM allows the agent to move about without being molested.

In this way, trusted computing can solve one of the biggest problems with effective use of mobile agents. Trusted computing finally allows mobile agent technology to work right. This is just one of what I expect to be thousands of applications which can take advantage of the trusted computing concept. Once you have a whole world of people trying to think creatively about how to use this technology, rather than just a handful, there will be an explosion of new applications which today we would never dream are possible.
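The AVM check described above might look something like the following sketch. The attestation quote is stubbed out with a plain hash; `request_quote`, `safe_to_migrate`, and the host data layout are all hypothetical names, not a real TCPA or Palladium API, and a real quote would be signed by the host's TPM/SCP identity key.

```python
# Hedged sketch of the check a mobile agent could make before
# migrating: compare the host's attested AVM measurement against a
# set of known-good hashes.  Everything here is hypothetical except
# the idea: a hacked AVM hashes differently, so it is detected.
import hashlib

KNOWN_GOOD_AVM_HASHES = {
    hashlib.sha256(b"legitimate AVM build 1.0").hexdigest(),
}

def request_quote(host) -> str:
    # Stand-in for remote attestation; a real quote would be the AVM
    # measurement signed by the host's hardware identity key, and the
    # agent would verify that signature as well.
    return hashlib.sha256(host["avm_binary"]).hexdigest()

def safe_to_migrate(host) -> bool:
    return request_quote(host) in KNOWN_GOOD_AVM_HASHES

honest = {"avm_binary": b"legitimate AVM build 1.0"}
hacked = {"avm_binary": b"AVM patched to steal agent cash"}
assert safe_to_migrate(honest)
assert not safe_to_migrate(hacked)   # different hash -> migration refused
```

The design point is that the agent never has to inspect the host's code directly; it only compares a hardware-vouched measurement against a list of AVM builds it already trusts.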
Re: Seth on TCPA at Defcon/Usenix
In discussing how TCPA would help enforce a document revocation list (DRL), Joseph Ashwood contrasted the situation with and without TCPA style hardware, below. I just want to point out that his analysis of the hardware vs software situation says nothing about DRLs specifically; in fact it doesn't even mention them. His analysis actually applies to a wide range of security features, such as the examples given earlier: secure games, improved P2P, distributed computing as Adam Back suggested, DRM of course, etc. TCPA is a potentially very powerful security enhancement, so it does make sense that it can strengthen all of these things, and DRLs as well. But I don't see that it is fair to therefore link TCPA specifically with DRLs, when there are any number of other security capabilities that are also strengthened by TCPA.

Joseph Ashwood wrote:

> Actually it does, in order to make it valuable. Without a hardware assist,
> the attack works like this:
> Hack your software (which is in many ways almost trivial) to reveal its
> private key.
> Watch the protocol.
> Decrypt protocol
> Grab decryption key
> use decryption key
> problem solved

It's not always as easy as you make it sound here. Adam Back wrote Saturday about the interesting history of the giFT project, which reverse-engineered the Kazaa file-sharing protocol. That was a terrific effort that required considerable cryptographic know-how as well as supreme software reverse engineering skills. But then Kazaa changed the protocol, and giFT never managed to become compatible with the new one. I'm not sure whether it was lack of interest or just too difficult, but in any case the project failed (as far as creating an open Kazaa compatible client). It is clear that software hacking is far from "almost trivial" and you can't assume that every software-security feature can and will be broken.

Furthermore, even when there is a break, it won't be available to everyone.
Ordinary people aren't clued in to the hacker community and don't download all the latest patches and hacks to disable security features in their software. Likewise for business customers. In practice, if Microsoft wanted to implement a global, fascist DRL, while some people might be able to patch around it, probably 95%+ of ordinary users would be stuck with it. Therefore a DRL in software would be far from useless, and if there truly was a strong commercial need for such a solution, then chances are it would be there today.

I might mention BTW that for email there is such a product, disappearingink.com, which works along the lines Seth suggested, I believe. It encrypts email with a centralized key, and when that email needs to be deleted, the key is destroyed. This allows corporations to implement a "document retention policy" (which is of course a euphemism for a document destruction policy) to help reduce their vulnerability to lawsuits and fishing expeditions. I don't recall anyone getting up in arms over the disappearingink.com technology or claiming that it was a threat, in the same way that DRLs and SNRLs are being presented in the context of Palladium.

> With hardware assist, trusted software, and a trusted execution environment
> it (doesn't) work like this:
> Hack your software.
> DOH! the software won't run
> revert back to the stored software.
> Hack the hardware (extremely difficult).
> Virtualize the hardware at a second layer, using the grabbed private key
> Hack the software
> Watch the protocol.
> Decrypt protocol
> Grab decryption key
> use decryption key
> Once the file is released the server revokes all trust in your client,
> effectively removing all files from your computer that you have not
> decrypted yet
> problem solved? only for valuable files

First, as far as this last point, you acknowledge that if they can't tell where it came from, your hacked hardware can be an ongoing source of un-DRL'd documents.
But watermarking technology so far has been largely a huge failure, so it is likely that someone clueful enough to hack his TPM could also strip away any identifying markings. Second, given that you do hack the hardware, you may not actually need to do that much in terms of protocol hacking. If you can watch the data going to and from the TPM you can extract keys directly, and that may be enough to let you decrypt the "sealed" data. (The TPM does only public key operations; the symmetric crypto is all done by the app. I don't know if Palladium will work that way or not.) Third, if a document is "liberated" via this kind of hack, it can then be distributed everywhere, outside the "secure trust perimeter" enforced by TCPA/Palladium. We are still in a "break once read anywhere" situation with documents, and any attempt to make one disappear is not going to be very successful, even with TCPA in existence. In short, while TCPA could increase the effectiveness of global DRLs, they wouldn't be *that* much more effective. Most users will neither hack
Re: dangers of TCPA/palladium
Mike Rosing wrote:

> The difference is fundamental: I can change every bit of flash in my BIOS.
> I can not change *anything* in the TPM. *I* control my BIOS. IF, and
> only IF, I can control the TPM will I trust it to extend my trust to
> others. The purpose of TCPA as spec'ed is to remove my control and
> make the platform "trusted" to one entity. That entity has the master
> key to the TPM.
>
> Now, if the spec says I can install my own key into the TPM, then yes,
> it is a very useful tool. It would be fantastic in all the portables
> that have been stolen from the FBI for example. Assuming they use a
> password at turn on, and the TPM is used to send data over the net,
> then they'd know where all their units are and know they weren't
> compromised (or how badly compromised anyway).
>
> But as spec'ed, it is very seriously flawed.

Ben Laurie replied:

> Although the outcome _may_ be like this, your understanding of the TPM
> is seriously flawed - it doesn't prevent you from running whatever you
> want, but what it does do is allow a remote machine to confirm what you
> have chosen to run.

David Wagner commented:

> I don't understand your objection. It doesn't look to me like Rosing
> said anything incorrect. Did I miss something?
>
> It doesn't look like he ever claimed that TCPA directly prevents one from
> running what you want to; rather, he claimed that its purpose (or effect)
> is to reduce his control, to the benefit of others. His claims appear
> to be accurate, according to the best information I've seen.

I don't believe that is an accurate paraphrase of what Mike Rosing said. He said the purpose (not effect) was to remove (not reduce) his control, and make the platform trusted to one entity (not "for the benefit of others").
Unless you want to defend the notion that the purpose of TCPA is to *remove* user control of his machine, and make it trusted to only *one other entity* (rather than a general capability for remote trust), then I think you should accept that what he said was wrong. And Mike said more than this. He said that if he could install his own key into the TPM that would make it a very useful tool. This is wrong; it would completely undermine the trust guarantees of TCPA, make it impossible for remote observers to draw any useful conclusions about the state of the system, and render the whole thing useless. He also talked about how this could be used to make systems "phone home" at boot time. But TCPA has nothing to do with any such functionality as this. In contrast, Ben Laurie's characterization of TCPA is 100% factual and accurate. Do you at least agree with that much, even if you disagree with my criticism of Mike Rosing's comments?
Re: Palladium: technical limits and implications
Adam Back writes:

> +---------------+------------+
> | trusted-agent | user mode  |
> | space         | app space  |
> | (code         +------------+
> |  compartment) | supervisor |
> |               | mode / OS  |
> +---------------+------------+
> | ring -1 / TOR              |
> +----------------------------+
> | hardware / SCP key manager |
> +----------------------------+

I don't think this works. According to Peter Biddle, the TOR can be launched even days after the OS boots. It does not underlie the ordinary user mode apps and the supervisor mode system call handlers and device drivers.

        +---------------+------------+
        | trusted-agent | user mode  |
        | space         | app space  |
        | (code         +------------+
        |  compartment) | supervisor |
        |               | mode / OS  |
+---+   +---------------+------------+
|SCP|---| ring -1 / TOR              |
+---+   +----------------------------+

This is more how I would see it. The SCP is more like a peripheral device, a crypto co-processor, that is managed by the TOR.

Earlier you quoted Seth's blog:

| The nub is a kind of trusted memory manager, which runs with more
| privilege than an operating system kernel. The nub also manages access
| to the SCP.

as justification for putting the nub (TOR) under the OS. But I think in this context "more privilege" could just refer to the fact that it is in the secure memory, which is only accessed by this ring -1 or ring 0 or whatever you want to call it. It doesn't follow that the nub has anything to do with the OS proper. If the OS can run fine without it, as I think you agreed, then why would the entire architecture have to reorient itself once the TOR is launched?

In other words, isn't my version simpler, as it adjoins the column at the left to the pre-existing column at the right, when the TOR launches, days after boot? Doesn't it require less instantaneous, on-the-fly reconfiguration of the entire structure of the Windows OS at the moment of TOR launch? And what, if anything, does my version fail to accomplish that we know that Palladium can do?

> Integrity Metrics in a given level are computed by the level below.
>
> The TOR starts Trusted Agents, the Trusted Agents are outside the OS
> control.
> Therefore a remote application based on remote attestation
> can know about the integrity of the trusted-agent, and TOR.
>
> ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by
> TOR;

I had thought the hardware might also produce the metrics for trusted agents, but you could be right that it is the TOR which does so. That would be consistent with the "incremental extension of trust" philosophy which many of these systems seem to follow.

> The parallel stack to the right: OS is computed by TOR; Application is
> computed by OS.

No, that doesn't make sense. Why would the TOR need to compute a metric of the OS? Peter has said that Palladium does not give information about other apps running on your machine:

: Note that in Pd no one but the user can find out the totality of what SW is
: running except for the nub (aka TOR, or trusted operating root) and any
: required trusted services. So a service could say "I will only communicate
: with this app" and it will know that the app is what it says it is and
: hasn't been perverted. The service cannot say "I won't communicate with this
: app if this other app is running" because it has no way of knowing for sure
: if the other app isn't running.

> So for general applications you still have to trust the OS, but the OS
> could itself have its integrity measured by the TOR. Of course given
> the rate of OS exploits especially in Microsoft products, it seems
> likely that the aspect of the OS that checks integrity of loaded
> applications could itself be tampered with using a remote exploit.

Nothing Peter or anyone else has said indicates that this is a property of Palladium, as far as I can remember.

> Probably the latter problem is the reason Microsoft introduced ring -1
> in palladium (it seems to be missing in TCPA).

No, I think it is there to prevent debuggers and supervisor-mode drivers from manipulating secure code.
TCPA is more of a whole-machine spec dealing with booting an OS, so it doesn't have to deal with the question of running secure code next to insecure code.
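The whole-machine boot measurement that TCPA performs can be sketched as a hash chain in the style of the spec's PCR-extend operation. This is a sketch only: the real spec uses SHA-1 and fixed-size registers, and the stage names here are invented.

```python
# Sketch of TCPA-style measured boot: each stage is hashed into a
# Platform Configuration Register before it runs, so the final PCR
# value commits to the entire boot sequence.  (Shape of the mechanism
# only; the real spec uses SHA-1 over fixed-size register values.)
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = H(PCR_old || H(measurement)) -- order-sensitive chaining
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measured_boot(stages) -> bytes:
    pcr = b"\x00" * 32                  # reset value at power-on
    for stage in stages:
        pcr = extend(pcr, stage)        # measure each stage, then run it
    return pcr

boot = [b"boot ROM", b"OS loader", b"kernel"]
good = measured_boot(boot)
evil = measured_boot([b"boot ROM", b"hacked loader", b"kernel"])
assert good != evil     # any change anywhere in the chain shows in the PCR
```

Because each register value depends on every earlier measurement, a remote challenger who trusts the reported PCR learns about the whole boot sequence at once, which is exactly the "whole-machine" character described above.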
Re: responding to claims about TCPA
David Wagner wrote:

> To respond to your remark about bias: No, bringing up Document Revocation
> Lists has nothing to do with bias. It is only right to seek to understand
> the risks in advance. I don't understand why you seem to insinuate
> that bringing up the topic of Document Revocation Lists is an indication
> of bias. I sincerely hope that I misunderstood you.

I believe you did, because if you look at what I actually wrote, I did not say that "bringing up the topic of DRLs is an indication of bias":

> The association of TCPA with SNRLs is a perfect example of the bias and
> sensationalism which has surrounded the critical appraisals of TCPA.
> I fully support John's call for a fair and accurate evaluation of this
> technology by security professionals. But IMO people like Ross Anderson
> and Lucky Green have disqualified themselves by virtue of their wild and
> inaccurate public claims. Anyone who says that TCPA has SNRLs is making
> a political statement, not a technical one.

My core claim is the last sentence. It's one thing to say, as you are, that TCPA could make applications implement SNRLs more securely. I believe that is true, and if this statement is presented in the context of "dangers of TCPA" or something similar, it would be appropriate. But even then, for a fair analysis, it should make clear that SNRLs can be done without TCPA, and it should go into some detail about just how much more effective a SNRL system would be with TCPA. (I will write more about this in responding to Joseph Ashwood.) And to be truly unbiased, it should also talk about good uses of TCPA.

If you look at Ross Anderson's TCPA FAQ at http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html, he writes (question 4):

: When you boot up your PC, Fritz takes charge. He checks that the boot
: ROM is as expected, executes it, measures the state of the machine;
: then checks the first part of the operating system, loads and executes
: it, checks the state of the machine; and so on.
The trust boundary, of : hardware and software considered to be known and verified, is steadily : expanded. A table is maintained of the hardware (audio card, video card : etc) and the software (O/S, drivers, etc); Fritz checks that the hardware : components are on the TCPA approved list, that the software components : have been signed, and that none of them has a serial number that has : been revoked. He is not saying that TCPA could make SNRLs more effective. He says that "Fritz checks... that none of [the software components] has a serial number that has been revoked." He is flatly stating that the TPM chip checks a serial number revocation list. That is both biased and factually untrue. Ross's whole FAQ is incredibly biased against TCPA. I don't see how anyone can fail to see that. If it were titled "FAQ about Dangers of TCPA" at least people would be warned that they were getting a one-sided presentation. But it is positively shameful for a respected security researcher like Ross Anderson to pretend that this document is giving an unbiased and fair description. I would be grateful if someone who disagrees with me, who thinks that Ross's FAQ is fair and even-handed, would speak up. It amazes me that people can see things so differently. And Lucky's slide presentation, http://www.cypherpunks.to, is if anything even worse. I already wrote about this in detail so I won't belabor the point. Again, I would be very curious to hear from someone who thinks that his presentation was unbiased.
Re: responding to claims about TCPA
AARG! wrote: > I asked Eric Murray, who knows something about TCPA, what he thought > of some of the more ridiculous claims in Ross Anderson's FAQ (like the > SNRL), and he didn't respond. I believe it is because he is unwilling > to publicly take a position in opposition to such a famous and respected > figure. John Gilmore replied: > > Many of the people who "know something about TCPA" are constrained > by NDA's with Intel. Perhaps that is Eric's problem -- I don't know. Maybe, but he could reply just based on public information. Despite this he was unable or unwilling to challenge Ross Anderson. > One of the things I told them years ago was that they should draw > clean lines between things that are designed to protect YOU, the > computer owner, from third parties; versus things that are designed to > protect THIRD PARTIES from you, the computer owner. This is so > consumers can accept the first category and reject the second, which, > if well-informed, they will do. I don't agree with this distinction. If I use a smart card chip that has a private key on it that won't come off, is that protecting me from third parties, or vice versa? If I run a TCPA-enhanced Gnutella that keeps the RIAA from participating and easily finding out who is running supernodes (see http://slashdot.org/article.pl?sid=02/08/09/2347245 for the latest crackdown), I benefit, even though the system technically is protecting the data from me. I wrote earlier that if people were honest, trusted computing would not be necessary, because they would keep their promises. Trusted computing allows people to prove to remote users that they will behave honestly. How does that fit into your dichotomy? Society has evolved a myriad mechanisms to allow people to give strong evidence that they will keep their word; without them, trade and commerce would be impossible. By your logic, these protect third parties from you, and hence should be rejected. 
You would discard the economic foundation for our entire world. > TCPA began in that "protect third parties from the owner" category, > and is apparently still there today. You won't find that out by > reading Intel's modern public literature on TCPA, though; it doesn't > admit to being designed for, or even useful for, DRM. My guess is > that they took my suggestion as marketing advice rather than as a > design separation issue. "Pitch all your protect-third-party products > as if they are protect-the-owner products" was the opposite of what I > suggested, but it's the course they (and the rest of the DRM industry) > are on. E.g. see the July 2002 TCPA faq at: > > http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf > > 3. Is the real "goal" of TCPA to design a TPM to act as a DRM or > Content Protection device? > No. The TCPA wants to increase the trust ... [blah blah blah] > > I believe that "No" is a direct lie. David Grawrock of Intel has an interesting slide presentation on TCPA at http://www.intel.com/design/security/tcpa/slides/index.htm. His slide 3 makes a good point: "All 5 members had very different ideas of what should and should not be added." It's possible that some of the differences in perspective and direction on TCPA are due to the several participants wanting to move in different ways. Some may have been strictly focused on DRM; others may have had a more expansive vision of how trust can benefit all kinds of distributed applications. So it's not clear that you can speak of the "real goal" of TCPA, when there are all these different groups with different ideas. > Intel has removed the first > public version 0.90 of the TCPA spec from their web site, but I have > copies, and many of the examples in the mention DRM, e.g.: > > http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf (still there) > > This TCPA white paper says that the goal is "ubiquity". Another way to > say that is monopoly. Nonsense. The web is ubiquitous, but is not a monopoly. 
> The idea is to force any other choices out of > the market, except the ones that the movie & record companies want. > The first "scenario" (PDF page 7) states: "For example, before making > content available to a subscriber, it is likely that a service > provider will need to know that the remote platform is trustworthy." That same language is in the Credible Interoperability document presently on the web site at http://www.trustedcomputing.org/docs/Credible_Interoperability_020702.pdf. So I don't think there is necessarily any kind of a cover-up here. > http://www.trustedpc.org/home/pdf/spec0818.pdf (gone now) > > Even this 200-page TCPA-0.90 specification, which is carefully written > to be obfuscatory and misleading, leaks such gems as: "These features > encourage third parties to grant access to by the platform to > information that would otherwise be denied to the platform" (page 14). > "The 'protected store' feature...can hold and manipulate confidential > data, and will allow t
Seth on TCPA at Defcon/Usenix
Seth Schoen of the EFF has a good blog entry about Palladium and TCPA at http://vitanuova.loyalty.org/2002-08-09.html. He attended Lucky's presentation at DEF CON and also sat on the TCPA/Palladium panel at the USENIX Security Symposium. Seth has a very balanced perspective on these issues compared to most people in the community. It makes me proud to be an EFF supporter (in fact I happen to be wearing my EFF T-shirt right now). His description of how the Document Revocation List could work is interesting as well. Basically you would have to connect to a server every time you wanted to read a document, in order to download a key to unlock it. Then if "someone" decided that the document needed to un-exist, they would arrange for the server to stop serving that key, and the document would effectively be deleted, everywhere. I think this clearly would not be a feature that most people would accept as an enforced property of their word processor. You'd be unable to read things unless you were online, for one thing. And any document you were relying on might be yanked away from you with no warning. Such a system would be so crippled that if Microsoft really did this for Word, sales of "vi" would go through the roof. It reminds me of an even better way for a word processor company to make money: just scramble all your documents, then demand ONE MILLION DOLLARS for the keys to decrypt them. The money must be sent to a numbered Swiss account, and the software checks with a server to find out when the money has arrived. Some of the proposals for what companies will do with Palladium seem about as plausible as this one. Seth draws an analogy with Acrobat, where the paying customers are actually the publishers, the reader being given away for free. So Adobe does have incentives to put in a lot of DRM features that let authors control publication and distribution. But he doesn't follow his reasoning to its logical conclusion when dealing with Microsoft Word.
That program is sold to end users - people who create their own documents for the use of themselves and their associates. The paying customers of Microsoft Word are exactly the ones who would be screwed over royally by Seth's scheme. So if we "follow the money" as Seth in effect recommends, it becomes even more obvious that Microsoft would never force Word users to be burdened with a DRL feature. And furthermore, Seth's scheme doesn't rely on TCPA/Palladium. At the risk of aiding the fearmongers, I will explain that TCPA technology actually allows for a much easier implementation, just as it does in so many other areas. There is no need for the server to download a key; it only has to download an updated DRL, and the Word client software could be trusted to delete anything that was revoked. But the point is, Seth's scheme would work just as well today, without TCPA existing. As I quoted Ross Anderson saying earlier with regard to "serial number revocation lists", these features don't need TCPA technology. So while I have some quibbles with Seth's analysis, on the whole it is the most balanced that I have seen from someone who has no connection with the designers (other than my own writing, of course). A personal gripe is that he referred to Lucky's "critics", plural, when I feel all alone out here. I guess I'll have to start using the royal "we". But he redeemed himself by taking mild exception to Lucky's slide show, which is a lot farther than anyone else has been willing to go in public.
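[Editor's sketch: the "trusted client" variant of the DRL described above reduces to a few lines. This is a toy model, not anything from the TCPA spec; all names (`fetch_drl`, the document IDs) are invented for illustration, and the "enforcement" is exactly the kind of client-side promise the post says needs no TCPA at all.]

```python
# Toy sketch of the Document Revocation List mechanism described above.
# Names and IDs are hypothetical. The client is simply *trusted* to honor
# the list -- nothing here requires TCPA, which is the post's point.

def fetch_drl():
    """Stand-in for downloading the current DRL from a server."""
    return {"doc-1138", "doc-0042"}  # IDs someone decided should un-exist

def enforce_drl(documents, drl):
    """A 'trusted' word processor would delete any revoked document."""
    return {doc_id: text for doc_id, text in documents.items()
            if doc_id not in drl}

library = {
    "doc-0042": "embarrassing memo",
    "doc-7777": "harmless shopping list",
}
library = enforce_drl(library, fetch_drl())
print(sorted(library))  # only the unrevoked document survives
```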
Re: Challenge to TCPA/Palladium detractors
Re the debate over whether compilers reliably produce identical object (executable) files: The measurement and hashing in TCPA/Palladium will probably not be done on the file itself, but on the executable content that is loaded into memory. For Palladium it is just the part of the program called the "trusted agent". So file headers with dates, compiler version numbers, etc., will not be part of the data which is hashed. The only thing that would really break the hash would be changes to the compiler code generator that cause it to create different executable output for the same input. This might happen between versions, but probably most widely used compilers are relatively stable in that respect these days. Specifying the compiler version and build flags should provide good reliability for having the executable content hash the same way for everyone.
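[Editor's sketch: the distinction drawn above, hashing the loaded executable content rather than the file itself, can be made concrete. The two-field "file format" and all names here are invented for illustration; a real measurement would parse a genuine executable format.]

```python
import hashlib

# Toy illustration of the point above: measure the executable *content*,
# not the file. The "header;code" file layout is invented for this sketch.

def make_file(build_timestamp, code):
    """Simulate a build: volatile header metadata plus stable code."""
    header = ("built=" + build_timestamp + ";").encode()
    return header + code

def measure_loaded_content(image):
    """Hash only what would be loaded into memory, skipping the header."""
    _, _, code = image.partition(b";")
    return hashlib.sha256(code).hexdigest()

code = b"\x55\x48\x89\xe5"             # same generated code both times
build_a = make_file("2002-08-01", code)
build_b = make_file("2002-08-09", code)

# Whole-file hashes differ (timestamps), but the measurement agrees:
assert hashlib.sha256(build_a).digest() != hashlib.sha256(build_b).digest()
assert measure_loaded_content(build_a) == measure_loaded_content(build_b)
```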
Re: Thanks, Lucky, for helping to kill gnutella
Several people have objected to my point about the anti-TCPA efforts of Lucky and others causing harm to P2P applications like Gnutella. Eric Murray wrote: > Depending on the clients to "do the right thing" is fundamentally > stupid. Bram Cohen agrees: > Before claiming that the TCPA, which is from a deployment standpoint > vaporware, could help with gnutella's scaling problems, you should > probably learn something about what gnutella's problems are first. The > truth is that gnutella's problems are mostly that it's a screamer > protocol, and limiting which clients could connect would do nothing to fix > that. I will just point out that it was not my idea, but rather that Salon said that the Gnutella developers were considering moving to authorized clients. According to Eric, those developers are "fundamentally stupid." According to Bram, the Gnutella developers don't understand their own protocol, and they are supporting an idea which will not help. Apparently their belief that clients like Qtrax are hurting the system is totally wrong, and keeping such clients off the system won't help. I can't help believing the Gnutella developers know more about their own system than Bram and Eric do. If they disagree, their argument is not with me, but with the Gnutella people. Please take it there. Ant chimes in: > My copy of "Peer to Peer" (Oram, O'Reilly) is out on loan but I think Freenet > and Mojo use protocols that require new users to be contributors before they > become consumers. Pete Chown echoes: > If you build a protocol which allows selfish behaviour, you have done > your job badly. Preventing selfish behaviour in distributed systems is > not easy, but that is the problem we need to solve. It would be a good > discussion for this list. As far as Freenet and MojoNation, we all know that the latter shut down, probably in part because the attempted traffic-control mechanisms made the whole network so unwieldy that it never worked.
At least in part this was also due to malicious clients, according to the analysis at http://www.cs.rice.edu/Conferences/IPTPS02/188.pdf. And Freenet has been rendered inoperative in recent months by floods. No one knows whether they are fundamental protocol failings, or the result of selfish client strategies, or calculated attacks by the RIAA and company. Both of these are object lessons in the difficulties of successful P2P networking in the face of arbitrary client attacks. Some people took issue with the personal nature of my criticism: > Your personal vendetta against Lucky is very childish. > This sort of attack doesn't do your position any good. Right, as if my normal style has been so effective. Not one person has given me the least support in my efforts to explain the truth about TCPA and Palladium. Anyway, maybe I was too personal in singling out Lucky. He is far from the only person who has opposed TCPA. But Lucky, in his slides at http://www.cypherpunks.to, claims that TCPA's designers had as one of their objectives "To meet the operational needs of law enforcement and intelligence services" (slide 2); and to give privileged access to user's computers to "TCPA members only" (slide 3); that TCPA has an OS downloading a "serial number revocation list" (SNRL) which he has provided no evidence for whatsoever (slide 14); that it loads an "initial list of undesirable applications" which is apparently another of his fabrications (slide 15); that TCPA applications on startup load both a serial number revocation list but also a document revocation list, again a completely unsubstantiated claim (slide 19); that apps then further verify that spyware is running, another fabrication (slide 20). 
He then implies that the DMCA applies to reverse engineering when it has an explicit exemption for that (slide 23); that the maximum possible sentence of 5 years is always applied (slide 24); that TCPA is intended to: defeat the GPL, enable information invalidation, facilitate intelligence collection, meet law enforcement needs, and more (slide 27); that only signed code will boot in TCPA, contrary to the facts (slide 28). He provides more made-up details about the mythical DRL (slide 31); more imaginary details about document IDs, information monitoring and invalidation to support law enforcement and intelligence needs, none of which has anything to do with TCPA (slide 32-33). As apparent support for these he provides an out-of-context quote[1] from a Palladium manager, who if you read the whole article was describing their determination to keep the system open (slide 34). He repeats the unfounded charge that the Hollings bill would mandate TCPA, when there's nothing in the bill that says such a thing (slide 35); and he exaggerates the penalties in that bill by quoting the maximum limits as if they are the default (slide 36). Lucky can provide all this misinformation, all under the pretence, mind you, that this *is* TCPA. He was educating the audience, most
Re: TCPA/Palladium -- likely future implications
I want to follow up on Adam's message because, to be honest, I missed his point before. I thought he was bringing up the old claim that these systems would "give the TCPA root" on your computer. Instead, Adam is making a new point, which is a good one, but to understand it you need a true picture of TCPA rather than the false one which so many cypherpunks have been promoting. Earlier Adam offered a proposed definition of TCPA/Palladium's function and purpose: > "Palladium provides an extensible, general purpose programmable > dongle-like functionality implemented by an ensemble of hardware and > software which provides functionality which can, and likely will be > used to expand centralised control points by OS vendors, Content > Distrbuters and Governments." IMO this is total bullshit, political rhetoric that is content-free compared to the one I offered: : Allow computers separated on the internet to cooperate and share data : and computations such that no one can get access to the data outside : the limitations and rules imposed by the applications. It seems to me that my definition is far more useful and appropriate in really understanding what TCPA/Palladium are all about. Adam, what do you think? If we stick to my definition, you will come to understand that the purpose of TCPA is to allow application writers to create closed spheres of trust, where the application sets the rules for how the data is handled. It's not just DRM, it's Napster and banking and a myriad other applications, each of which can control its own sensitive data such that no one can break the rules. At least, that's the theory. But Adam points out a weak spot. Ultimately applications trust each other because they know that the remote systems can't be virtualized. The apps are running on real hardware which has real protections. But applications know this because the hardware has a built-in key which carries a certificate from the manufacturer, who is called the TPME in TCPA. 
As the applications all join hands across the net, each one shows his cert (in effect) and all know that they are running on legitimate hardware. So the weak spot is that anyone who has the TPME key can run a virtualized TCPA, and no one will be the wiser. With the TPME key they can create their own certificate that shows that they have legitimate hardware, when they actually don't. Ultimately this lets them run a rogue client that totally cheats, disobeys all the restrictions, shows the user all of the data which is supposed to be secret, and no one can tell. Furthermore, if people did somehow become suspicious about one particular machine, with access to the TPME key the eavesdroppers can just create a new virtual TPM and start the fraud all over again. It's analogous to how someone with Verisign's key could masquerade as any secure web site they wanted. But it's worse because TCPA is almost infinitely more powerful than PKI, so there is going to be much more temptation to use it and to rely on it. Of course, this will be inherently somewhat self-limiting as people learn more about it, and realize that the security provided by TCPA/Palladium, no matter how good the hardware becomes, will always be limited to the political factors that guard control of the TPME keys. (I say keys because likely more than one company will manufacture TPM's. Also in TCPA there are two other certifiers: one who certifies the motherboard and computer design, and the other who certifies that the board was constructed according to the certified design. The NSA would probably have to get all 3 keys, but this wouldn't be that much harder than getting just one. And if there are multiple manufacturers then only 1 key from each of the 3 categories is needed.) To protect against this, Adam offers various solutions. One is to do crypto inside the TCPA boundary. But that's pointless, because if the crypto worked, you probably wouldn't need TCPA. 
Realistically most of the TCPA applications can't be cryptographically protected. "Computing with encrypted instances" is a fantasy. That's why we don't have all those secure applications already. Another is to use a web of trust to replace or add to the TPME certs. Here's a hint. Webs of trust don't work. Either they require strong connections, in which case they are too sparse, or they allow weak connections, in which case they are meaningless and anyone can get in. I have a couple of suggestions. One early application for TCPA is in closed corporate networks. In that case the company usually buys all the computers and prepares them before giving them to the employees. At that time, the company could read out the TPM public key and sign it with the corporate key. Then they could use that cert rather than the TPME cert. This would protect the company's sensitive data against eavesdroppers who manage to virtualize their hardware. For the larger public network, the first thing I would suggest is that the TPME
[no subject]
Adam Back writes a very thorough analysis of possible consequences of the amazing power of the TCPA/Palladium model. He is clearly beginning to "get it" as far as what this is capable of. There is far more to this technology than simple DRM applications. In fact Adam has a great idea for how this could finally enable selling idle CPU cycles while protecting crucial and sensitive business data. By itself this could be a "killer app" for TCPA/Palladium. And once more people start thinking about how to exploit the potential, there will be no end to the possible applications. Of course his analysis is spoiled by an underlying paranoia. So let me ask just one question. How exactly is subversion of the TPM a greater threat than subversion of your PC hardware today? How do you know that Intel or AMD don't already have back doors in their processors that the NSA and other parties can exploit? Or that Microsoft doesn't have similar backdoors in its OS? And similarly for all the other software and hardware components that make up a PC today? In other words, is this really a new threat? Or are you unfairly blaming TCPA for a problem which has always existed and always will exist?
Thanks, Lucky, for helping to kill gnutella
An article on Salon this morning (also being discussed on slashdot), http://www.salon.com/tech/feature/2002/08/08/gnutella_developers/print.html, discusses how the file-trading network Gnutella is being threatened by misbehaving clients. In response, the developers are looking at limiting the network to only authorized clients: > On Gnutella discussion sites, programmers are discussing a number of > technical proposals that would make access to the network contingent > on good behavior: If you write code that hurts Gnutella, in other > words, you don't get to play. One idea would allow only "clients that > you can authenticate" to speak on the network, Fisk says. This would > include the five-or-so most popular Gnutella applications, including > "Limewire, BearShare, Toadnode, Xolox, Gtk-Gnutella, and Gnucleus." If > new clients want to join the group, they would need to abide by a certain > communication specification. They intend to do this using digital signatures, and there is precedent for this in past situations where there have been problems: > Alan Cox, a veteran Linux developer, says that he's seen this sort of > debate before, and he's not against a system that keeps out malicious > users using technology. "Years and years ago this came up with a game > called Xtrek," Cox says. People were building clients with unfair > capabilities to play the space game -- and the solution, says Cox, > was to introduce digital signatures. "Unless a client has been signed, > it can't play. You could build any client you wanted, but what you can't > do is build an Xtrek client that let you play better." Not discussed in the article is the technical question of how this can possibly work. If you issue a digital certificate on some Gnutella client, what stops a different client, an unauthorized client, from pretending to be the legitimate one? 
This is especially acute if the authorized client is open source, as then anyone can see the cert, see exactly what the client does with it, and merely copy that behavior. If only there were a technology in which clients could verify and yes, even trust, each other remotely. Some way in which a digital certificate on a program could actually be verified, perhaps by some kind of remote, trusted hardware device. This way you could know that a remote system was actually running a well-behaved client before admitting it to the net. This would protect Gnutella from not only the kind of opportunistic misbehavior seen today, but the future floods, attacks and DOSing which will be launched in earnest once the content companies get serious about taking this network down. If only... Luckily the cypherpunks are doing all they can to make sure that no such technology ever exists. They will protect us from being able to extend trust across the network. They will make sure that any open network like Gnutella must forever face the challenge of rogue clients. They will make sure that open source systems are especially vulnerable to rogues, helping to drive these projects into closed source form. Be sure and send a note to the Gnutella people reminding them of all you're doing for them, okay, Lucky?
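[Editor's sketch: the question raised above, what stops an unauthorized client from pretending to be a legitimate one, can be made concrete. In this toy model (all names invented), the client's "credential" is a secret embedded in its open source, so a rogue client that copies it is indistinguishable on the wire.]

```python
import hashlib
import hmac

# Sketch of the problem described above: an "authorized" open-source client
# proves itself with a secret visible in its own source code, so any rogue
# client can present exactly the same proof. All names are hypothetical.

CLIENT_SECRET = b"authorized-client-key"   # sits right there in the source

def respond(challenge, secret):
    """How the client answers the network's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def network_accepts(challenge, response):
    """The network can only check the response, not *who* computed it."""
    return hmac.compare_digest(response, respond(challenge, CLIENT_SECRET))

challenge = b"nonce-123"
legit_response = respond(challenge, CLIENT_SECRET)   # the real client
rogue_response = respond(challenge, CLIENT_SECRET)   # a rogue, same secret

assert network_accepts(challenge, legit_response)
assert network_accepts(challenge, rogue_response)    # indistinguishable
```

Without some way to verify what code is actually running on the remote machine, any check of this form collapses to "do you know the published secret," which everyone does.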
Re: Challenge to TCPA/Palladium detractors
Anon wrote: > You could even have each participant compile the program himself, > but still each app can recognize the others on the network and > cooperate with them. Matt Crawford replied: > Unless the application author can predict the exact output of the > compilers, he can't issue a signature on the object code. The > compilers then have to be inside the trusted base, checking a > signature on the source code and reflecting it somehow through a > signature they create for the object code. It's likely that only a limited number of compiler configurations would be in common use, and signatures on the executables produced by each of those could be provided. Then all the app writer has to do is to tell people, get compiler version so-and-so and compile with that, and your object will match the hash my app looks for. DEI
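[Editor's sketch: the scheme proposed above, publishing expected hashes for a handful of blessed compiler configurations, might look like this. Compiler versions, flags, and the object bytes are invented placeholders.]

```python
import hashlib

# Toy sketch of the scheme above: the app author publishes the expected
# object-code hash for each supported compiler configuration. Anyone who
# compiles with a listed configuration should reproduce a listed hash.

def object_hash(object_code):
    return hashlib.sha256(object_code).hexdigest()

# What the author ships alongside the source (values invented):
obj = b"...deterministic output of gcc 2.95.3 -O2..."
EXPECTED = {("gcc-2.95.3", "-O2"): object_hash(obj)}

def verify(config, object_code):
    """True iff this (compiler, flags) pair is listed and the hash matches."""
    return EXPECTED.get(config) == object_hash(object_code)

assert verify(("gcc-2.95.3", "-O2"), obj)
assert not verify(("gcc-3.1", "-O2"), obj)   # unlisted config: no match
```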
Re: Other uses of TCPA
Mike Rosing wrote: > Who owns PRIVEK? Who controls PRIVEK? That's who own's TCPA. PRIVEK, the TPM's private key, is generated on-chip. It never leaves the chip. No one ever learns its value. Given this fact, who would you say owns and controls it? > And then there was this comment in yet another message: > > >In addition, we assume that programs are able to run "unmolested"; > >that is, that other software and even the user cannot peek into the > >program's memory and manipulate it or learn its secrets. Palladium has > >a feature called "trusted space" which is supposed to be some special > >memory that is immune from being compromised. We also assume that > >all data sent between computers is encrypted using something like SSL, > >with the secret keys being held securely by the client software (hence > >unavailable to anyone else, including the users). > > Just how "immune" is this program space? Does the operator/owner of > the machine control it, or does the owner of PRIVEK control it? Not much information is provided about this feature in the Palladium white paper. From what I understand, no one is able to manipulate the program when it is in this trusted space, not the machine owner, nor any external party. Only the program is in control. > So > the owner of PRIVEK can send a trojan into my machine and take it over > anytime they want. Cool, kind of like the movie "Collosis" where a > super computer takes over the world. No, for several reasons. First, PRIVEK doesn't really have an owner in the sense you mean. It is more like an autonomous agent. Second, the PRIVEK stuff is part of the TCPA spec, while the trusted space is from Palladium, and they don't seem to have much to do with each other. And last, just because a program can run without interference, it is a huge leap to infer that anyone can put a trojan onto your machine. > The more I learn about TCPA, the more I don't like it. 
No one has said anything different despite the 40+ messages I have sent on this topic. Is this because TCPA is that bad, or is it because everyone is stubborn? Look, I just showed that all these bad things you thought about TCPA were wrong. The PRIVEK is not controlled by someone else, it does not own the trusted space, and it allows no one to put a trojan onto your machine. But you won't now say that TCPA is OK, will you? You just learned some information which objectively should make you feel less bad about it, and yet you either don't feel that way, or you won't admit it. I am coming to doubt that people's feelings and beliefs about TCPA are based on facts at all. No matter how much I correct negative misconceptions about these systems, no one will admit to having any more positive feelings about it.
Privacy-enhancing uses for TCPA
Here are some alternative applications for TCPA/Palladium technology which could actually promote privacy and freedom. A few caveats, though: they do depend on a somewhat idealized view of the architecture. It may be that real hardware/software implementations are not sufficiently secure for some of these purposes, but as systems become better integrated and more technologically sound, this objection may go away. And these applications do assume that the architecture is implemented without secret backdoors or other intentional flaws, which might be guaranteed through an open design process and manufacturing inspections. Despite these limitations, hopefully these ideas will show that TCPA and Palladium actually have many more uses than the heavy-handed and control-oriented ones which have been discussed so far. To recap, there are basically two technologies involved. One is "secure attestation". This allows machines to securely receive a hash of the software which is running remotely. It is used in these examples to know that a trusted client program is running on the remote machine. The other is "secure storage". This allows programs to encrypt data in such a way that no other program can decrypt it. In addition, we assume that programs are able to run "unmolested"; that is, that other software and even the user cannot peek into the program's memory and manipulate it or learn its secrets. Palladium has a feature called "trusted space" which is supposed to be some special memory that is immune from being compromised. We also assume that all data sent between computers is encrypted using something like SSL, with the secret keys being held securely by the client software (hence unavailable to anyone else, including the users). The effect of these technologies is that a number of computers across the net, all running the same client software, can form their own closed virtual world. 
They can exchange and store data of any form, and no one can get access to it unless the client software permits it. That means that the user, eavesdroppers, and authorities are unable to learn the secrets protected by software which uses these TCPA features. (Note, in the sequel I will just write TCPA when I mean TCPA/Palladium.) Now for a simple example of what can be done: a distributed poker game. Of course there are a number of crypto protocols for playing poker on the net, but they are quite complicated. Even though they've been around for almost 20 years, I've never seen game software which uses them. With TCPA we can do it trivially. Each person runs the same client software, which fact can be tested using secure attestation. The dealer's software randomizes a deck and passes out the cards to each player. The cards are just strings like "ace of spades", or perhaps simple numerical equivalents - nothing fancy. Of course, the dealer's software learns in this way what cards every player has. But the dealer himself (i.e. the human player) doesn't see any of that, he only sees his own hand. The software keeps the information secret from the user. As each person makes his play, his software sends simple messages telling what cards he is exposing or discarding, etc. At the end each person sends messages showing what his hand is, according to the rules of poker. This is a trivial program. You could do it in one or two pages of code. And yet, given the TCPA assumptions, it is just as secure as a complex cryptographically protected version would be that takes ten times as much code. Of course, without TCPA such a program would never work. Someone would write a cheating client which would tell them what everyone else's cards were when they were the dealer. There would be no way that people could trust each other not to do this. But TCPA lets people prove to each other that they are running the legitimate client. 
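[Editor's sketch: under the stated assumptions, the dealer logic really is "one or two pages of code." Here is a compressed version, with the actual TCPA/Palladium guarantees (attestation already verified, trusted space hiding memory from the user, encrypted links) reduced to comments, since they are exactly what the hardware is assumed to supply.]

```python
import random

# Toy sketch of the poker dealer described above. The hard part -- proving
# every peer runs this exact client, and hiding the client's memory from
# its own user -- is assumed to come from TCPA/Palladium and appears here
# only as comments.

RANKS = "23456789TJQKA"
SUITS = "shdc"

def deal(num_players, cards_each=5, seed=None):
    deck = [r + s for r in RANKS for s in SUITS]
    random.Random(seed).shuffle(deck)
    # The dealer's *software* sees every hand; the dealer's *user* would be
    # shown only his own, because trusted space keeps the rest secret.
    return [deck[i * cards_each:(i + 1) * cards_each]
            for i in range(num_players)]

hands = deal(4)
assert len(hands) == 4 and all(len(h) == 5 for h in hands)
assert len({c for h in hands for c in h}) == 20   # no duplicated cards
```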
So this is a simple example of how the secure attestation features of TCPA/Palladium can allow a kind of software which would never work today, software where people trust each other.

Let's look at another example, a P2P system with anonymity. Again, there are many cryptographic systems in the literature for anonymous communication. But they tend to be complicated and inefficient. With TCPA we only need to set up a simple flooding broadcast network.

Let each peer connect to a few other peers. To prevent traffic analysis, keep each node-to-node link at a constant traffic level using dummy padding. (Recall that each link is encrypted using SSL.) When someone sends data, it gets sent everywhere via a simple routing strategy. The software then makes the received message available to the local user, if he is the recipient. Possibly the source of the message is carried along with it, to help with routing; but this information is never leaked outside the secure communications part of the software, and never shown to any users.

That's all there is to it. Just send messages with flood broadcasts
Re: Other uses of TCPA
James Donald writes: > James Donald writes: > > > I can only see one application for voluntary TCPA, and that is > > > the application it was designed to perform: Make it possible > > > run software or content which is encrypted so that it will > > > only run on one computer for one time period. > > On 3 Aug 2002 at 20:10, Nomen Nescio wrote: > > For TCPA, you'd have to have the software as a blob which is > > encrypted to some key that is locked in the TPM. But the > > problem is that the endorsement key is never leaked except to > > the Privacy CA > > (Lots of similarly untintellible stuff deleted) > > You have lost me, I have no idea why you think what you are > talking about might be relevant to my assertion. I'm sorry, I'm just using the language and data structures from TCPA to try to understand how your assertion could relate to it. If you are making a claim about TCPA, perhaps you could express it in terms of those specific features which are supported by TCPA. > The TPM has its own secret key, it makes the corresponding public > key widely available to everyone, and its own internal good known > time. No, the TPM public key is not widely available to everyone. In fact, believe it or not, it is a relatively closely held secret. This is because the public key is in effect a unique identifier like the Intel processor ID number, and we all know what a firestorm that caused. Intel is paranoid about being burned again, so they have created a very elaborate system in which the TPM's public key is exposed only as narrowly as possible. The TPM public key is called the Endorsement key - this is the key which is signed by the manufacturer and which proves that the TPM is a valid implementation of TCPA. Here is what section 9.2 of the TCPA spec says about it: : A TPM only has one asymmetric endorsement key pair. Due to the nature of : this key pair, both the public and private parts of the key have privacy : and security concerns. 
: : Exporting the PRIVEK from the TPM must not occur. This is for security : reasons. The PRIVEK is a decryption key and never performs any signature : operations. : : Exporting the public PUBEK from the TPM under controlled circumstances : is allowable. Access to the PUBEK must be restricted to entities that : have a "need to know." This is for privacy reasons. The PUBEK is the public part of the TPM key and is not supposed to be widely available. It is only for those who have a "need to know", which definitely does not include everyone who would like to send some software to the system. In fact, it is only sent to Privacy CAs, which use it to encrypt a cert on a transient key that will be widely exposed. But I'm sorry, I'm going unintelligible again, aren't I? Also, nothing in the TCPA standard refers to securely knowing the time. Section 10.7 says "There is no requirement for a clock function in the TPM", so the date/time info comes from the normal, insecure hardware clock. > So when your customer's payment goes through, you then > send him a copy of your stuff encrypted to his TPM, a copy which > only his TPM can make use of. Your code, which the TPM decrypts > and executes, looks at the known good time, and if the user is > out of time, refuses to play. Well, without using any jargon, I will only say that TCPA doesn't work like this, and if you don't believe me, you will have to read the spec and verify it for yourself.
RE: Challenge to David Wagner on TCPA
Mike Rosing wrote: > On Fri, 2 Aug 2002, AARG! Anonymous wrote: > > > You don't have to send your data to Intel, just a master storage key. > > This key encrypts the other keys which encrypt your data. Normally this > > master key never leaves your TPM, but there is this optional feature > > where it can be backed up, encrypted to the manufacturer's public key, > > for recovery purposes. I think it is also in blinded form. > > In other words, the manufacturer has access to all your data because > they have the master storage key. > > Why would everyone want to give one manufacturer that much power? It's not quite that bad. I mentioned the blinding. What happens is that before the master storage key is encrypted, it is XOR'd with a random value, which is also output by the TPM along with the encrypted recovery blob. You save them both, but only the encrypted blob gets sent to the manufacturer. So when the manufacturer decrypts the data, he doesn't learn your secrets. The system is cumbersome, but not an obvious security leak.
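The XOR blinding step described above is easy to see in miniature. In this hypothetical sketch the encryption to the manufacturer's public key is omitted (only the XOR arithmetic matters for the point being made):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

master_key = os.urandom(32)
blind = os.urandom(32)        # random value the TPM outputs alongside the blob

# This blob (after encryption to the manufacturer's key, not shown) is what
# gets sent off-machine; the blind value stays with the owner.
recovery_blob = xor(master_key, blind)

# The manufacturer can decrypt the blob but, lacking 'blind', learns nothing:
# the blob is uniformly random from its point of view.
# The owner, holding 'blind', can always undo the blinding:
assert xor(recovery_blob, blind) == master_key
```

This is a one-time pad: as long as the blind value is random and kept by the owner, the manufacturer's copy carries no information about the master storage key.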
RE: Challenge to David Wagner on TCPA
Peter Trei envisions data recovery in a TCPA world: > HoM: I want to recover my data. > Me: OK: We'll pull the HD, and get the data off it. > HoM: Good - mount it as a secondary HD in my new system. > Me: That isn't going to work now we have TCPA and Palladium. > HoM: Well, what do you have to do? > Me: Oh, it's simple. We encrypt the data under Intel's TPME key, > and send it off to Intel. Since Intel has all the keys, they can > unseal all your data to plaintext, copy it, and then re-seal it for > your new system. It only costs $1/Mb. > HoM: Let me get this straight - the only way to recover this data is > to let > Intel have a copy, AND pay them for it? > Me: Um... Yes. I think MS might be involved as well, if your were > using > Word. > HoM: You are *so* dead. It's not quite as bad as all this, but it is still pretty bad. You don't have to send your data to Intel, just a master storage key. This key encrypts the other keys which encrypt your data. Normally this master key never leaves your TPM, but there is this optional feature where it can be backed up, encrypted to the manufacturer's public key, for recovery purposes. I think it is also in blinded form. Obviously you'd need to do this backup step before the TPM crashed; afterwards is too late. So maybe when you first get your system it generates the on-chip storage key (called the SRK, storage root key), and then exports the recovery blob. You'd put that on a floppy or some other removable medium and store it somewhere safe. Then when your system dies you pull out the disk and get the recovery blob. You communicate with the manufacturer, give him this recovery blob, along with the old TPM key and the key to your new TPM in the new machine. The manufacturer decrypts the blob and re-encrypts it to the TPM in the new machine. It also issues and distributes a CRL revoking the cert on the old TPM key so that the old machine can't be used to access remote TCPA data any more. 
(Note, the CRL is not used by the TPM itself, it is just used by remote servers to decide whether to believe client requests.) The manufacturer sends the data back to you and you load it into the TPM in your new machine, which decrypts it and stores the master storage key. Now it can read your old data. Someone asked if you'd have to go through all this if you just upgraded your OS. I'm not sure. There are several secure registers on the TPM, called PCRs, which can hash different elements of the BIOS, OS, and other software. You can lock a blob to any one of these registers. So in some circumstances it might be that upgrading the OS would keep the secure data still available. In other cases you might have to go through some kind of recovery procedure. I think this recovery business is a real Achilles heel of the TCPA and Palladium proposals. They are paranoid about leaking sealed data, because the whole point is to protect it. So they can't let you freely copy it to new machines, or decrypt it from an insecure OS. This anal protectiveness is inconsistent with the flexibility needed in an imperfect world where stuff breaks. My conclusion is that the sealed storage of TCPA will be used sparingly. Ross Anderson and others suggest that Microsoft Word will seal all of its documents so that people can't switch to StarOffice. I think that approach would be far too costly and risky, given the realities I have explained above. Instead, I would expect that only highly secure data would be sealed, and that there would often be some mechanism to recover it from elsewhere. For example, in a DRM environment, maybe the central server has a record of all the songs you have downloaded. Then if your system crashes, rather than go through a complicated crypto protocol to recover, you just buy a new machine, go to the server, and re-download all the songs you were entitled to. 
Or in a closed environment, like a business which seals sensitive documents, the data could be backed up redundantly to multiple central file servers, each of which seal it. Then if one machine crashes, the data is available from others and there is no need to go through the recovery protocol. So there are solutions, but they will add complexity and cost. At the same time they do add genuine security and value. Each application and market will have to find its own balance of the costs and benefits.
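The seal-to-measurement behaviour that makes recovery so awkward can be modelled in a few lines. This is a toy sketch with hypothetical names: a real TPM encrypts with an on-chip key, whereas here the "ciphertext" is just a keystream derived from the measurement plus a chip secret, so that a changed measurement (say, a different OS) yields garbage on unseal:

```python
import hashlib, os

CHIP_SECRET = os.urandom(32)   # stands in for the key that never leaves the TPM

def measure(software: bytes) -> bytes:
    """Stand-in for a PCR value: a hash of what was booted."""
    return hashlib.sha256(software).digest()

def keystream(pcr: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(CHIP_SECRET + pcr + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(data: bytes, pcr: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(pcr, len(data))))

unseal = seal  # XOR cipher: the same operation both ways

blob = seal(b"secret document", measure(b"os-v1"))
assert unseal(blob, measure(b"os-v1")) == b"secret document"  # same OS: works
assert unseal(blob, measure(b"os-v2")) != b"secret document"  # new OS: garbage
```

The last line is exactly the upgrade problem discussed above: change the measured environment and the sealed data becomes unrecoverable without a migration or recovery procedure.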
RE: Challenge to David Wagner on TCPA
Peter Trei writes: > It's rare enough that when a new anononym appears, we know > that the poster made a considered decision to be anonymous. > > The current poster seems to have parachuted in from nowhere, > to argue a specific position on a single topic. It's therefore > reasonable to infer that the nature of that position and topic has > some bearing on the decision to be anonymous. Yes, my name is "AARG!". That was the first thing my mother said after I was born, and the name stuck. Not really. For Peter's information, the name associated with a message through an anonymous remailer is simply the name of the last remailer in the chain, whatever that remailer operator chose to call it. AARG is a relatively new remailer, but if you look at http://anon.efga.org/Remailers/TypeIIList you will see that it is very reliable and fast. I have been using it as an exit remailer lately because other ones that I have used often produce inconsistent results. It has not been unusual to have to send a message two or three times before it appears. So far that has not been a problem with this one. So don't read too much into the fact that a bunch of anonymous postings have suddenly started appearing from one particular remailer. For your information, I have sent over 400 anonymous messages in the past year to cypherpunks, coderpunks, sci.crypt and the cryptography list (35 of them on TCPA related topics).
RE: Challenge to David Wagner on TCPA
Sampo Syreeni writes: > On 2002-08-01, AARG!Anonymous uttered to [EMAIL PROTECTED],...: > > >It does this by taking hashes of the software before transferring > >control to it, and storing those hashes in its internal secure > >registers. > > So, is there some sort of guarantee that the transfer of control won't be > stopped by a check against cryptographic signature within the executable > itself, in the future? That sort of thing would be trivial to enforce via > licencing terms, after all, and would allow for the introduction of a > strictly limited set of operating systems to which control would be > transferred. TCPA apparently does not have "licensing terms" per se. They say, in their FAQ, http://www.trustedcomputing.org/docs/Website_TCPA%20FAQ_0703021.pdf, "The TCPA spec is currently set up as a 'just publish' IP model." So there are no licensing terms to enforce, and no guarantees that people won't do bad things outside the scope of the spec. Of course, you realize that the same thing is true with PCs today, right? There are few guarantees in this life. If you think about it, TCPA doesn't actually facilitate the kind of crypto-signature-checking you are talking about. You don't need all this fancy hardware and secure hashes to do that. Your worrisome signature checking would be applied on the software which *hasn't yet been loaded*, right? All the TCPA hardware will give you is a secure hash on the software which has already loaded before you ran. That doesn't help you; in fact your code can pretty well predict the value of this, given that it is running. Think about this carefully, it is a complicated point but you can get it if you take your time. In short, to implement a system where only signed code can run, TCPA is not necessary and not particularly helpful. 
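To make the point concrete, here is what a signed-code-only loader looks like with no TCPA hardware at all: just an embedded verification key and a check before transferring control. Everything here is a hypothetical sketch (HMAC stands in for a public-key signature, which is what a real loader would use):

```python
import hashlib, hmac

VENDOR_KEY = b"vendor-signing-key"   # would be an embedded public key in practice

def sign(code: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def load(code: bytes, signature: bytes):
    """Refuse to run anything whose signature does not verify."""
    if not hmac.compare_digest(signature, sign(code)):
        raise PermissionError("unsigned code refused")
    return "running"                 # stand-in for transferring control

good = b"approved program"
assert load(good, sign(good)) == "running"
try:
    load(b"tampered program", sign(good))
    assert False, "should have been refused"
except PermissionError:
    pass
```

Note that nothing here measures what has *already* run: the check applies to code not yet loaded, which is the part TCPA's after-the-fact hash registers cannot help with.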
> I'm having a lot of trouble seeing the benefit in TCPA > without such extra measures, given that open source software would likely > evolve which circumvented any protection offered by the more open ended > architecture you now describe. I don't follow what you are getting at with the open source. Realize that when you boot a different OS, the TCPA attestation features will allow third parties to detect this. So your open source OS cannot masquerade as a different one and fool a third party server into downloading data to your software. And likewise, data which was sealed (encrypted) under a secure OS cannot be unsealed once a different OS boots, because the sealing/unsealing is all done on-chip, and the chip uses the secure hash registers to check if the unsealing is allowed. > >Then, when the data is decrypted and "unsealed", the hash is compared to > >that which is in the TPM registers now. This can make it so that data > >which is encrypted when software system X boots can only be decrypted > >when that same software boots. > > Again, such values would be RE'd and reported by any sane open source OS > to the circuitry, giving access to whatever data there is. If this is > prevented, one can bootstrap an absolutely secure platform where whatever > the content provider says is the Law, including a one where every piece of > runnable OS software actually enforces the kind of control over > permissible signatures Peter is so worried about. Where's the guarantee > that this won't happen, one day? Not sure I follow this here... the sealed data cannot be reported by an open source OS because the secret keys never leave the chip without being themselves encrypted. As for your second proposal, you are suggesting that you could write an OS which would only run signed applications? And run it on a TCPA platform? Sure, I guess you could. But you wouldn't need TCPA features to do it. 
See the comments above: any OS today could be modified to only run apps that were signed with some special key. You shouldn't blame TCPA for this.

> > In answer to your question, then, for most purposes, there is no signing
> > key that your TPM chip trusts, so the issue is moot.
>
> At the hardware level, yes.

TCPA is a hardware spec. Peter was asking about TCPA, and I gave him the answer. You can hypothesize all the fascist software you want, but you shouldn't blame these fantasies on TCPA.

> At the software one, it probably won't be,
> even in the presence of the above considerations. After you install your
> next Windows version, you will be tightly locked in with whatever M$
> throws at you in their DLL's,

Doesn't Microsoft already sign their system DLLs in NT?

> and as I pointed out, there's absolutely no
> guarantee Linux et al. might well be shut out by extra features, in the
> future. In the end what we get is an architecture, which may not embody
> Pet
Re: Challenge to David Wagner on TCPA
Eric Murray writes: > TCPA (when it isn't turned off) WILL restrict the software that you > can run. Software that has an invalid or missing signature won't be > able to access "sensitive data"[1]. Meaning that unapproved software > won't work. > > [1] TCPAmain_20v1_1a.pdf, section 2.2 We need to look at the text of this in more detail. This is from version 1.1b of the spec: : This section introduces the architectural aspects of a Trusted Platform : that enable the collection and reporting of integrity metrics. : : Among other things, a Trusted Platform enables an entity to determine : the state of the software environment in that platform and to SEAL data : to a particular software environment in that platform. : : The entity deduces whether the state of the computing environment in : that platform is acceptable and performs some transaction with that : platform. If that transaction involves sensitive data that must be : stored on the platform, the entity can ensure that that data is held in : a confidential format unless the state of the computing environment in : that platform is acceptable to the entity. : : To enable this, a Trusted Platform provides information to enable the : entity to deduce the software environment in a Trusted Platform. That : information is reliably measured and reported to the entity. At the same : time, a Trusted Platform provides a means to encrypt cryptographic keys : and to state the software environment that must be in place before the : keys can be decrypted. What this means is that a remote system can query the local TPM and find out what software has been loaded, in order to decide whether to send it some data. It's not that unapproved software "won't work", it's that the remote guy can decide whether to trust it. Also, as stated earlier, data can be sealed such that it can only be unsealed when the same environment is booted. 
This is the part above about encrypting cryptographic keys and making sure the right software environment is in place when they are decrypted.

> Ok, technically it will run but can't access the data,
> but that it a very fine hair to split, and depending on the nature of
> the data that it can't access, it may not be able to run in truth.
>
> If TCPA allows all software to run, it defeats its purpose.
> Therefore Wagner's statement is logically correct.

But no, the TCPA does allow all software to run. Just because a remote system can decide whether to send it some data doesn't mean that software can't run. And just because some data may be inaccessible because it was sealed when another OS was booted, also doesn't mean that software can't run.

I think we agree on the facts, here. All software can run, but the TCPA allows software to prove its hash to remote parties, and to encrypt data such that it can't be decrypted by other software. Would you agree that this is an accurate summary of the functionality, and not misleading? If so, I don't see how you can get from this to saying that some software won't run. You might as well say that encryption means that software can't run, because if I encrypt my files then some other programs may not be able to read them.

Most people, as you may have seen, interpret this part about "software can't run" much more literally. They think it means that software needs a signature in order to be loaded and run. I have been going over and over this on sci.crypt. IMO the facts as stated two paragraphs up are completely different from such a model.

> Yes, the spec says that it can be turned off. At that point you
> can run anything that doesn't need any of the protected data or
> other TCPA services. But, why would a software vendor that wants
> the protection that TCPA provides allow his software to run
> without TCPA as well, abandoning those protections?
That's true; in fact if you ran it earlier under TCPA and sealed some data, you will have to run under TCPA to unseal it later. The question is whether the advantages of running under TCPA (potentially greater security) outweigh the disadvantages (greater potential for loss of data, less flexibility, etc.). > I doubt many would do so, the majority of TCPA-enabled > software will be TCPA-only. Perhaps not at first, but eventually > when there are enough TCPA machines out there. More likely, spiffy > new content and features will be enabled if one has TCPA and is > properly authenticated, disabled otherwise. But as we have seen > time after time, today's spiffy new content is tomorrows > virtual standard. Right, the strongest case will probably be for DRM. You might be able to download all kinds of content if you are running an OS and application that the server (content provider) trusts. People will have a choice of using TCPA and getting this data legally, or avoiding TCPA and trying to find pirated copies as they do today. > This will require the majority of people to run with TCPA turned on > if they want the content. TCPA does
RE: Challenge to David Wagner on TCPA
Peter Trei writes: > I'm going to respond to AARGH!, our new Sternlight, by asking two questions. > > 1. Why can't I control what signing keys the Fritz chip trusts? > > If the point of TCPA is make it so *I* can trust that *my* computer > to run the software *I* have approved, and refuse to run something > which a virus or Trojan has modifed (and this, btw, is the stated > intention of TCPA), then why the hell don't I have full control over > the keys? If I did, the thing might actually work to my benefit. > > The beneficiary of TCPA when I don't have ultimate root control is > not I. It is someone else. That is not an acceptable situation. You might be surprised to learn that under the TCPA, it is not necessary for the TPM (the so-called "Fritz" chip) to trust *any* signing keys! The TCPA basically provides two kinds of functionality: first, it can attest to the software which was booted and loaded. It does this by taking hashes of the software before transferring control to it, and storing those hashes in its internal secure registers. At a later time it can output those hashes, signed by its internal signature key (generated on-chip, with the private key never leaving the chip). The system also holds a cert issued on this internal key (which is called the Endorsement key), and this cert is issued by the TPM manufacturer (also called the TPME). But this functionality does not require storing the TPME key, just the cert it issued. Second, the TCPA provides for secure storage via a "sealing" function. The way this works, a key is generated and used to encrypt a data blob. Buried in the blob can be a hash of the software which was running at the time of the encryption (the same data which can be reported via the attestation function). Then, when the data is decrypted and "unsealed", the hash is compared to that which is in the TPM registers now. 
This can make it so that data which is encrypted when software system X boots can only be decrypted when that same software boots. Again, this functionality does not require trusting anyone's keys. Now, there is an optional function which does use the manufacturer's key, but it is intended only to be used rarely. That is for when you need to transfer your sealed data from one machine to another (either because you have bought a new machine, or because your old one crashed). In this case you go through a complicated procedure that includes encrypting some data to the TPME key (the TPM manufacturer's key) and sending it to the manufacturer, who massages the data such that it can be loaded into the new machine's TPM chip. So this function does require pre-loading a manufacturer key into the TPM, but first, it is optional, and second, it frankly appears to be so cumbersome that it is questionable whether manufacturers will want to get involved with it. OTOH it is apparently the only way to recover if your system crashes. This may indicate that TCPA is not feasible, because there is too much risk of losing locked data on a machine crash, and the recovery procedure is too cumbersome. That would be a valid basis on which to criticize TCPA, but it doesn't change the fact that many of the other claims which have been made about it are not correct. In answer to your question, then, for most purposes, there is no signing key that your TPM chip trusts, so the issue is moot. I suggest that you go ask the people who misled you about TCPA what their ulterior motives were, since you seem predisposed to ask such questions. > 2. It's really curious that Mr. AARGH! has shown up simultaneously > on the lists and on sci.crypt, with the single brief of supporting TCPA. > > While I totally support his or her right to post anonymously, I can only > speculate that anonymity is being used to disguise some vested > interest in supporting TCPA. In other words, I infer that Mr. AARGH! 
> is a TCPA insider, who is embarassed to reveal himself in public. > > So my question is: What is your reason for shielding your identity? > You do so at the cost of people assuming the worst about your > motives. The point of being anonymous is that there is no persistent identity to attribute motives to! Of course I have departed somewhat from this rule in the recent discussion, using a single exit remailer and maintaining continuity of persona over a series of messages. But feel free to make whatever assumptions you like about my motives. All I ask is that you respond to my facts. > Peter Trei > > PS: Speculating about the most tyrannical uses to which > a technology can be put has generally proved a winning > proposition. Of course, speculation is entirely appropriate - when labeled as such! But David Wagner gave the impression that he was talking about facts when he said, "The world is moving toward closed digital rights management systems where you may need approval to run programs," says David Wagner, an assistant professor of computer science at the Univers
Re: Challenge to David Wagner on TCPA
Peter Trei writes: > AARG!, our anonymous Pangloss, is strictly correct - Wagner should have > said "could" rather than "would". So TCPA and Palladium "could" restrict which software you could run. They aren't designed to do so, but the design could be changed and restrictions added. But you could make the same charge about any software! The Mac OS could be changed to restrict what software you can run. Does that mean that we should all stop using Macs, and attack them for something that they are not doing and haven't said they would do? The point is, we should look critically at proposals like TCPA and Palladium, but our criticisms should be based in fact and not fantasy. Saying that they could do something or they might do something is a much weaker argument than saying that they will have certain bad effects. The point of the current discussion is to improve the quality of the criticism which has been directed at these proposals. Raising a bunch of red herrings is not only a shameful and dishonest way to conduct the dispute, it could backfire if people come to realize that the system does not actually behave as the critics have claimed. Peter Fairbrother made a similar point: > The wise general will plan his defences according to his opponent's > capabilities, not according to his opponent's avowed intentions. Fine, but note that at least TCPA as currently designed does not have this specific capability of keeping some software from booting and running. Granted, the system could be changed to allow only certain kinds of software to boot, just as similar changes could be made to any OS or boot loader in existence. Back to Peter Trei (and again, Peter Fairbrother echoed his concern): > However, TCPA and Palladium fall into a class of technologies with a > tremendous potential for abuse. 
> Since the trust model is directed against
> the computer's owner (he can't sign code as trusted, or reliably control
> which signing keys are trusted), he has ceded ultimate control of what
> he can and can't do with his computer to another.

Under TCPA, he can do everything with his computer that he can do today, even if the system is not turned off. What he can't do is to use the new TCPA features, like attestation or sealed storage, in such a way as to violate the security design of those systems (assuming of course that the design is sound and well implemented).

This is no more a matter of turning over control of his computer than is using an X.509 certificate issued by a CA to prove his identity. He can't violate the security of the X.509 cert. He isn't forced to use it, but if he does, he can't forge a different identity. This is analogous to how the attestation features of TCPA work. He doesn't have to use it, but if he wants to prove what software he booted, he doesn't have the ability to forge the data and lie about it.

> Sure, TCPA can be switched off - until that switch is disabled. It
> could potentially be permenantly disabled by a BIOS update, a
> security patch, a commercial program which carries signed
> disabling code as a Trojan, or over the net through a backdoor or
> vulnerability in any networked software. Or by Congress
> which could make running a TCPA capable machine with TCPA
> turned off illegal.

This is why the original "Challenge" asked for specific features in the TCPA spec which could provide this claimed functionality. Even if TCPA is somehow kept turned on, it will not stop any software from booting. Now, you might say that they can then further change the TCPA so that it *does* stop uncertified software from booting. Sure, they could. But you know what? They could do that without the TCPA hardware. They could put in a BIOS that had a cert in it and only signed OS's could boot.
That's not what TCPA does, and it's nothing like how it works. A system like this would be a very restricted machine and you might justifiably complain if the manufacturer tried to make you buy one. But why criticize TCPA for this very different functionality, which doesn't use the TCPA hardware, the TCPA design, and the TCPA API? > With TCPA, I now have to trust that a powerful third party, over > which I have no control, and which does not necessarily have > my interests are heart, will not abuse it's power. I don't > want to have to do that. How could this be true, when there are no features in the TCPA design to allow this powerful third party to restrict your use of your computer in any way? (By the way, does anyone know why these messages are appearing on cypherpunks but not on the [EMAIL PROTECTED] mailing list, when the responses to them show up in both places? Does the moderator of the cryptography list object to anonymous messages? Or does he think the quality of them is so bad that they don't deserve to appear? Or perhaps it is a technical problem, that the anonymous email can't be delivered to his address? If someone replies to this message, please include this fin
Re: Challenge to David Wagner on TCPA
James Donald writes: > TCPA and Palladium give someone else super root privileges on my > machine, and TAKE THOSE PRIVILEGES AWAY FROM ME. All claims that > they will not do this are not claims that they will not do this, > but are merely claims that the possessor of super root privilege > on my machine is going to be a very very nice guy, unlike my > wickedly piratical and incompetently trojan horse running self. What would be an example of a privilege that you fear would be taken away from you with TCPA? It will boot any software that you want, and can provide a signed attestation of a hash of what you booted. Are you upset because you can't force the chip to lie about what you booted? Of course they could have designed the chip to allow you to do that, but then the functionality would be useless to everyone; a chip which could be made to lie about its measurements might as well not exist, right?
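The attestation being described here, a signed hash of what was actually booted that the owner cannot forge, can be modelled in a few lines. This is a toy sketch with hypothetical names (HMAC stands in for the chip's internal signature key, which never leaves the hardware):

```python
import hashlib, hmac, os

class ToyTPM:
    def __init__(self):
        self._key = os.urandom(32)               # on-chip secret
        self.pcr = hashlib.sha256(b"").digest()  # measurement register

    def extend(self, code: bytes):
        """The chip measures whatever actually booted; software can only
        add measurements, never reset or overwrite the register."""
        self.pcr = hashlib.sha256(self.pcr + hashlib.sha256(code).digest()).digest()

    def attest(self, nonce: bytes) -> bytes:
        """A 'quote': the current measurement signed by the chip."""
        return hmac.new(self._key, self.pcr + nonce, hashlib.sha256).digest()

    def check(self, expected_pcr: bytes, nonce: bytes, quote: bytes) -> bool:
        # What a verifier with the (certified) key effectively checks.
        want = hmac.new(self._key, expected_pcr + nonce, hashlib.sha256).digest()
        return hmac.compare_digest(want, quote)

tpm = ToyTPM()
tpm.extend(b"the OS I really booted")
nonce = os.urandom(16)
quote = tpm.attest(nonce)

# The quote verifies against the true measurement...
assert tpm.check(tpm.pcr, nonce, quote)
# ...but there is no way to make the chip vouch for software that was not
# booted, which is the whole point: a chip that could lie about its
# measurements might as well not exist.
fake = hashlib.sha256(b"approved OS").digest()
assert not tpm.check(fake, nonce, quote)
```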
Re: Challenge to David Wagner on TCPA
James Donald wrote:

> On 29 Jul 2002 at 15:35, AARG! Anonymous wrote:
> > both Palladium and TCPA deny that they are designed to restrict
> > what applications you run. The TPM FAQ at
> > http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf reads
>
> They deny that intent, but physically they have that capability.

Maybe, but the point is whether the architectural spec includes that capability. After all, any OS could restrict what applications you run; you don't need special hardware for that. The question is whether restrictions on software are part of the design spec. You should be able to point to something in the TCPA spec that would restrict or limit software, if that is the case.

Or do you think that when David Wagner said, "Both Palladium and TCPA incorporate features that would restrict what applications you could run," he meant "that *could* restrict what applications you run"? They *could* impose restrictions, just like any OS could impose restrictions. But to say that they *would* impose restrictions is a stronger statement, don't you think? If you claim that an architecture would impose restrictions, shouldn't you be able to point to somewhere in the design document where it explains how this would occur?

There's an enormous amount of information in the TCPA spec about how to measure the code which is going to be run, and how to report those measurement results so third parties can know what code is running. But there's not one word about preventing software from running based on the measurements.
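The measure-and-report model described above can be illustrated with a PCR-style "extend" operation. This is a simplification (real TCPA uses SHA-1 PCRs and a much richer command set; the function name here is invented), but it shows the essential asymmetry: the register can only be extended, never set directly, and nothing in the chain ever blocks a boot stage from running.

```python
import hashlib

def extend(pcr, component):
    # PCR-style extend: the register becomes a hash chain over everything
    # measured so far. It cannot be set to an arbitrary value, only
    # extended, so a given value is evidence of a particular boot history.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Boot sequence: each stage measures the next before handing off control.
pcr = bytes(32)                        # register is zeroed on reset
for stage in [b"bios", b"bootloader", b"any-kernel-you-like"]:
    pcr = extend(pcr, stage)
    # Note what is absent here: no policy check, no refusal to run.
    # The measurement is simply recorded for later reporting.

# A modified boot still runs -- it just yields a different, unforgeable
# measurement that a third party can later distinguish.
pcr2 = bytes(32)
for stage in [b"bios", b"bootloader", b"modified-kernel"]:
    pcr2 = extend(pcr2, stage)
assert pcr != pcr2
```

This is the structural basis of the "would vs. could" distinction argued above: the spec's machinery records and reports; any gating on the result would have to be added by software outside the spec.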
Challenge to David Wagner on TCPA
Declan McCullagh writes at http://zdnet.com.com/2100-1107-946890.html:

"The world is moving toward closed digital rights management systems where you may need approval to run programs," says David Wagner, an assistant professor of computer science at the University of California at Berkeley. "Both Palladium and TCPA incorporate features that would restrict what applications you could run."

But both Palladium and TCPA deny that they are designed to restrict what applications you run. The TPM FAQ at http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf reads, in answer #1:

: The TPM can store measurements of components of the user's system, but
: the TPM is a passive device and doesn't decide what software can or
: can't run on a user's system.

An apparently legitimate but leaked Palladium White Paper at http://www.neowin.net/staff/users/Voodoo/Palladium_White_Paper_final.pdf says, on the page shown as number 2:

: A Palladium-enhanced computer must continue to run any existing
: applications and device drivers.

and goes on,

: In addition, Palladium does not change what can be programmed or run
: on the computing platform; it simply changes what can be believed about
: programs, and the durability of those beliefs.

Of course, white papers and FAQs are not technical documents and may not be completely accurate. To really answer the question, we need to look at the spec. Unfortunately there is no Palladium spec publicly available yet, but we do have one for TCPA, at http://www.trustedcomputing.org/docs/main%20v1_1b.pdf. Can you find anything in this spec that would do what David Wagner says above, restrict what applications you could run? Many hours of studying this spec have turned up no such feature.

So here is the challenge to David Wagner, a well known and justifiably respected computer security expert: find language in the TCPA spec to back up your claim above, that TCPA will restrict what applications you can run. 
Either that, or withdraw the claim, and try to get Declan McCullagh to issue a correction. (Good luck with that!) And if you want, you can get Ross Anderson to help you. His reports are full of claims about Palladium and TCPA which seem equally unsupported by the facts. When pressed, he claims secret knowledge. Hopefully David Wagner will have too much self-respect to fall back on such a convenient excuse.
Re: Hollywood Hackers
On Mon, 29 Jul 2002 14:25:37 -0400 (EDT), you wrote:

> Congressman Wants to Let Entertainment Industry Get Into Your Computer
>
> Rep. Howard L. Berman, D-Calif., formally proposed
> legislation that would give the industry unprecedented new
> authority to secretly hack into consumers' computers or knock
> them off-line entirely if they are caught downloading
> copyrighted material.
>
> I've been reading things like this for a while but I wonder how practical
> such an attack would be. They won't be able to hack into computers with
> reasonable firewalls and while they might try DOS attacks, upstream
> connectivity suppliers might object. Under current P2P software they may
> be able to do a little hacking but the opposition will rewrite the
> software to block. DOS attacks and phony file uploads can be defeated
> with digital signatures and reputation systems (including third party
> certification). Another problem -- Napster had 55 million customers.
> That's a lot of people to attack. I don't think Hollywood has the troops.

I like this scenario: Adam places his copyrighted content on his web site. His friend, Eve, violates his copyright and places Adam's copyrighted content on her site. Hollywood downloads the copyright-infringing content from Eve's site. Eve confesses that Hollywood did so, in a good faith effort to repent of her copyright infringement. Now Adam hacks Hollywood, as authorized by the proposed law. Lawsuits all around.
Re: DRM will not be legislated
Read a great article on Slashdot about the recent DRM workshop, http://slashdot.org/article.pl?sid=02/07/18/1219257, by "al3x":

As the talks began, I was brimming with the enthusiasm and anger of an "activist," overjoyed at shaking hands with the legendary Richard Stallman, thrilled with the turnout of the New Yorkers for Fair Use. My enthusiasm and solidarity, however, were to be short-lived. Comments from the RIAA's Mitch Glazier that there is "balance in the Digital Millennium Copyright Act" (DMCA) drew cries and disgusted laughter from the peanut gallery, who at that point had already been informed that any public comments could be submitted online.

Even those in support of Fair Use and similar ideas began to be frustrated with the constant background commentary and ill-conceived outbursts of the New Yorkers for Fair Use and, to my dismay, Richard Stallman, who proved to be as socially awkward as his critics and fans alike report. Perhaps such behavior is entertaining in a Linux User Group meeting or academic debate, but fellow activists hissed at Stallman and the New Yorkers, suggesting that their constant interjections weren't helping.

And indeed, as discussion progressed, I felt that my representatives were not Stallman and the NY Fair Use crowd, nor Graham Spencer from DigitalConsumer.org, whose three comments were timid and without impact. No, I found my voice through Rob Reid, Founder and Chairman of Listen.com, whose realistic thinking and positive suggestions were echoed by Johnathan Potter, Executive Director of DiMA, and backed up on the technical front by Tom Patton of Phillips. Reid argued that piracy was simply a reality of the content industry landscape, and that it was the job of content producers and the tech industry to offer consumers something "better than free." "We charge $10 a month for our service, and the competition is beating us by $10 a month. 
We've got to give customers a better experience than the P2P file-sharing networks," Reid suggested. As the rare individual who gave up piracy when I gave up RIAA music and MPAA movies, opting instead for a legal and consumer-friendly Emusic.com account, I found myself clapping in approval.

Reading this and the other comments on the meeting, a few facts come through: that the content companies are much more worried about closing the "analog hole" than mandating traditional DRM software systems; that the prospects for any legislation on these issues are uncertain given the tremendous consumer opposition; and that extremist consumer activists are hurting their cause by conjuring up farfetched scenarios that expose them as kooks.

(That last point certainly applies to those here who continue to predict that the government will take away general purpose computing capabilities, allow only "approved" software to run, and ban the use of Perl and Python without a license. Try visiting the real world sometime!)

It is also good to see that the voices of sanity are being more and more recognized, like the Listen.com executive above. The cyber liberty community must come out strongly against piracy of content and support experiments which encourage people to pay for what they download. It is no longer tenable to claim that intellectual property is obsolete or evil, or to point to the complaints of a few musicians as justification for ignoring the creative rights of an entire industry.

There is still a very good chance that we can have a future where people will happily pay for legal content instead of making do with bootleg pirate recordings, and that this can happen without legislation and without hurting consumer choice. Such an outcome would be the best for all concerned: for consumers, for tech companies, for artists and for content licensees. Anything else will be a disaster for one or more of these groups, which will ultimately hurt everyone. 
Let's hope the EFF is listening to the kinds of clear-sighted commentary quoted above.
Re: DRM will not be legislated
David Wagner wrote:

> You argue that it would be irrational for content companies to push to
> have DRM mandated. This is something we could debate at length, but we
> don't need to: rational or not, we already have evidence that content
> companies have pushed, and *are* pushing, for some kind of mandated DRM.
>
> The Hollings bill was interesting not for its success or failure, but
> for what it reveals about the content companies' agenda. It seems
> plausible that its supporters will be back next year with a
> "compromise" bill -- plausible enough that we'd better be prepared for
> such a circumstance.

The CBDTPA, available in text form at http://www.politechbot.com/docs/cbdtpa/hollings.s2048.032102.html, does not explicitly call for legislating DRM. In fact the bill is not very clear about what exactly it does require. Generally it calls for standards that satisfy subsections (d) and (e) of section 3. But (d) is just a list of generic good features:

"(A) reliable; (B) renewable; (C) resistant to attack; (D) readily implemented; (E) modular; (F) applicable in multiple technology platforms; (G) extensible; (H) upgradable; (I) not cost prohibitive; and (2) any software portion of such standards is based on open source code."

There's nothing in there about DRM or the analog hole specifically. In fact the only phrase in this list which would not be applicable to any generic software project is "resistant to attack". And (e) (misprinted as (c) in the document) is a consumer protection provision, calling for support of fair use and home taping of over-the-air broadcasts.

Neither (d) nor (e) describes what exactly the CBDTPA is supposed to do. To understand what the technical standards are supposed to protect, we have to look at section 2 of the bill, "Findings", which lays out the piracy problem as Hollings sees it and calls for government regulation and mandates for solutions. But even here, the wording is ambiguous and does not clearly call for mandating DRM. 
The structure of this section consists of a list of statements, followed by the phrase, "A solution to this problem is technologically feasible but will require government action, including a mandate to ensure its swift and ubiquitous adoption." This phrase appears at points 12, 15 and 19.

The points leading up to #12 refer to the problems of over-the-air broadcasts being unencrypted, in contrast with pay cable and satellite systems. The points leading up to #15 talk about closing the analog hole. And the points leading up to #19 discuss file sharing and piracy. DRM is mentioned in point 5, in terms of it not working well, then the concept is discussed again in points 20-23, which are the last. None of these comments are followed by the magic phrase about requiring a government mandate.

So if you look closely at how these points are laid out, and which ones get the call for government action, it appears that the main concerns which the CBDTPA is intended to address are (1) over-the-air broadcasts (via the BPDG standard); (2) closing the analog hole (via HDCP and similar); and (3) piracy via file sharing and P2P systems, which the media companies would undoubtedly like to see shut down but where they are unlikely to succeed.

Although DRM is mentioned, there is no clear call to mandate support for DRM technology, particularly anything similar to Palladium or the TCPA, which is what we have been discussing. As pointed out earlier, this is logical, as legislating the TCPA would be both massively infeasible and also ultimately unhelpful to the goals of the content companies. They know they won't be able to use TCPA to shut down file sharing. The only way they could approach it using such a tool would be to have a law requiring a government stamp of approval on every piece of software that runs. Surely it will be clear to all reasonable men what a non-starter that idea is.
Re: DRM will not be legislated
David Wagner wrote:

> Anonymous wrote:
> > Legislation of DRM is not in the cards, [...]
>
> Care to support this claim? (the Hollings bill and the DMCA requirement
> for Macrovision in every VCR come to mind as evidence to the contrary)

The line you quoted was the summary from a message which described the detailed reasoning that supported the claim. To reiterate and lay out the points explicitly:

- Legislating DRM would be extremely expensive in the current environment, as it would require phasing out all computers presently in use. This provides a huge practical burden and barrier for any legislation along these lines.

- Some have opposed voluntary DRM because they believe that it would reduce the barrier above. Once DRM systems are voluntarily installed in a substantial number of systems, it would be a relatively small step to mandate them in all systems.

- But this is false reasoning; if DRM is so successful as to be present in a substantial number of systems, it is not necessary to legislate it.

- Further, even if it is legislated, that will not stop piracy. No practical DRM system will prevent people from running arbitrary 3rd party software (despite absurd arguments by fanatics that the government seeks to remove Turing complete computing capabilities from the population).

- Neither the content nor technology companies have incentive to support legislation, as they still must convince people that paying for content is superior to pirating it. Legislating DRM will not help them in this battle, as piracy will still be an alternative.

- What would help them legislatively is some kind of enforced watermarking technology, so that the initial "ripping" of content is impossible (this also requires closing the analog hole). Only by intervening at this first step can they hope to break the piracy chain, and this is the real purpose of the Hollings bill. See also the recent work by the BPDG. But this is not DRM in the sense we are discussing it here. 
Those were the points made earlier in support of the summary statement quoted above.

As for the Hollings bill in particular, the most notable aspect of it was the tremendous opposition from virtually every sector of the economy. The Hollings bill was not just a failure, it was a massive, DOA, stinking heap of failure which had not even the slightest chance of success. If anything, the failure of the Hollings bill fully supports the thesis that legislation of DRM is not going to happen.

As for Macrovision, this is an example of "watermarking" technology, and as mentioned above, it does make sense to legislate along these lines (although it is questionable whether it can work in the long run - Macrovision defeaters are widely available). It represents an attempt to close the analog hole.

The point is that this is not a simple-minded or unreflective analysis. We are looking specifically at the kind of DRM enabled by the TCPA. This means the ability to run content viewing software that imposes DRM rules which might limit the number of views, or require pay per view, or require data to be deleted if it is copied elsewhere, etc. The point of TCPA and Palladium is for the remote content provider to be assured that the software it is talking to across the net is a trusted piece of software which will enforce the rules. It is this kind of DRM to which the analysis above is directed.

This DRM does not prevent piracy using any of the techniques available today, or via exploiting bugs and flaws in future technology. It does not and can not prevent people from running file sharing programs and making pirated content available on the Internet (at least without crippling computers to the point where necessary business functionality is lost, which would mean sending the country into a deep depression and making it an obsolete competitor on world markets, i.e. it won't happen). 
This kind of DRM can nevertheless succeed on a voluntary basis by providing good quality for good value, in conjunction with technological and legal attacks on P2P systems such as are in their infancy now. All of these arguments have been made in the past few weeks on this list. Hopefully reiterating them in one place will be helpful to those who have overlooked them in the past.
Re: Ross's TCPA paper
Seth Schoen writes:

> The Palladium security model and features are different from Unix, but
> you can imagine by rough analogy a Unix implementation on a system
> with protected memory. Every process can have its own virtual memory
> space, read and write files, interact with the user, etc. But
> normally a program can't read another program's memory without the
> other program's permission.
>
> The analogy starts to break down, though: in Unix a process running as
> the superuser or code running in kernel mode may be able to ignore
> memory protection and monitor or control an arbitrary process. In
> Palladium, if a system is started in a trusted mode, not even the OS
> kernel will have access to all system resources.

Wouldn't it be more accurate to say that a "trusted" OS will not peek at system resources that it is not supposed to? After all, since the OS loads the application, it has full power to molest that application in any way. Any embedded keys or certs in the app could be changed by the OS. There is no way for an application to protect itself against the OS.

And there is no need; a trusted OS by definition does not interfere with the application's use of confidential data. It does not allow other applications to get access to that data. And it provides no back doors for "root" or the system owner or device drivers to get access to the application data, either.

At http://vitanuova.loyalty.org/2002-07-03.html you provide more information about your meeting with Microsoft. It's an interesting writeup, but the part about the system somehow protecting the app from the OS can't be right. Apps don't have that kind of structural integrity. A chip in the system cannot protect them from an OS virtualizing that chip. What the chip does do is to let *remote* applications verify that the OS is running in trusted mode. 
But local apps can never achieve that degree of certainty; they are at the mercy of the OS, which can twiddle their bits at will and make them "believe" anything it wants. Of course a "trusted" OS would never behave in such an uncouth manner.

> That limitation
> doesn't stop you from writing your own application software or scripts.

Absolutely. The fantasies which have been floating here of filters preventing people from typing virus-triggering command lines are utterly absurd. What are people trying to prove by raising such nonsensical propositions? Palladium needs no such capability.

> Interestingly, Palladium and TCPA both allow you to modify any part of
> the software installed on your system (though not your hardware). The
> worst thing which can happen to you as a result is that the system
> will know that it is no longer "trusted", or will otherwise be able to
> recognize or take account of the changes you made. In principle,
> there's nothing wrong with running "untrusted"; particular applications
> or services which relied on a trusted feature, including sealed
> storage (see below), may fail to operate.

Right, and you can boot untrusted OS's as well. Recently there was discussion here of HP making a trusted form of Linux that would work with the TCPA hardware. So you will have options in both the closed source and open source worlds to boot trusted OS's, or you can boot untrusted ones, like old versions of Windows. The user will have more choice, not less.

> Palladium and TCPA both allow an application to make use of
> hardware-based encryption and decryption in a scheme called "sealed
> storage" which uses a hash of the running system's software as part of
> the key. One result of this is that, if you change relevant parts of
> the software, the hardware will no longer be able to perform the
> decryption step. To oversimplify slightly, you could imagine that the
> hardware uses the currently-running OS kernel's hash as part of this
> key. 
> Then, if you change the kernel in any way (which you're
> permitted to do), applications running under it will find that they're
> no longer able to decrypt "sealed" files which were created under the
> original kernel. Rebooting with the original kernel will restore the
> ability to decrypt, because the hash will again match the original
> kernel's hash.

Yes, your web page goes into somewhat more detail about how this would work. This way a program can run under a secure OS and store sensitive data on the disk, such that booting into another OS will then make it impossible to decrypt that data.

Some concerns have been raised here about upgrades. Did Microsoft discuss how that was planned to work, migrating from one version of a secure OS to another? Presumably they have different hashes, but it is necessary for the new one to be able to unseal data sealed by the old one. One obvious solution would be for the new OS to present a cert to the chip which basically said that its OS hash should be treated as an "alias" of the older OS's hash. So the chip would unseal using the old OS hash even when the new OS wa
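The sealing behavior discussed above can be sketched as key derivation from the measured OS hash. Everything here is illustrative: the master secret, function names, and the toy XOR cipher are stand-ins for the TPM's real sealing operation. The point is only that the unseal key depends on what is currently booted, so data sealed under one kernel is unreadable under another.

```python
import hashlib

MASTER = b"secret-burned-into-the-chip"   # hypothetical chip-internal value

def seal_key(os_hash):
    # The sealing key mixes a chip-internal secret with the hash of the
    # currently running OS, so it is only derivable while that OS runs.
    return hashlib.sha256(MASTER + os_hash).digest()

def xor_crypt(key, data):
    # Toy symmetric cipher for illustration only -- not real cryptography.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

old_os = hashlib.sha256(b"kernel-v1").digest()
new_os = hashlib.sha256(b"kernel-v2").digest()

sealed = xor_crypt(seal_key(old_os), b"sensitive data")

# Booted under the original kernel: unsealing succeeds.
assert xor_crypt(seal_key(old_os), sealed) == b"sensitive data"
# Booted under a changed kernel: the derived key differs, so the
# "decryption" yields garbage rather than the sealed data.
assert xor_crypt(seal_key(new_os), sealed) != b"sensitive data"
```

In this model, the upgrade problem raised above is exactly the question of when the chip should treat `new_os` as equivalent to `old_os` for key derivation, which is why some certified aliasing mechanism seems necessary.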
Re: 2 Challenge Gun Cases, Citing Bush Policy
> and being able to kill each and every one from behind.
> Don't expose yourselves -- always shoot from behind.

But know this one thing: aim for the head, and use fragmenting/hydrashock ammo. Exploded heads seem to disturb others the most.
Re: NYT: Techies Now Respect Government
What really changed in the Valley is that the best are gone. There is always a very small number of real contributors, I'd say one in several hundred, who shape the whole environment and dictate the overall mood.

This was best seen at Xerox PARC, where sleazy Gilman Louie was selling fatherland defense on May 16, with the mannerisms and vocabulary of a polished used-car salesman. He was preaching to an auditorium packed with white middle managers and young aspiring nobodies, extracting applause and laughs at all the right places. No one threw up, and at the end he didn't even have to say "MEIN GOTT I CAN WALK !!" It was implied.

He said, after describing his enlightenment that working for the CIA is good after all, in the best tradition of government commercials from the '50s, that VCs were always patriotic. He also said that they received 500 business plans in the few weeks after the demolition of the WTC, and that the government needs better tools to track Arab student pilots.

This is the new Silicon Valley, future grounds of the Homeland Security Industries, where thousands of engineers will proudly churn out surveillance products, dissent-detecting chips and network-tapping devices.
Re: How not to defend yourself against hacking charges
Another happy customer of the Jim Bell Pro Bono Self-Representation HappyFunPack(TM)? Order now and get 6 feet of rope free! What you do with it is of course your business...

-Original Message-
http://theregus.com/content/55/24357.html

Accused eBay hacker Jerome Heckenkamp is back behind bars tonight, after his first solo court appearance in front of his trial judge took an odd turn.

During what was to be a routine proceeding to set future court dates, Heckenkamp challenged the indictment against him on the grounds that it spells his name, Jerome T. Heckenkamp, in all capital letters, while he spells it with the first letter capitalized and subsequent letters in lower case.

Last week, Heckenkamp, 22, fired attorney Jennifer Granick and co-counsel Marjorie Allard in order to personally defend himself against two federal grand jury indictments charging that he cracked computers at eBay, Lycos, Exodus Communications, and other companies in 1999. It was the second time Heckenkamp fired his lawyers -- in January, he had a federal magistrate appoint him as his own counsel, only to change his mind the same day.

At Monday's appearance, Judge James Ware seemed initially perplexed by Heckenkamp's challenge, and spent some time explaining the nature of the proceedings. Finally, he advised Heckenkamp to take it up in front of a jury when he goes to trial. "I cannot help but comment that you have substituted out a capable attorney," the judge added.

Heckenkamp went on to demand that he be immediately allowed to take the stand and testify, and was again rebuffed by Ware, who noted that the appearance was not a hearing or a trial. The computer whiz then asked the court to identify the plaintiff in the case. Ware explained that the United States was the plaintiff, and was represented by assistant U.S. attorney Ross Nadel. Heckenkamp said he wanted to subpoena Nadel's "client" to appear in court, and Ware asked him who, exactly, he wanted to bring into the courtroom. 
When Heckenkamp replied, "The United States of America," Ware ordered him taken into custody.

"The comments that you are making to the court lead me to suspect that either you are playing games with the court, or you're experiencing a serious lack of judgment," said Ware. The judge added that he was no longer satisfied that Heckenkamp would make his future court appearances. Heckenkamp had been free on $50,000 bail, and living under electronic monitoring -- prohibited by court order from using cell phones, the Internet, computers, video games and fax machines.

Before two deputy U.S. marshals hauled Heckenkamp away, he threatened legal action against the judge. "I will hold you personally liable," he said. "I will seek damages for every hour that I'm in custody."

In a telephone interview after the appearance, Heckenkamp's father, Thomas Heckenkamp, said his son is only trying to protect his rights. "They've overstepped their bounds, and they're keeping him from defending himself," he said.

Heckenkamp's next court appearance in San Jose is scheduled for April 8th. Trial in a related case in San Diego is set for April 23rd.