Re: full-disk subversion standards released
Peter Gutmann wrote:
> (Does anyone know of any studies that have been done to find out how
> prevalent this is for servers? I can see why you'd need to do it for
> software-only implementations in order to survive restarts, but what about
> hardware-assisted TLS? Is there anything like a study showing that for a
> random sampling of x web servers, y stored the keys unprotected? Are you
> counting things like Windows' DPAPI, which any IIS setup should use, as
> "protected" or "unprotected"?)

We recently had some discussion about this inside Sun, not just for TLS but for IKE as well. Until very recently our IKE daemon required the PKCS#11 PIN to be on disk (readable only by root) even if you were using sensitive and non-extractable keys in a hardware keystore. We changed that to provide an admin command to interactively load the key. However, we know that this won't actually be used on the server side in many cases, and not in a cluster (the Solaris/OpenSolaris IKE and IPsec are cluster capable). For Web servers the situation was similar: either the naked private key was on disk or the PKCS#11 PIN that allowed access to it was.

>> I solicited information here about crypto accelerators with onboard
>> persistent key memory ("secure key storage") about two years ago and got
>> basically no responses except pointers to the same old, discontinued or
>> obsolete products I was trying to replace.
>
> I was hoping someone else would leap in about now and question this, but I
> guess I'll have to do it... maybe we have a different definition of what's
> required here, but AFAIK there's an awful lot of this kind of hardware
> floating around out there, admittedly it's all built around older crypto
> devices like Broadcom 582x's and Cavium's Nitrox (because there hasn't been
> any real need to come up with replacements) but I didn't think there'd be
> much problem with finding the necessary hardware, unless you've got some
> particular requirement that rules a lot of it out.
The Sun CA-6000 card I just pointed to in my other email is such a card; it uses a Broadcom 582x. -- Darren J Moffat - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
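The tradeoff Darren describes (a PIN on disk survives unattended restarts; an interactive admin command keeps the PIN off disk but needs a human at every service start) can be sketched in a few lines. This is a stdlib-only illustration, not Sun's actual implementation; the function name and the file-permission policy are my own assumptions:

```python
import getpass
import os
import stat

def load_pkcs11_pin(pin_file=None):
    """Obtain the PKCS#11 PIN either from a root-only file on disk
    (survives unattended restarts, but the secret is at rest on the
    server) or interactively from an administrator (nothing secret on
    disk, but someone must be present at every service start)."""
    if pin_file is not None:
        # On-disk mode: refuse group/world-readable files, a rough
        # analogue of the "readable only by root" requirement.
        mode = os.stat(pin_file).st_mode
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            raise PermissionError("PIN file must not be group/world readable")
        with open(pin_file) as f:
            return f.read().strip()
    # Interactive mode: prompt without echoing, as an admin command would.
    return getpass.getpass("PKCS#11 PIN: ")
```

Neither mode helps much in a cluster, where every node must be able to restart unattended, which is presumably why the interactive command sees little server-side use.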
Re: full-disk subversion standards released
Thor Lancelot Simon wrote:
> To the extent of my knowledge there are currently _no_ generally available,
> general-purpose crypto accelerator chip-level products with onboard key
> storage or key wrapping support, with the exception of parts first sold more
> than 5 years ago and being shipped now from old stock.

The CA-6000 supports onboard key storage and key wrapping. It even supports the NIST AES Key Wrap algorithm. This card is certainly newer than 5 years old; in fact when we first released it we had some deployment issues because we had created a PCIe-only card and several customers wanted to put one in machines that didn't have PCIe capability. -- Darren J Moffat
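The point of onboard key wrapping is that only an encrypted blob of the private key ever touches the disk, while the key-encryption key (KEK) never leaves the card. The following stdlib-only sketch shows that wrap/unwrap pattern; note it substitutes HMAC-SHA-256 in counter mode as a stand-in cipher, whereas the card itself implements the NIST AES Key Wrap algorithm (RFC 3394):

```python
import hashlib
import hmac
import os

def _subkey(kek: bytes, label: bytes) -> bytes:
    # Derive separate encryption and MAC keys from the KEK.
    return hmac.new(kek, label, hashlib.sha256).digest()

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in stream cipher: HMAC-SHA-256 in counter mode. Real hardware
    # would use the NIST AES Key Wrap algorithm instead.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_key(kek: bytes, target_key: bytes) -> bytes:
    """Encrypt-then-MAC the target key under the KEK. Only this opaque
    blob is ever written to disk; the KEK stays inside the device."""
    nonce = os.urandom(16)
    stream = _keystream(_subkey(kek, b"enc"), nonce, len(target_key))
    ct = bytes(a ^ b for a, b in zip(target_key, stream))
    tag = hmac.new(_subkey(kek, b"mac"), nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap_key(kek: bytes, blob: bytes) -> bytes:
    """Verify the MAC, then recover the target key. Fails closed if the
    stored blob has been tampered with."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(_subkey(kek, b"mac"), nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("wrapped key failed integrity check")
    stream = _keystream(_subkey(kek, b"enc"), nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

The security claim rests entirely on where the KEK lives: in a software-only deployment it is just another key on disk, whereas in hardware like the cards under discussion it is confined to the device.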
Re: full-disk subversion standards released
Thor Lancelot Simon wrote:
> No, no there's not. In fact, I solicited information here about crypto
> accelerators with onboard persistent key memory ("secure key storage") about
> two years ago and got basically no responses except pointers to the same
> old, discontinued or obsolete products I was trying to replace.

I wouldn't normally play marketeer, but since you asked: did you look at this product? Either way I'd be interested in your view on it. http://www.sun.com/products/networking/sslaccel/suncryptoaccel6000/index.xml Please ignore the "sslaccel" in the URL; this card doesn't know anything about SSL, it is a pure crypto accelerator and keystore with a FIPS 140-2 Level certification. Supported on Solaris, OpenSolaris, RHEL 5 and SuSE 10. It has the ability to have centralised key management and shared keystores (within and across machines). It even has Elliptic Curve support available. -- Darren J Moffat
Re: full-disk subversion standards released
On Sun, Mar 15, 2009 at 12:26:39AM +1300, Peter Gutmann wrote: > > I was hoping someone else would leap in about now and question this, but I > guess I'll have to do it... maybe we have a different definition of what's > required here, but AFAIK there's an awful lot of this kind of hardware > floating around out there, admittedly it's all built around older crypto > devices like Broadcom 582x's and Cavium's Nitrox (because there hasn't been > any real need to come up with replacements) but I didn't think there'd be much > problem with finding the necessary hardware, unless you've got some particular > requirement that rules a lot of it out. Nitrox doesn't have onboard key memory. Cavium's FIPS140 certified Nitrox board-level solutions include a smartcard and a bunch of additional hardware and software which implement (among other things) secure key storage -- but these are a world apart from the run of the mill Nitrox parts one finds embedded in all kinds of commonplace devices. They also provide an API which is tailored for FIPS140 compliance: good if you need it, far from ideal for the common case for web servers, and very different from the standard set of tools one gets for the bare Nitrox platform. There are of course similar board-level solutions using BCM582x as the crypto core. But in terms of cost and complexity I might as well just use custom hardware -- I'd probably come out ahead. And you can't just _ignore_ performance, nor new algorithms, so eventually using very old crypto cores makes the whole thing fail to fly. (If "moderate" performance will suffice, I note that NBMK Encryption will still sell you the old NetOctave NSP2000, which is a pretty nice design that has onboard key storage but lacks AES, larger SHA variants, and other modern features). 
To the extent of my knowledge there are currently _no_ generally available, general-purpose crypto accelerator chip-level products with onboard key storage or key wrapping support, with the exception of parts first sold more than 5 years ago and being shipped now from old stock. This was once a somewhat common feature on accelerators targeted at the SSL/IPsec market. That appears to no longer be the case. -- Thor Lancelot Simon t...@rek.tjls.com "Even experienced UNIX users occasionally enter rm *.* at the UNIX prompt only to realize too late that they have removed the wrong segment of the directory structure." - Microsoft WSS whitepaper
Re: full-disk subversion standards released
Thor Lancelot Simon writes:
>Almost no web servers run with passwords on their private key files. Believe
>me. I build server load balancers for a living and I see a _lot_ of customer
>web servers -- this is how it is.

Ah, that kinda makes sense; it would parallel the experience with client-side keys (SSH in this case, since client-side PKI is virtually nonexistent) where nearly 2/3 of all private keys were found to be stored in plaintext form on shared machines. This is why a security developer some years ago started referring to the private key as "the lesser-known public key" :-).

(Does anyone know of any studies that have been done to find out how prevalent this is for servers? I can see why you'd need to do it for software-only implementations in order to survive restarts, but what about hardware-assisted TLS? Is there anything like a study showing that for a random sampling of x web servers, y stored the keys unprotected? Are you counting things like Windows' DPAPI, which any IIS setup should use, as "protected" or "unprotected"?)

>I solicited information here about crypto accelerators with onboard
>persistent key memory ("secure key storage") about two years ago and got
>basically no responses except pointers to the same old, discontinued or
>obsolete products I was trying to replace.

I was hoping someone else would leap in about now and question this, but I guess I'll have to do it... maybe we have a different definition of what's required here, but AFAIK there's an awful lot of this kind of hardware floating around out there, admittedly it's all built around older crypto devices like Broadcom 582x's and Cavium's Nitrox (because there hasn't been any real need to come up with replacements) but I didn't think there'd be much problem with finding the necessary hardware, unless you've got some particular requirement that rules a lot of it out.

Peter.
Re: full-disk subversion standards released
On Sat, Mar 07, 2009 at 07:36:25AM +1300, Peter Gutmann wrote:
> In any case though, how big a deal is private-key theft from web servers?
> What examples of real-world attacks are there where an attacker stole a
> private key file from a web server, brute-forced the password for it, and then
> did... well, what with it? I don't mean what you could in theory do with it,
> I mean which currently-being-exploited attack vector is this helping with?

Almost no web servers run with passwords on their private key files. Believe me. I build server load balancers for a living and I see a _lot_ of customer web servers -- this is how it is.

> This does seem like rather a halfway point to be in though, if you're not
> worried about private-key theft from the server then do it in software, and if
> you are then do the whole thing in hardware (there's quite a bit of this
> around for SSL offload)

No, no there's not. In fact, I solicited information here about crypto accelerators with onboard persistent key memory ("secure key storage") about two years ago and got basically no responses except pointers to the same old, discontinued or obsolete products I was trying to replace. -- Thor Lancelot Simon t...@rek.tjls.com "Even experienced UNIX users occasionally enter rm *.* at the UNIX prompt only to realize too late that they have removed the wrong segment of the directory structure." - Microsoft WSS whitepaper
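Thor's claim is easy to spot-check on any given server, since a passphrase-protected PEM key announces itself in its headers. A small stdlib-only sketch (a heuristic of my own, not a tool mentioned in this thread):

```python
def pem_key_is_protected(pem_text: str) -> bool:
    """Heuristic check for whether a PEM private key is passphrase-
    protected. Covers the two common encodings: PKCS#8 encrypted keys
    ("BEGIN ENCRYPTED PRIVATE KEY") and legacy OpenSSL keys carrying a
    "Proc-Type: 4,ENCRYPTED" header inside the BEGIN/END block."""
    if "BEGIN ENCRYPTED PRIVATE KEY" in pem_text:
        return True
    if "PRIVATE KEY" in pem_text and "Proc-Type: 4,ENCRYPTED" in pem_text:
        return True
    return False
```

Running something like this over the key files an SSL termination box is configured with would give exactly the kind of sample Gutmann asks about above (though it says nothing about keys protected by an external mechanism such as DPAPI or a hardware keystore).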
Re: full-disk subversion standards released
Thor Lancelot Simon writes: >On Sat, Mar 07, 2009 at 05:40:31AM +1300, Peter Gutmann wrote: >> Given that, when I looked a couple of years ago, TPM support for >> public/private-key stuff was rather hit-and-miss and in some cases seemed to >> be entirely absent (so you could use the TPM to wrap and unwrap stored >> private >> keys > >But this, itself, is valuable. Given trivial support in the operating system >kernel, it eliminates one of the most common key-theft attack vectors against >webservers. Kent would be the one to answer this definitively, but the docs on the web page talk about using OpenSSL to change the password on the stored keys, without (apparently) needing the TPM, which seems a bit odd. In any case though, how big a deal is private-key theft from web servers? What examples of real-world attacks are there where an attacker stole a private key file from a web server, brute-forced the password for it, and then did... well, what with it? I don't mean what you could in theory do with it, I mean which currently-being-exploited attack vector is this helping with? This does seem like rather a halfway point to be in though, if you're not worried about private-key theft from the server then do it in software, and if you are then do the whole thing in hardware (there's quite a bit of this around for SSL offload) rather than just one small corner of it. If your target market is "people who are worried about theft of private key files (but not in-memory keys) from web servers and who don't want to use hardware to protect them and who are running a server that actually has a TPM installed" then I suspect you've limited your applicability somewhat... Peter.
Re: full-disk subversion standards released
On Sat, Mar 07, 2009 at 05:40:31AM +1300, Peter Gutmann wrote: > > Given that, when I looked a couple of years ago, TPM support for > public/private-key stuff was rather hit-and-miss and in some cases seemed to > be entirely absent (so you could use the TPM to wrap and unwrap stored private > keys But this, itself, is valuable. Given trivial support in the operating system kernel, it eliminates one of the most common key-theft attack vectors against webservers. I must admit I'm curious whether the TPM vendors are licensing the relevant IBM patent on what amounts to any wrapping of cryptographic keys using encryption - I can only assume they are. Thor
Re: full-disk subversion standards released
Hi Peter, >>Apart from the obvious fact that if the TPM is good for DRM then it is also >>good for protecting servers and the data on them, > > In which way, and for what sorts of "protection"? And I mean that as a > serious inquiry, not just a "Did you spill my pint?" question. At the moment > the sole significant use of TPMs is Bitlocker, which uses it as little more > than a PIN-protected USB memory key and even then functions just as well > without it. To take a really simple usage case, how would you: > > - Generate a public/private key pair and use it to sign email (PGP, S/MIME, > take your pick)? I had this working using openCryptoki, the TrouSerS TSS and Mozilla Thunderbird on openSUSE Linux. If the setup instructions aren't in the various readmes of those projects I can help you set it up if you'd like. > - As above, but send the public portion of the key to someone and use the > private portion to decrypt incoming email? A simple PKCS#11 app to extract the public key is all that's needed with the above tools. > (for extra points, prove that it's workable by implementing it using an actual > TPM to send and receive email with it, which given the hit-and-miss Done. :-) Last time I tested this it worked fine... Circa 2006... Kent > functionality and implementation quality of TPMs is more or less a required > second step). I've implemented PGP email using a Fortezza card (which is > surely the very last thing it was ever intended for), but not using a TPM... > >>Mark Ryan presented a plausible use case that is not DRM: >>http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/. > > This use is like the joke about the dancing bear, the amazing thing isn't the > quality of the "dancing" but the fact that the bear can "dance" at all :-). > It's an impressive piece of lateral thinking, but I can't see people rushing > out to buy TPM-enabled PCs for this. > > Peter.
Re: full-disk subversion standards released
Alexander Klimov wrote: > On Wed, 11 Feb 2009, Ben Laurie wrote: >> If I have data on my server that I would like to stay on my server >> and not get leaked to some third party, then this is exactly the >> same situation as DRMed content on an end user's machine, is it not? > > The threat model is completely different: for DRM the attacker is the > user who supposedly has complete access to the computer, while for a server > the attacker is someone who has only a (limited) network connection to > your server. You wish. The threat is an attacker who has root on your machine. -- http://www.apache-ssl.org/ben.html http://www.links.org/ "There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
Re: full-disk subversion standards released
Ben Laurie wrote: If I have data on my server that I would like to stay on my server and not get leaked to some third party, then this is exactly the same situation as DRMed content on an end user's machine, is it not? No. You want to keep control of the information on your server. DRM wants to deny the end user control of the information on the end user's machine.
Re: full-disk subversion standards released
On Wed, 11 Feb 2009, Ben Laurie wrote: > If I have data on my server that I would like to stay on my server > and not get leaked to some third party, then this is exactly the > same situation as DRMed content on an end user's machine, is it not? The threat model is completely different: for DRM the attacker is the user who supposedly has complete access to the computer, while for a server the attacker is someone who has only a (limited) network connection to your server. -- Regards, ASK
Re: full-disk subversion standards released
Peter Gutmann wrote: > Ben Laurie writes: > >> Apart from the obvious fact that if the TPM is good for DRM then it is also >> good for protecting servers and the data on them, > > In which way, and for what sorts of "protection"? And I mean that as a > serious inquiry, not just a "Did you spill my pint?" question. If I have data on my server that I would like to stay on my server and not get leaked to some third party, then this is exactly the same situation as DRMed content on an end user's machine, is it not? > At the moment > the sole significant use of TPMs is Bitlocker, which uses it as little more > than a PIN-protected USB memory key and even then functions just as well > without it. To take a really simple usage case, how would you: > > - Generate a public/private key pair and use it to sign email (PGP, S/MIME, > take your pick)? > - As above, but send the public portion of the key to someone and use the > private portion to decrypt incoming email? > > (for extra points, prove that it's workable by implementing it using an actual > TPM to send and receive email with it, which given the hit-and-miss > functionality and implementation quality of TPMs is more or less a required > second step). I've implemented PGP email using a Fortezza card (which is > surely the very last thing it was ever intended for), but not using a TPM... Note that I am not claiming expertise in the use of TPMs. I am making the claim that _if_ they are good for DRM, _then_ they are also good for protecting data on servers. >> Mark Ryan presented a plausible use case that is not DRM: >> http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/. > > This use is like the joke about the dancing bear, the amazing thing isn't the > quality of the "dancing" but the fact that the bear can "dance" at all :-). > It's an impressive piece of lateral thinking, but I can't see people rushing > out to buy TPM-enabled PCs for this. I agree that it is more cute than practical. 
-- http://www.apache-ssl.org/ben.html http://www.links.org/ "There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff
Re: full-disk subversion standards released
On Feb 2, 2009, at 2:29 AM, Peter Gutmann wrote:
>> Mark Ryan presented a plausible use case that is not DRM:
>> http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/.
>
> This use is like the joke about the dancing bear, the amazing thing isn't
> the quality of the "dancing" but the fact that the bear can "dance" at all
> :-). It's an impressive piece of lateral thinking

I don't see that. The problem being solved is exactly a DRM problem: A gives B some data but wants to retain control over the circumstances in which B can use that data. The algorithm proposed implements three fundamental controls:

(a) B can only access the data through a particular program that A trusts;
(b) B can "return" the data, along with a proof that he never actually accessed it;
(c) A can then revoke B's access to the data (although the algorithm bundles this with (b)).

(a) and (c) are exactly the kind of thing DRM implementations do all the time - and exactly the kind of thing that's been widely discussed for TPM. (b) is novel.

DRM has to do with retaining access to data that has been provided to an untrusted party. The entertainment industry considers its customers untrusted, so TPM in its primary use cases is about controlling what those customers - i.e., all consumers of computers! - can do. In Ryan's use case, the untrusted parties are the government security services. One can construct other untrusted parties as well. In a cloud-computing world, wouldn't it be nice to know that your data, although it's "out there", being operated on by all kinds of programs "out there", is still under your control?

The problem isn't with "DRM" in the large sense - it's that once you enable "DRM" in the large sense, "DRM" in the small sense (as the entertainment industry already sees it, and as many others will once the capability is there) seems to be unavoidable. It's a matter of tradeoffs.
(Notice that the same people who say this tradeoff isn't worth it will also say that the tradeoffs of broadly available crypto aren't worth it - yes, it protects privacy, but that includes the privacy of criminals. I don't think there's any broad principle that is being applied here - it's a case-by-case analysis of the good and bad effects of particular technologies. The DRM debate in particular is inherently tainted by the actions and attitudes of the entertainment industry.) -- Jerry
Re: full-disk subversion standards released
- Original Message - From: "Jonathan Thornburg" To: "Brian Gladman" Cc: "John Gilmore" ; "Peter Gutmann" ; ; Sent: Monday, February 02, 2009 3:53 AM Subject: Re: full-disk subversion standards released

[snip]
> It's this variety of different software encryption schemes -- and compilers
> to turn them into binary code (which is what the NSA/Intel backdoor
> ultimately has to key on) that, I think, makes it so much harder for a
> hardware backdoor to work (i.e. to subvert software encryption) in this
> context.

I well understand the difficulties of mounting attacks, but the fact remains that if someone else is able to take over _control_ of your machine you won't obtain any security, irrespective of whether your interest is in network or storage encryption. And _if_ Intel were to be interested in being able to take over your machine whenever it wished to do so -- which I don't believe it is -- subverting its processor designs to make this possible will be many, many orders of magnitude more effective than subverting the design of a TPM that 99.999...% of machines won't have. I am personally happy to trust Intel and I am also happy to trust the design of the TPM I happen to use. And it is completely useless for DRM, provided only that Intel and the TPM supplier have not been subverted. I simply don't believe that TPMs will ever achieve (or could ever have achieved) the widespread adoption that effective DRM demands, and I don't personally believe that such applications ever played much part in the design. But _provided_ the hardware supplier can be trusted, hardware-based security is able to achieve a much higher level of assurance than pure software ever can. TPMs are hence useful in custom security applications, and I am personally much more confident in my security using my TPM-based solution than I would be if I were relying on a pure software approach.
I am _not_ advocating TPM technology since I doubt its general utility for widespread adoption but I reject the idea that TPMs are part of an evil plot to infect the world with DRM. Brian Gladman
Re: full-disk subversion standards released
Ben Laurie writes: >Apart from the obvious fact that if the TPM is good for DRM then it is also >good for protecting servers and the data on them, In which way, and for what sorts of "protection"? And I mean that as a serious inquiry, not just a "Did you spill my pint?" question. At the moment the sole significant use of TPMs is Bitlocker, which uses it as little more than a PIN-protected USB memory key and even then functions just as well without it. To take a really simple usage case, how would you: - Generate a public/private key pair and use it to sign email (PGP, S/MIME, take your pick)? - As above, but send the public portion of the key to someone and use the private portion to decrypt incoming email? (for extra points, prove that it's workable by implementing it using an actual TPM to send and receive email with it, which given the hit-and-miss functionality and implementation quality of TPMs is more or less a required second step). I've implemented PGP email using a Fortezza card (which is surely the very last thing it was ever intended for), but not using a TPM... >Mark Ryan presented a plausible use case that is not DRM: >http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/. This use is like the joke about the dancing bear, the amazing thing isn't the quality of the "dancing" but the fact that the bear can "dance" at all :-). It's an impressive piece of lateral thinking, but I can't see people rushing out to buy TPM-enabled PCs for this. Peter.
Re: full-disk subversion standards released
I wrote:
| Indeed, the classic question is "I've just bought this new computer
| which claims to have full-disk encryption. Is there any practical
| way I can assure myself that there are (likely) no backdoors in/around
| the encryption?"
|
| For open-source software encryption (be it swap-space, file-system,
| and/or full-disk), the answer is "yes": I can assess the developers'
| reputations, I can read the source code, and/or I can take note of
| what other people say who've read the source code.

On Fri, 30 Jan 2009, Brian Gladman asked:
> But, unless you are doing it with a pencil and paper, your encryption is still
> being done in hardware even if you write it yourself.
>
> For example, why would you trust an Intel processor given that Intel is one of
> the founding members of the TCG and is a major player in its activities?

It's instructive to consider the distinction between "data in motion" encryption (for example, a network-encryption box (NEB)) and "data at rest" encryption (for example, a cryptographic filesystem).

A network-encryption box:

  computer#1 <> NEB#1 <> ((network)) <> NEB#2 <> computer#2
   plaintext           ciphertext              plaintext

As described by Henry Spencer in http://www.sandelman.ottawa.on.ca/linux-ipsec/html/1999/09/msg00240.html it's perfectly practical for (say) the NSA to arrange for a backdoor in each NEB which occasionally leaks the keystream into the network, in a way that's very unlikely to be caught in testing, but would make it easy for an eavesdropper on the network to recover the plaintext.

A cryptographic filesystem: I could imagine the NSA having arranged to plant some sort of microcode backdoor in the Pentium III processor in my laptop. (The hardest part would probably be persuading all the Intel employees involved that it wouldn't be a PR disaster for Intel if the news leaked out.)
In the context of my original message, the backdoor would have to recognize the binary code sequence of the OpenBSD AES routines when invoked by the encrypting-filesystem vnode layer, and somehow compromise the security (maybe arrange to leak keystream bits into free disk sectors??). That's a tricky technical job, but I could imagine it being done, and if it's all in processor microcode, I could even imagine it having stayed a secret. But that's not good enough: What about Matt Blaze's Cryptographic File System? What about all the people using the various Linux encrypting file systems? The backdoor(s) need to cover them, too. And the MacOS ones (if there's not a software backdoor there). And all the other open-source-crypto systems. And the backdoors have to do this without compromising interoperability -- I have CFS directory trees which I created on an old Sparc that I now use on my laptop. But I think the hardest part of all is that the backdoor still has to recognize the various crypto binary-code-sequences even when the relevant software is recompiled with a newer compiler using a different global optimizer, even though that newer compiler might not even have existed when the backdoor was inserted. It's this variety of different software encryption schemes -- and compilers to turn them into binary code (which is what the NSA/Intel backdoor ultimately has to key on) -- that, I think, makes it so much harder for a hardware backdoor to work (i.e. to subvert software encryption) in this context. -- "Jonathan Thornburg [remove -animal to reply]" Dept of Astronomy, Indiana University, Bloomington, Indiana, USA "Washing one's hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral." -- quote by Freire / poster by Oxfam
Re: full-disk subversion standards released
Peter Gutmann wrote: John Gilmore writes: The theory that we should build "good and useful" tools capable of monopoly and totalitarianism, but use social mechanisms to prevent them from being used for that purpose, strikes me as naive. There's another problem with this theory and that's the practical implementation issue. I've read through... well, at least skimmed through the elephantine bulk of the TCG specs, and also read related papers and publications and talked to people who've worked with the technology, to see how I could use it as a crypto plugin for my software (which already supports some pretty diverse stuff, smart cards, HSMs, the VIA Padlock engine, ARM security cores, Fortezza cards (I even have my own USG-allocated Fortezza ID :-), and in general pretty much anything out there that does crypto in any way, shape, or form). However after detailed study of the TCG specs and discussions with users I found that the only thing you can really do with this, or at least the bits likely to be implemented and supported and not full of bugs and incompatibilities, is DRM. Apart from the obvious fact that if the TPM is good for DRM then it is also good for protecting servers and the data on them, Mark Ryan presented a plausible use case that is not DRM: http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/. I wrote it up briefly here: http://www.links.org/?p=530. As for John's original point, isn't the world full of such tools (guns, TV cameras, telephone networks, jet engines, blah blah)?
Re: full-disk subversion standards released
On Sat, 31 Jan 2009, Peter Gutmann wrote:
> Even with the best intentions in the world, the only thing you can really usefully do with a TPM is DRM.

If there were a direct link from the TPM to the display and speakers, and all the content rendering were done by the TPM itself, then the TPM would be useful for DRM. An attempt to render content "securely" on the CPU rests on the theory that the content owner can trust a general-purpose OS after a "secure boot". Experience shows that this theory is wishful thinking.

Apparently, the only existing application of the TPM is BitLocker: it lets the OS boot from an encrypted disk without entering any password. A careful analysis shows that the TPM is a separate chip that can be powered down without resetting the CPU, so one can load a "non-trusted OS", reset the TPM, start a "secure boot", and get the encryption keys. Even when (if) the TPM becomes the same chip as the CPU, I suspect a man-in-the-middle attack on CPU-to-RAM communication will allow taking over the "trusted OS".

On the other hand, once we forget about all the attestation and secure-boot applications, the TPM is still a smartcard soldered to your computer, so it can probably support all the smartcard use cases (except, of course, the use cases that require storing the smartcard separately from the computer :-).

-- Regards, ASK

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
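[The reset attack above works because a PCR is only a running hash of whatever measurements have been extended into it: anyone who can zero the PCR -- here, by power-cycling the TPM while the CPU keeps running -- and then replay the recorded measurement values reproduces exactly the PCR state the disk key was sealed to. A minimal sketch of that extend-and-replay logic in Python; the single-PCR model and the measurement values are simplified assumptions for illustration, not BitLocker's actual layout:]

```python
import hashlib

def extend(pcr, measurement):
    # TPM 1.2-style PCR extend: new_pcr = SHA1(old_pcr || SHA1(measurement))
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

ZERO = bytes(20)  # PCR contents after a TPM reset

# Legitimate boot: each stage is measured into the PCR in order.
# (Hypothetical stage names, for illustration only.)
boot_chain = [b"trusted-bootloader", b"trusted-kernel"]
pcr = ZERO
for stage in boot_chain:
    pcr = extend(pcr, stage)
sealed_pcr = pcr  # the disk key is sealed to this PCR value

# Reset attack: a rogue OS is already running, but the TPM alone is
# power-cycled, clearing the PCR; the rogue OS then replays the
# recorded measurements of the trusted chain.
pcr = ZERO
for stage in boot_chain:
    pcr = extend(pcr, stage)

# The PCR now matches the sealed value, so the TPM will unseal the key
# for the rogue OS just as it would for the trusted one.
assert pcr == sealed_pcr
```

The point is that the TPM cannot tell whether a measurement reported to it describes the code actually running; if the reset and the boot are decoupled, replayed measurements are indistinguishable from a genuine trusted boot.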
Re: full-disk subversion standards released
Peter Gutmann wrote:
> John Gilmore writes:
>> The theory that we should build "good and useful" tools capable of monopoly and totalitarianism, but use social mechanisms to prevent them from being used for that purpose, strikes me as naive.
>
> There's another problem with this theory and that's the practical implementation issue. I've read through... well, at least skimmed through the elephantine bulk of the TCG specs, and also read related papers and publications and talked to people who've worked with the technology, to see how I could use it as a crypto plugin for my software (which already supports some pretty diverse stuff, smart cards, HSMs, the VIA Padlock engine, ARM security cores, Fortezza cards (I even have my own USG-allocated Fortezza ID :-), and in general pretty much anything out there that does crypto in any way, shape, or form). However after detailed study of the TCG specs and discussions with users I found that the only thing you can really do with this, or at least the bits likely to be implemented and supported and not full of bugs and incompatibilities, is DRM.

You could note a certain overlap between the promoters of Digital Content Protection and the Trusted Computing Group: http://www.digital-cp.com/about_dcp

Nearly 400 leading companies license the technology, including the following:

  Semiconductor      PC Companies   Consumer Electronics
  AMD                HP             Panasonic
  Analog Devices     Microsoft      Samsung
  Intel              Lenovo         Sony
  Silicon Image                     Toshiba

(The full list of licensees on the site also includes e.g. Fuji Xerox Co., Ltd.)

https://www.trustedcomputinggroup.org/about/members/

Current members (promoters and contributors) include AMD, Fujitsu Limited, Panasonic, Hewlett-Packard, IBM, Samsung Electronics Co., Infineon, Intel Corporation, Sony Corporation, Lenovo Holdings Limited, Microsoft, Toshiba Corporation, Seagate Technology, Sun Microsystems, Inc., and Wave Systems.

The costs and economy of scale say that at some point all disk drives will be capable of FDE, whether or not it is enabled (whether or not you pay for the 'extra' feature). The distinction is the added cost of testing the encryption versus the cost of two different testing regimes, when silicon is typically pin-bound, defining area and cost. The same integration-cost advantages make the likes of HDMI close to zero cost to the television media consumer.

Enterprise 'platform owners' have the capability of assuming control of the attestation chain, while 'personal computing' might have few opportunities other than to allow the likes of an operating system vendor to provide control 'in loco parentis' for the naive consumer. Loss of control of personal computing would come about by seduction - the offer of benefits in exchange for more of the camel edging under the tent skirt. More's the pity if it offers competitive advantage excluding open source.

You'd think video content providers would be anxious for a way to provide secure delivery of content via download. Being able to stick video onto a disk protected by a plus-thirteen Mage DMCA spell would be a definite benefit. I'd also imagine we'll see vulnerabilities that will allow content recovery.

Getting 'secure' computing requires a secure operating system. Building a computer secure against end-user tampering would incur high adoption costs that wouldn't be supportable in the marketplace. To borrow and mutilate a turn of phrase from Bruce, what we get is Kabuki security theater with the commensurate tendency toward prostitution. All that said and done, people may still well end up with better security - data encrypted at rest. I'd think fighting DRM would be a separate battle from opposing FDE.

It may be worthwhile to show systemic vulnerabilities that, despite the encryption, threaten 'content protection', because while DRM's proponents like to provide a stylized threat model, the real world doesn't match up. The enterprise is able to leverage further behavioral limits on users' actions during platform operation, and the Trusted Computing threat model allows users within the cryptographic boundary (undoubtedly due to the cost of exclusion). Additional behavioral limits aren't available for the DRM usage model, and there is nothing stopping the malevolent end user from monitoring unencrypted data from a drive, for example. Trusted Computing may never be suitable for DRM either. I'd expect an enterprise would field a carefully selected configuration that they could manage to make work for their purposes. DRM has to work for
Re: full-disk subversion standards released
John Gilmore writes:
> The theory that we should build "good and useful" tools capable of monopoly and totalitarianism, but use social mechanisms to prevent them from being used for that purpose, strikes me as naive.

There's another problem with this theory and that's the practical implementation issue. I've read through... well, at least skimmed through the elephantine bulk of the TCG specs, and also read related papers and publications and talked to people who've worked with the technology, to see how I could use it as a crypto plugin for my software (which already supports some pretty diverse stuff, smart cards, HSMs, the VIA Padlock engine, ARM security cores, Fortezza cards (I even have my own USG-allocated Fortezza ID :-), and in general pretty much anything out there that does crypto in any way, shape, or form). However after detailed study of the TCG specs and discussions with users I found that the only thing you can really do with this, or at least the bits likely to be implemented and supported and not full of bugs and incompatibilities, is DRM.

In all the time I've worked with crypto devices I've never seen something so totally unsuited to general-purpose crypto use as a TPM. There really is only one thing it can reliably be used for and that's DRM. Now admittedly if you look really hard you may find a particular vendor who has a hit-and-miss attempt at implementing some bits of the spec that, if you cross your eyes and squint, is almost usable for general-purpose crypto use, but that's it. Even with the best intentions in the world, the only thing you can really usefully do with a TPM is DRM.

(NB: This was a few years ago, maybe things have improved since then but I haven't seen any real indication of this. Oh, and I'm not going to get into the rathole of whether the whole "attestation" thing is DRM or not, if you think it isn't then please replace all occurrences of "DRM" in the above text with "attestation").

Peter.
- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
On Fri, Jan 30, 2009 at 04:08:07PM -0800, John Gilmore wrote:
> The theory that we should build "good and useful" tools capable of monopoly and totalitarianism, but use social mechanisms to prevent them from being used for that purpose, strikes me as naive.

Okay. In that case, please explain to me why you are not opposed to the manufacture and sale of digital computers. More gently: it seems to me that there is an "only" missing from your sentence above, or else it is almost by necessity a straw-man argument: it will, if consistently applied as you have stated it, hold against various tools I do not believe you actually oppose the manufacture or sale of, such as printing presses, guns, and door locks.

Many of TCG's documents purport to specify mechanisms that are in fact generally useful for beneficial purposes, such as boot-time validation of software environments, secure storage of cryptographic keys, or low-bandwidth generation of good random numbers. Do you actually mean that such things should not be built, or only that you are suspicious of TCG's intent in building them?

In text I've snipped, you claimed to describe TCG's charter. I must admit that I don't know if they even actually have such a document. But, on the other hand, they describe their own purpose like this (these are their actual words): "The Trusted Computing Group (TCG) is a not-for-profit organization formed to develop, define, and promote open standards for hardware-enabled trusted computing and security technologies, including hardware building blocks and software interfaces, across multiple platforms, peripherals, and devices. TCG specifications will enable more secure computing environments without compromising functional integrity, privacy, or individual rights. The primary goal is to help users protect their information assets (data, passwords, keys, etc.) from compromise due to external software attack and physical theft."
I happen to think that if those _stated_ goals were achieved, that would be a good thing, and that there are in fact hardware and software mechanisms that could help achieve them -- some of which TCG has made stabs at specifying, though they've generally missed the mark. Leaving aside your assertions about TCG's _actual_ goals -- which may be correct -- are you really of the position that what's described above, no matter who were to build it nor how well, would be only useful for "monopoly and totalitarianism"? Thor - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
On Fri, Jan 30, 2009 at 03:37:22PM -0800, Taral wrote:
> On Fri, Jan 30, 2009 at 1:41 PM, Jonathan Thornburg wrote:
>> For open-source software encryption (be it swap-space, file-system, and/or full-disk), the answer is "yes": I can assess the developers' reputations, I can read the source code, and/or I can take note of what other people say who've read the source code.
>
> Really? What about hardware backdoors? I'm thinking something like the old /bin/login backdoor that had compiler support, but in hardware.

Plus: that's a lot of code to read! A single person can't hope to understand the tens of millions of lines of code that make up the software (and firmware, and hardware!) that they use every day on a single system. Note: that's not to say that open source doesn't have advantages over proprietary source.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
> Given such solutions, frameworks like what TCG is chartered to build are in fact good and useful. I don't think it's right to blame the tool (or the implementation details of a particular instance of a particular kind of tool) for the idiot carpenter.

Given the charter of TCG, to produce DRM standards, it's pretty clear what activity their tool is designed to be used for. The theory that we should build "good and useful" tools capable of monopoly and totalitarianism, but use social mechanisms to prevent them from being used for that purpose, strikes me as naive. Had you not noticed obvious indications like the corruption of the Executive Branch by NSA, RIAA and MPAA (including the shiny new president), the concurrence of the Legislative Branch in that corruption, and the toothlessness of the States and the Judicial Branch in failing to actually rein in major federal constitutional violations?

Yes, I'm analogizing DRM to wiretaps and jiggered voting machines. But isn't DRM like a wiretap deep inside your computer -- a foreign agent that spies on you and reports back whatever it chooses, against your will? Worse, it's like a man-in-the-middle attack, buried inside your computer. If Hollywood succeeded in injecting DRM into all our infrastructure, who among us would seriously believe the government would not muscle its way in and start also using the DRM capabilities against the citizens? The Four Horsemen of the Infopocalypse are alive and well. Are you one of those guys in *favor* of sex offenders being allowed free access to children on the Internet, buddy? It's so simple, everyone will just prove they aren't a sex offender before being granted access. It's just like getting on a plane.

(TCG has excised all mention of DRM from recent publications -- but I have the original ones, which had DRM examples explaining the motivation for why they were doing this work. I'll append one such example, for those who can't readily search the archives back to 2003.
Skip down to "TCPA" in the body below.)

John

Message-Id: <200312162153.hbglrods029...@new.toad.com>
To: Jerrold Leichter
cc: cryptography@metzdowd.com, gnu
Subject: Re: Difference between TCPA-Hardware and other forms of trust
In-reply-to:
Date: Tue, 16 Dec 2003 13:53:24 -0800
From: John Gilmore

> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
>
> Why? If I'm going to use a time-shared machine, I have to trust that the OS will keep me protected from other users of the machine. All the other users have the same demands. The owner of the machine has similar demands.

I used to run a commercial time-sharing mainframe in the 1970's. Jerrold's wrong. The owner of the machine has desires (what he calls "demands") different than those of the users. The users, for example, want to be charged fairly; the owner may not. We charged every user for their CPU time, but only for the fraction that they actually used. In a given second, we might charge eight users for different parts of that fraction.

Suppose we charged those eight users amounts that added up to 1.3 seconds? How would they know? We'd increase our prices by 30%, in effect, by charging for 1.3 seconds of CPU for every one second that was really expended. Each user would just assume that they'd gotten a larger fraction of the CPU than they expected. If we were tricky enough, we'd do this in a way that never charged a single user for more than one second per second. Two users would then have to collude to notice that they together had been charged for more than a second per second.

(Our CPU pricing was actually hard to manage as we shifted the load among different mainframes that ran different applications at different multiples of the speed of the previous mainframe. E.g.
our Amdahl 470/V6 price for a CPU second might be 1.78x the price on an IBM 370/158. A user's bill might go up or down from running the same calculation on the same data, based on whether their instruction sequences ran more efficiently or less efficiently than average on the new CPU. And of course if our changed "average" price was slightly different than the actual CPU performance, this provided a way to cheat on our prices. Our CPU accounting also changed when we improved the OS's timer management, so it could record finer fractions of seconds. On average, this made the system fairer. But your application might suffer, if its pattern of context switches had been undercharged by the old algorithm.)

The users had to trust us to keep our accounting and pricing fair. System security mechanisms that kept one user's files from access by another could not do this. It required actual trust, since the users didn't have access to the data required to check up on us (our entire billing logs, and our accounting software).
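[Gilmore's 30% overcharge goes unnoticed precisely because the inflation is capped per user: no single customer is ever billed more than one second per second, so only colluding users comparing bills could detect it. A toy sketch in Python; the user names and CPU fractions are made up for illustration:]

```python
# Hypothetical per-second CPU accounting, following the example above:
# each user's true fraction of a wall-clock second is billed at 1.3x,
# capped so no single user is charged more than 1.0 s per real second.
real_fractions = {"alice": 0.40, "bob": 0.35, "carol": 0.25}  # sums to 1.0 s

billed = {user: min(frac * 1.3, 1.0) for user, frac in real_fractions.items()}

# Each user individually sees a plausible charge (at most 1 s/second)...
assert all(charge <= 1.0 for charge in billed.values())

# ...but the total billed for one second of real CPU time is 1.3 s,
# visible only to someone who can sum across all users' bills.
total = sum(billed.values())
assert abs(total - 1.3) < 1e-9
```

This is the structural point of the anecdote: no per-user view of the system exposes the discrepancy; only the operator's complete billing logs do, which is why the users needed actual trust rather than a security mechanism.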
Re: full-disk subversion standards released
On Fri, Jan 30, 2009 at 1:41 PM, Jonathan Thornburg wrote:
> For open-source software encryption (be it swap-space, file-system, and/or full-disk), the answer is "yes": I can assess the developers' reputations, I can read the source code, and/or I can take note of what other people say who've read the source code.

Really? What about hardware backdoors? I'm thinking something like the old /bin/login backdoor that had compiler support, but in hardware.

-- Taral "Please let me know if there's any further trouble I can give you." -- Unknown

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
On Thu, 29 Jan 2009, John Gilmore wrote:
> If it comes from the "Trusted Computing Group", you can pretty much assume that it will make your computer *less* trustworthy. Their idea of a trusted computer is one that random unrelated third parties can trust to subvert the will of the computer's owner.

Indeed, the classic question is "I've just bought this new computer which claims to have full-disk encryption. Is there any practical way I can assure myself that there are (likely) no backdoors in/around the encryption?"

For open-source software encryption (be it swap-space, file-system, and/or full-disk), the answer is "yes": I can assess the developers' reputations, I can read the source code, and/or I can take note of what other people say who've read the source code. Alas, I can think of no practical way to get a "yes" answer to my question if the encryption is done in hardware, disk-drive firmware, or indeed anywhere except "software that I fully control".

-- Jonathan Thornburg Dept of Astronomy, Indiana University, Bloomington, Indiana, USA "Washing one's hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral." -- quote by Freire / poster by Oxfam

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
On Thu, Jan 29, 2009 at 01:22:37PM -0800, John Gilmore wrote:
> If it comes from the "Trusted Computing Group", you can pretty much assume that it will make your computer *less* trustworthy. Their idea of a trusted computer is one that random unrelated third parties can trust to subvert the will of the computer's owner.

People have funny notions of "ownership", don't they? It's very clear to me that I don't own my desktop machine at my office; my employer does. But even if TCG were to punch out a useful, reasonable standard (which I do not think they have done in any case so far), there remains the policy problem of how to ensure that my office desktop's actual owner can enforce its ownership of that machine against me, while the retailer who sold me my desktop machine at home -- which I do own -- or for that matter the U.S. Government, can't enforce _its_ "ownership" of my own machine against me. That's a real problem, and solutions to it are useful.

Given such solutions, frameworks like what TCG is chartered to build are in fact good and useful. I don't think it's right to blame the tool (or the implementation details of a particular instance of a particular kind of tool) for the idiot carpenter.

Thor

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: full-disk subversion standards released
If it comes from the "Trusted Computing Group", you can pretty much assume that it will make your computer *less* trustworthy. Their idea of a trusted computer is one that random unrelated third parties can trust to subvert the will of the computer's owner. John - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com