Re: WYTM - "but what if it was true?"
Victor Duchovni writes: | ... | The personal ATM appliance should be difficult to tamper with and should | accept only a single set of accounts (so that stolen PINs are not | portable)... My personal guess is that the general-purpose computer is ultimately a goner -- it will later, if not sooner, be legally treated much like having a swimming pool if not a .50 caliber. ("Honest people don't need a compiler or admin privilege" is already true in the corporate arena...) Either that, or network-admittance will become the focus of authority and liability (also already true in corporate arenas). Probably a rat hole, --dan - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
Pat Farrell wrote: "the only secure computer is turned off, unplugged, inside a SCIF and surrounded by US Marines." ... provided you can trust the Marines. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
On 6/27/05, Victor Duchovni <[EMAIL PROTECTED]> wrote: > On Mon, Jun 27, 2005 at 09:58:31AM -0600, Chris Kuethe wrote: > > > And now we have a market for cracked "trusted" banking clients, both > > for phishers and lazy people... it's game copy protection wars all > > over again. :) > > > > Well cracking the bank application is not really in the user's interests > in this case. Never underestimate people's shortsightedness and laziness as motivation to defeat a security system. Sort of how laziness is a virtue of Perl programmers. > My view is, that when the banking application delivery > platform becomes cheap enough (say $50 or less), it will make sense for > the bank to provide a complete ATM system (sans cash) to each user. Well, software distribution can be outsourced to AOL. :) I hate it when people say stuff like this, but: "I'm no hardware engineer, but it shouldn't be that hard to build something like a self-contained POS PIN-pad about the size of a calculator..." And as I was snickering while I wrote that, I was trying to enumerate all the hard parts - things like a tamper-resistant case, software that wasn't going to be leaking key bits, etc. > The personal ATM appliance should be difficult to tamper with and should > accept only a single set of accounts (so that stolen PINs are not > portable)... The latter will be easy to achieve if you can make inexpensive, robust, reliable, tamper-resistant, fail-safe, user-friendly hardware. In short, it's two-factor authentication: knowing your PIN, and having your personal ATM appliance. -- GDB has a 'break' feature; why doesn't it have 'fix' too? - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
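A minimal sketch, in Python, of the two-factor idea just described (something you have: the appliance's device key; something you know: the PIN). The key handling and the challenge-response scheme here are purely illustrative assumptions, not any bank's actual protocol:

    import hashlib, hmac, os

    # Hypothetical provisioning: a per-account device key that never leaves
    # the appliance's tamper-resistant hardware.
    device_key = os.urandom(32)

    def respond(challenge: bytes, pin: str) -> bytes:
        # Both factors are folded into the answer; a stolen PIN alone cannot
        # reproduce it, and neither can a stolen appliance without the PIN.
        factor = hmac.new(device_key, pin.encode(), hashlib.sha256).digest()
        return hmac.new(factor, challenge, hashlib.sha256).digest()

    # The bank issues a fresh challenge per transaction and checks the answer
    # against its own record of the device key and PIN.
    challenge = os.urandom(16)
    print(respond(challenge, "4921").hex())

Because only the one registered account's key lives in the device, the "single set of accounts" property mentioned above falls out for free.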
Re: WYTM - "but what if it was true?"
On Mon, Jun 27, 2005 at 09:58:31AM -0600, Chris Kuethe wrote: > And now we have a market for cracked "trusted" banking clients, both > for phishers and lazy people... it's game copy protection wars all > over again. :) > Well cracking the bank application is not really in the user's interests in this case. My view is, that when the banking application delivery platform becomes cheap enough (say $50 or less), it will make sense for the bank to provide a complete ATM system (sans cash) to each user. The personal ATM appliance should be difficult to tamper with and should accept only a single set of accounts (so that stolen PINs are not portable)... -- Victor Duchovni, IT Security, Morgan Stanley [ASCII ribbon campaign against HTML mail] NOTICE: If received in error, please destroy and notify sender. Sender does not waive confidentiality or privilege, and use is prohibited. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
On Mon, 2005-06-27 at 10:19 -0400, John Denker wrote: > Even more compelling is: > -- obtain laptop hardware from a trusted source > -- obtain software from a trusted source > -- throw the entire laptop into a GSA-approved safe when >not being used. This is just a minor variation of an approach I heard from Carl Ellison a decade or more ago: "the only secure computer is turned off, unplugged, inside a SCIF and surrounded by US Marines." [a SCIF is a Sensitive Compartmented Information Facility, used by the US Government folks] I think we tend to accept a bit more gray in the security versus usefulness grayscale. Pat -- Pat Farrell http://www.pfarrell.com/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
On 6/26/05, Dan Kaminsky <[EMAIL PROTECTED]> wrote: > It is not necessary though that there exists an acceptable solution that > keeps PC's with persistent stores secure. A bootable CD from a bank is > an unexpectedly compelling option, as are the sort of services we're > going to see coming out of all those new net-connected gaming systems > coming out soon. You just know that people won't want to totally reboot their machines every time they want to bank, because that'll break their Excel+Quicken+MSMoney integrated finances. So they try to make a bootable HD partition, or run it under VMware, or copy the "trusted" client off. These of course cannot be allowed by the banks if they want to preserve the illusion of their secure banking app... And now we have a market for cracked "trusted" banking clients, both for phishers and lazy people... it's game copy protection wars all over again. :) -- GDB has a 'break' feature; why doesn't it have 'fix' too? - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
On 06/27/05 00:28, Dan Kaminsky wrote: ... there exists an acceptable solution that keeps PC's with persistent stores secure. A bootable CD from a bank is an unexpectedly compelling option Even more compelling is: -- obtain laptop hardware from a trusted source -- obtain software from a trusted source -- throw the entire laptop into a GSA-approved safe when not being used. This is a widely-used procedure for dealing with classified data. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
>If you are insisting that there is always >a way and that, therefore, the situation is >permanently hopeless such that the smart >ones are getting the hell out of the >Internet, I can go with that, but then >we (you and I) would both be guilty of >letting the best be the enemy of the good. > > A reasonable critique. It is not necessary though that there exists an acceptable solution that keeps PC's with persistent stores secure. A bootable CD from a bank is an unexpectedly compelling option, as are the sort of services we're going to see coming out of all those new net-connected gaming systems coming out soon. --Dan - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
Dan Kaminsky writes: | Dan-- | | I had something much more complicated, but it comes down to. | | You trust Internet Explorer. | Spyware considers Internet Explorer crunchy, and good with ketchup. | Any questions? | | A little less snarkily, Spyware can trivially use what MS refers to | as a Browser Helper Object (BHO) to alter all traffic on any web page. | Inserting a 1x1 iframe in the corner of whatever, that does nothing but | transmit upstream data via HTTP image GETs, is trivial. And if HTTP is | a bit too protected -- there's *always* DNS ;). gethostbyname indeed. | | P.S. Imagine for a moment it was profitable to give people cancer. No, | not just a pesky side effect, but kind of the idea. Angiostatin | wouldn't stand a chance. | If you are insisting that there is always a way and that, therefore, the situation is permanently hopeless such that the smart ones are getting the hell out of the Internet, I can go with that, but then we (you and I) would both be guilty of letting the best be the enemy of the good. However, I/we routinely disable all use of BHOs, prevent mod of any entity as chosen by filename extension, checksum, or filesystem location, and whitelist applications, to name a _few_. For the genuinely paranoid, regular (like every few hours) reboot to a new VM is also enforceable and recommended, especially if you care about attacks that are purely in-memory and which do not leave behind any payload such as to aid an attacker on his/her proposed second visit. If you indeed are an "I don't need no stinkin' payload" sort of guy, like the folks who eschew carrying matches because you can always light a fire rubbing two sticks together, make me a suggestion; I love free consulting. --dan = "Internet Explorer is the most dangerous program ever written." -- Rik Farrow to Scott Charney during the audience grilling stage of http://www.usenix.org/events/usenix04/tech/sigs.html#mono_debate - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
Dan-- I had something much more complicated, but it comes down to. You trust Internet Explorer. Spyware considers Internet Explorer crunchy, and good with ketchup. Any questions? A little less snarkily, Spyware can trivially use what MS refers to as a Browser Helper Object (BHO) to alter all traffic on any web page. Inserting a 1x1 iframe in the corner of whatever, that does nothing but transmit upstream data via HTTP image GETs, is trivial. And if HTTP is a bit too protected -- there's *always* DNS ;). gethostbyname indeed. --Dan P.S. Imagine for a moment it was profitable to give people cancer. No, not just a pesky side effect, but kind of the idea. Angiostatin wouldn't stand a chance. [EMAIL PROTECTED] wrote: >What do you tell people to do? > > > >Defense in depth, as always. As an officer at >Verdasys, data-offload is something we block >by simply installing rules like "Only these >two trusted applications can initiate outbound >HTTP" where the word "trusted" means checksummed >and the choice of HTTP represents the most common >mechanism for spyware, say, to do the offload >of purloined information. Put differently, >if there 5,000 diseases but only two symptoms, >then symptomatic relief is the more cost-effective >approach rather than cure. In this case, why do >I care if I have spyware if it can't talk to its >distant master? (Why do I care if I have a tumor >if angiostatin keeps it forever smaller than 1mm >in diameter?) Of course, there are details, and, >of course, I am willing to discuss them at far >greater length. > > > > >--dan > > >- >The Cryptography Mailing List >Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED] > > - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
What do you tell people to do? Defense in depth, as always. As an officer at Verdasys, data-offload is something we block by simply installing rules like "Only these two trusted applications can initiate outbound HTTP" where the word "trusted" means checksummed and the choice of HTTP represents the most common mechanism for spyware, say, to do the offload of purloined information. Put differently, if there are 5,000 diseases but only two symptoms, then symptomatic relief is the more cost-effective approach rather than cure. In this case, why do I care if I have spyware if it can't talk to its distant master? (Why do I care if I have a tumor if angiostatin keeps it forever smaller than 1mm in diameter?) Of course, there are details, and, of course, I am willing to discuss them at far greater length. --dan - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
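A toy sketch, in Python, of the checksummed-application rule described above. The digests are placeholders, and a real enforcement point would live in the OS or network stack rather than in a script:

    import hashlib

    # Hypothetical allow-list: SHA-256 digests of the only two executables
    # permitted to initiate outbound HTTP.
    ALLOWED_HTTP_CLIENTS = {
        "placeholder-digest-of-the-browser",
        "placeholder-digest-of-the-updater",
    }

    def digest_of(executable_path: str) -> str:
        with open(executable_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def may_initiate_http(executable_path: str) -> bool:
        # "Trusted" means the binary hashes to a known value; spyware, or a
        # tampered copy of the browser, fails the check and its offload
        # channel never opens.
        return digest_of(executable_path) in ALLOWED_HTTP_CLIENTS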
Re: WYTM - "but what if it was true?"
Allan Liska wrote: 3. Use an on-screen keyboard. For extra points, try Dasher. http://www.inference.phy.cam.ac.uk/dasher/ -- >>>ApacheCon Europe<<< http://www.apachecon.com/ http://www.apache-ssl.org/ben.html http://www.thebunker.net/ "There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit." - Robert Woodruff - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
Adam Shostack wrote: On Wed, Jun 22, 2005 at 01:54:34PM +0100, Ian Grigg wrote: | A highly aspirated but otherwise normal watcher of black helicopters asked: | | > Any idea if this is true? | > (WockerWocker, Wed Jun 22 12:07:31 2005) | > http://c0x2.de/lol/lol.html | | Beats me. But what it if it was true. What's your advice to | clients? "Duuude, stop buying Dell." The Secret service isn't part of DHS. DHS seal is different. The photos don't really show that cable in a laptop, and if they do, they don't show it in a Dell laptop. Correction: The secret service IS part of DHS. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED] -- Best Regards, Lance James Secure Science Corporation www.securescience.net Author of 'Phishing Exposed' http://www.securescience.net/amazon/ Find out how malware is affecting your company: Get a DIA account today! https://slam.securescience.com/signup.cgi - it's free! - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
Ian Grigg wrote: A highly aspirated but otherwise normal watcher of black helicopters asked: Any idea if this is true? (WockerWocker, Wed Jun 22 12:07:31 2005) http://c0x2.de/lol/lol.html Beats me. But what it if it was true. What's your advice to clients? First up, it certainly is not true, the images are just ripped from here: http://www.dansdata.com/keyghost.htm To the question at hand, unless you built the hardware (or are an electrical engineer and inspected it all), you cannot fully trust it. No different than trusting that a compiler is not putting malicious code into programs it compiled unless you inspect the disassembled binary (with a disassembler you wrote, using a compiler you wrote, on hardware you built, etc.) I would however assume that if something like this were happening, it would not be on a "stick out like a sore thumb" board stuck inside a PC, it would be embedded inside a chip that is supposed to be there. -- Mark Allen Earnest Lead Systems Programmer Emerging Technologies The Pennsylvania State University Lt Commander Centre County Sheriff's Office Search and Rescue KB3LYB
Re: WYTM - "but what if it was true?"
On Wed, 22 Jun 2005, Ian Grigg wrote: A highly aspirated but otherwise normal watcher of black helicopters asked: Any idea if this is true? (WockerWocker, Wed Jun 22 12:07:31 2005) http://c0x2.de/lol/lol.html googling 'dell keylogger' certainly turns up a lot of sites who insist that this is a hoax. Beats me. But what it if it was true. What's your advice to clients? Um, be suspicious of keyboard changes? Bring your own if you're paranoid? Remove any "harmless" extension cables that suddenly attach themselves? Most server-class hardware already has 'tamper-detection' hardware that will warn you if the machine has been opened, which should cover the "internal keylogger" case... --scott COBRA JUDY FSF HTKEEPER RNC Philadelphia Treasury Flintlock NRA HTAUTOMAT Cheney AEROPLANE LIONIZER Marxist ESCOBILLA PANCHO ESQUIRE JMWAVE ( http://cscott.net/ ) - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM - "but what if it was true?"
It is most likely a hoax: http://www.boingboing.net/2005/06/16/conspiracy_theory_of.html As to your second question. There are several options available to you depending on your level of paranoia: 1. Run a personal firewall (assuming you can find one that doesn't have a trojan that talks back to the manufacturer:cough: zone alarm :cough:). 2. Monitor and review all traffic that flows from your ethernet card using Ethereal, TCPDump or some other program. 3. Use an on-screen keyboard. allan On Jun 22, 2005, at 8:54 AM, Ian Grigg wrote: A highly aspirated but otherwise normal watcher of black helicopters asked: Any idea if this is true? (WockerWocker, Wed Jun 22 12:07:31 2005) http://c0x2.de/lol/lol.html Beats me. But what it if it was true. What's your advice to clients? iang -- Advances in Financial Cryptography, Issue 1: https://www.financialcryptography.com/mt/archives/000458.html Daniel Nagy, On Secure Knowledge-Based Authentication Adam Shostack, Avoiding Liability: An Alternative Route to More Secure Products Ian Grigg, Pareto-Secure - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
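For option 2 above, a few lines of Python with the scapy library (assuming it is installed and the script runs with capture privileges) give the flavour of reviewing outbound traffic; the filter is just an example, and TCPDump or Ethereal remain the practical tools:

    from scapy.all import sniff

    def log_packet(pkt):
        # One-line summary per packet, so unexpected outbound connections
        # (e.g. a keylogger phoning home) stand out on review.
        print(pkt.summary())

    # Watch web traffic; adjust the BPF filter to whatever you consider suspicious.
    sniff(filter="tcp and (port 80 or port 443)", prn=log_packet, store=False)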
Re: WYTM - "but what if it was true?"
On Wed, Jun 22, 2005 at 01:54:34PM +0100, Ian Grigg wrote: | A highly aspirated but otherwise normal watcher of black helicopters asked: | | > Any idea if this is true? | > (WockerWocker, Wed Jun 22 12:07:31 2005) | > http://c0x2.de/lol/lol.html | | Beats me. But what it if it was true. What's your advice to | clients? "Duuude, stop buying Dell." The Secret service isn't part of DHS. DHS seal is different. The photos don't really show that cable in a laptop, and if they do, they don't show it in a Dell laptop. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Tue, 21 Oct 2003 15:02:14 +1300, Peter Gutmann said: > Are there any known servers online that offer X.509 (or PGP) mechanisms in > their handshake? Both ssh.com and VanDyke are commercial offerings so it's > not possible to look at the source code to see what they do, and I'm not sure Joel N. Weber II developed PGP patches for OpenSSH: http://www.red-bean.com/~nemo/openssh-gpg/ and I am pretty sure that he has a server up somewhere. Werner -- Werner Koch <[EMAIL PROTECTED]> The GnuPG Experts http://g10code.com Free Software Foundation Europe http://fsfeurope.org - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Thor Lancelot Simon <[EMAIL PROTECTED]> writes: >I believe the VanDyke implementation also supports X.509, and interoperates >with the ssh.com code. It was also my perception that, at the time, the >VanDyke guy was basically shouted down when trying to discuss the utility of >X.509 for this purpose and put his marbles back in his cloth sack and went >home. Are there any known servers online that offer X.509 (or PGP) mechanisms in their handshake? Both ssh.com and VanDyke are commercial offerings so it's not possible to look at the source code to see what they do, and I'm not sure that I want to run the gauntlet of getting some sample copy of a commercial app (if they're available) and figuring out how to set it up to work with certs just to see what the data format is supposed to be... Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Sun, Oct 19, 2003 at 01:42:34AM -0600, Damien Miller wrote: > On Sun, 2003-10-19 at 00:47, Peter Gutmann wrote: > > > >What was the motive for adding lip service into the document? > > > > So that it's possible to claim PGP and X.509 support if anyone's interested in > > it. It's (I guess) something driven mostly by marketing so you can answer > > "Yes" to any question of "Do you support ". You can find quite a number of > > these things present in various security specs, it's not just an SSH thing. > > I think that you are misrepresenting the problem a little. At > least one vendor (ssh.com) has a product that supports both X.509 > and PGP, so the inclusion of these in the I-D is not just marketing > overriding reality - just a lack of will on part of the the draft's > authors. I believe the VanDyke implementation also supports X.509, and interoperates with the ssh.com code. It was also my perception that, at the time, the VanDyke guy was basically shouted down when trying to discuss the utility of X.509 for this purpose and put his marbles back in his cloth sack and went home. I see lack of any chained trust mechanism as _the_ major weakness of the SSH protocol. X.509 is not exactly pleasant, but it is what has emerged as the standard for identity certificates and it is functional for that purpose, and there are many implementations available; there are even multiple implementations available for the SSH protocol. I have to regard the lack of certificate/chain-of-trust support in the SSH protocol as a highly negative result of a knee-jerk reaction to the very _mention_ of an X.500 series standard on the working group mailing list, by people who did not offer any functional alternative seemingly because they thought the laughable status quo ante -- with *no* way to validate the certificate presented by a given peer on initial contact -- was fine. It's a shame that dsniff and the other toolkits for attacking that protocol weakness did not exist at the time. Thor - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Sun, 2003-10-19 at 00:47, Peter Gutmann wrote: > >What was the motive for adding lip service into the document? > > So that it's possible to claim PGP and X.509 support if anyone's interested in > it. It's (I guess) something driven mostly by marketing so you can answer > "Yes" to any question of "Do you support <X>?". You can find quite a number of > these things present in various security specs, it's not just an SSH thing. I think that you are misrepresenting the problem a little. At least one vendor (ssh.com) has a product that supports both X.509 and PGP, so the inclusion of these in the I-D is not just marketing overriding reality - just a lack of will on the part of the draft's authors. I have seen little involvement on the secsh WG mailing list by the ssh.com people since the public spat about trademark rights over "ssh" a few years back. Since no one else implements these two public key methods, the work has never been done. IIRC the WG decided to punt the issue to a separate draft if it ever arose again. It hasn't in two years. In the meantime, everyone involved seems to have become deathly afraid of touching the draft so as not to impede its glacial progress through the IETF on its way to RFC-hood. Whether a sizeable number of customers actually use certificates for ssh is another matter. IMO the only real use for certs in ssh is the issue of initial server authentication. If one wants to use certificates to facilitate this process, they can already - just publish the server keys on an HTTPS server somewhere and/or sign them with PGP :) -d - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
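A sketch, in Python, of the "publish the server keys on an HTTPS server" idea from the last paragraph. The URL is hypothetical, and the fingerprint shown is the MD5 colon-hex form that ssh clients print on first connect:

    import base64, hashlib, urllib.request

    def published_fingerprint(url: str) -> str:
        # The operator publishes the expected fingerprint at a well-known
        # HTTPS URL, e.g. https://www.example.org/ssh_host_key.fp (hypothetical).
        return urllib.request.urlopen(url).read().decode().strip()

    def fingerprint(host_key_b64: str) -> str:
        # MD5 colon-hex fingerprint of the base64 host key blob, matching
        # what the ssh client displays when asking whether to trust a host.
        digest = hashlib.md5(base64.b64decode(host_key_b64)).hexdigest()
        return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

    # Answer "yes" to the first-connect prompt only if the two values match:
    #   fingerprint(offered_key_blob) == published_fingerprint(".../ssh_host_key.fp")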
Re: WYTM?
Ian Grigg <[EMAIL PROTECTED]> writes: >So, in reality, the spec does not specify, even if it uses the words? OK, so >there is no surprise if there is no takeup. Actually I think the main reason was that there's virtually no interest in this. >What was the motive for adding lip service into the document? So that it's possible to claim PGP and X.509 support if anyone's interested in it. It's (I guess) something driven mostly by marketing so you can answer "Yes" to any question of "Do you support <X>?". You can find quite a number of these things present in various security specs, it's not just an SSH thing. To give an example from the home court (and avoid picking on other people's designs :-), I've been advertising ECC support in my code for years. After three years of the code being present and a total of zero requests for its use, I removed it because it was a pain to maintain (I also changed the text at that point to say that it was optional/available on request). It's now been another three years and I'm still waiting for someone to say they actually want to use it. There has been the odd inquiry about potential availability where I was able to say that it's available as an option; at that point the user can fill in the appropriate checkbox in the RFP and forget about it. (Just to add a note here before people leap in with "But XYZ uses ECC crypto!", it's only really used in vertical-market apps. To use it in general you need to know how to get it into a cert (data formats, parameters, and so on), find a CA to issue you the cert, figure out how to use it with SSL or PGP or whatever, find some other implementation that agrees with what your implementation is doing, etc etc etc. This is why there's so little interest, not because of some conspiracy to suppress ECCs. For a more general discussion of this problem, see "Final Thoughts" in the Crypto Gardening Guide). Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Damien Miller <[EMAIL PROTECTED]> writes: >The SSH protocol supports certificates (X.509 and OpenPGP), though most >implementations don't. One of the reason why many implementations may not support it is that the spec is completely ambiguous as to the data formats being used. For example it specifies the signature blob format as "an X.509 signature", which could be about half a dozen different things. Same with PGP signatures, for which there's even more possibilities. In addition since almost nothing implements them, it's not possible to get test data from someone else's server to see what they're doing (hmm, and even if there was there's no way to tell whether their interpretation would match someone else's). Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Fri, 2003-10-17 at 00:58, John S. Denker wrote: > Tangentially-related point about credentials: > > In a previous thread the point was made that > anonymous or pseudonymous credentials can only > say positive things. That is, I cannot discredit > you by giving you a discredential. You'll just > throw it away. If I somehow discredit your > pseudonym, you'll just choose another and start > over. > > This problem can be alleviated to some extent > if you can post a fiduciary bond. Then if you > do something bad, I can demand compensation from > the agency that issued your bond. If this > happens a lot, they may revoke your bond. That > is, you can be discredited by losing a credential. > > This means I can do business with you without > knowing your name or how to find you. I just > need to trust the agency that issued your bond. > The agency presumably needs to know a lot about > you, but I don't. One can claim this is what a credit card does for the consumer. The name on the card is somewhat tangential to it being a credential; it is there so that the merchant can authenticate the credential by cross-checking the name on the card with names on other credentials that you might be carrying. If you have enough credentials with the same name ... then it eventually satisfies the merchant that it is your credential. Some number of places are taking the name off the card as part of improving consumer privacy at point-of-sale. They can do this with debit, where the PIN is a substitution for otherwise proving it is your credential. However, as previously posted, there is a lot of skimming going on, capturing both the information for making a counterfeit card and the corresponding PIN. This is also being done with some kinds of chip cards where a PIN is involved, but since the infrastructure "trusts" the cards, the counterfeit cards are programmed to accept any PIN; see the "yes card" at the bottom of the following URL. http://www.smartcard.co.uk/resources/articles/cartes2002.html The issue is that the technique used to skim static data for making counterfeit magstripe cards also applies to skimming static data for making counterfeit "yes cards". The claim in X9.59 is that the signature from something like an Asuretee card ... can both demonstrate two (or three) factor authentication as well as prove that the transaction hasn't been tampered with since it was signed. In this case, while the card may still look like an (offline) credential from the pre-1970s (with printed credential revocation lists mailed out every month to all merchants), it in fact does an online transaction. The digital signature proving 2/3 factor authentication (and no transaction tampering during transit) is then accepted (or not) by the financial institution, which reports back a real-time result to the relying party (merchant). This is a move from the ancient offline paradigm that has been going on for hundreds of years (with credentials as a substitute for real-time interaction) to an online paradigm. While the form-factor may still appear the same as the rapidly becoming obsolete offline credential, it is actually operating as a long-distance 2/3 factor authentication mechanism between the consumer and their financial institution, with the merchant/relying-party getting back a real-time response as to whether the institution stands behind the request.
The difference between the X9.59/Asuretee implementation and the "yes card" implementation is that there is no static data to skim (and use for creating counterfeit cards/transactions). misc. x9.59 refs: http://www.garlic.com/~lynn/index.html#x959 misc. aads chip strawman & asuretee refs: http://www.garlic.com/~lynn/index.html#aads The integrity of the chipcard and the integrity of the digital signature substitute for requiring the merchants to cross-check the name on the card with the names on an arbitrary number of other "credentials" until they are comfortable performing the transaction. The current (non-PIN card) infrastructure is sort of halfway between the old-style "everything is a credential" approach and the new "everything is online" fully trusted online infrastructure. The magstripe does an online transaction and the institution will approve the transactions with some number of caveats regarding it not being a counterfeit/fraudulent transaction. For the non-PIN transactions, the merchant (optionally) uses the name on the card to cross-check with as many other credential names as necessary until the merchant becomes comfortable. This is similar to the scenario with the existing SSL domain name certificate issuing process (using names mapping to common/real-world identities in order to achieve authentication). The domain name system registers the owner's name. The CA SSL certificate issuer obtains the name of the certificate requester, and then the CA attempts to map the two names into the same real-world identities as a means of authentication.
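A rough illustration, in Python with the cryptography package, of the no-static-data point: the card signs the specific transaction and the issuer verifies it online, so there is nothing reusable for a skimmer to capture. This is only a sketch of the idea, not the actual X9.59 message formats:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Key pair held inside the (hypothetical) chip card; the private key never leaves it.
    card_key = ec.generate_private_key(ec.SECP256R1())
    issuer_copy_of_public_key = card_key.public_key()  # registered with the bank

    transaction = b"merchant=12345 amount=49.95 currency=USD nonce=8f2a17"
    signature = card_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # The issuer verifies in real time; altering any field invalidates the
    # signature, and a skimmer who records it gains nothing reusable.
    issuer_copy_of_public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))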
Re: WYTM?
On Mon, 2003-10-13 at 20:27, Ian Grigg wrote: > The situation is so ludicrously unbalanced, that if > one really wanted to be serious about this issue, > instead of dismissing certs out of hand (which would > be the engineering approach c.f., SSH), one would > run ADH across the net and wait to see what happened. I don't think that this is an accurate characterisation of the situation wrt SSH. The SSH protocol supports certificates (X.509 and OpenPGP), though most implementations don't. Around a year ago, Markus Friedl posted patches to enable X.509 certs for OpenSSH, but there was little interest. Also, SSH is somewhere between the two extremes of ADH and the PKIish hierarchical trust. Protocol 2 uses DH, so you have the PFS properties, but most implementations offer better opportunities for key verification than the popular SSL implementations (in web browsers). E.g. I don't recall a web browser offering a fingerprint for a private key, except behind a number of confusing dialogs, nor presenting me with ALL CAPS warnings when webservers change their keys. -d - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On 10/16/2003 07:19 PM, David Honig wrote: > > it would make sense for the original vendor website (eg Palm) > to have signed the "MITM" site's cert (palmorder.modusmedia.com), > not for Verisign to do so. Even better, for Mastercard to have signed > both Palm and palmorder.modusmedia.com as well. And Mastercard to > have printed its key's signature in my monthly paper bill. Bravo. Those are golden words. Let me add my few coppers: 1) This makes contact with a previous thread wherein the point was made that people often unwisely talk about identities when they should be talking about credentials aka capabilities. I really don't care about the identity of the order-taking agent (e.g. palmorder.modusmedia.com). What I want to do is establish the *credentials* of this *session*. I want a session with the certified capability to bind palm.com to a contract, and the certified capability to handle my credit-card details properly. 2) We see that threat models (as mentioned in the Subject: line of this thread), while an absolutely vital part of the story, are not the whole story. One always needs a push-pull approach, documenting the good things that are supposed to happen *and* the bad things that are supposed to not happen (i.e. threats). 3) To the extent that SSL focuses on IDs rather than capabilities, IMHO the underlying model has room for improvement. 4a) This raises some user-interface issues. The typical user is not a world-class cryptographer and may not have a clear idea just what ensemble of credentials a given session ought to have. This is not a criticism of credentials; the user doesn't know what ID the session ought to have under the current system, as illustrated by the Palm example. The point is that if we want something better than what we have now, we have a lot of work to do. 4b) As a half-baked thought: One informal intuitive notion that users have is that if a session displays the MasterCard *logo* it must be authorized by MasterCard. This notion is enforceable by law in the long run. Can we make it enforceable cryptographically in real time? Perhaps the CAs should pay attention not so much to signing domain names (with some supposed responsibility to refrain from signing abusively misspelled names e.g. pa1m.com) but rather more to signing logos (with some responsibility to not sign bogus ones). Then the browser (or other user interface) should verify -- automatically -- that a session that wishes to display certain logos can prove that it is authorized to do so. If the logos check out, they should be displayed in some distinctive way so that a cheap facsimile of a logo won't be mistaken for a cryptologically verified logo. Even if you don't like my half-baked proposal (4b) I hope we can all agree that the current ID-based system has room for improvement. = Tangentially-related point about credentials: In a previous thread the point was made that anonymous or pseudonymous credentials can only say positive things. That is, I cannot discredit you by giving you a discredential. You'll just throw it away. If I somehow discredit your pseudonym, you'll just choose another and start over. This problem can be alleviated to some extent if you can post a fiduciary bond. Then if you do something bad, I can demand compensation from the agency that issued your bond. If this happens a lot, they may revoke your bond. That is, you can be discredited by losing a credential. This means I can do business with you without knowing your name or how to find you. 
I just need to trust the agency that issued your bond. The agency presumably needs to know a lot about you, but I don't. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Hopefully everyone realizes this, but just for the record, I didn't write the lines apparently attributed to me below -- I was quoting Bruce Schneier. By the way, I strongly agree with David Honig's point that the wrong entities are doing the signing. Regards, Bryce O'Whielacronx David Honig <[EMAIL PROTECTED]> wrote: > > At 01:51 PM 10/16/03 -0400, Bryce O'Whielacronx wrote: > > I doubt it. It's true that VeriSign has certified this > man-in-the-middle > > attack, but no one cares. > > Indeed, it would make sense for the original vendor website (eg Palm) > to have signed the "MITM" site's cert (palmorder.modusmedia.com), > not for Verisign to do so. Even better, for Mastercard to have signed > both Palm and palmorder.modusmedia.com as well. And Mastercard to > have printed its key's signature in my monthly paper bill. > > > (This is aside your main point about it being Mastercard et al. > doing the checking/backup for the customer, not certs.) > > > > > - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
At 01:51 PM 10/16/03 -0400, Bryce O'Whielacronx wrote: > I doubt it. It's true that VeriSign has certified this man-in-the-middle > attack, but no one cares. Indeed, it would make sense for the original vendor website (eg Palm) to have signed the "MITM" site's cert (palmorder.modusmedia.com), not for Verisign to do so. Even better, for Mastercard to have signed both Palm and palmorder.modusmedia.com as well. And Mastercard to have printed its key's signature in my monthly paper bill. (This is aside your main point about it being Mastercard et al. doing the checking/backup for the customer, not certs.) - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
I am very much enjoying the discussion about threat models, web stores, etc. I'm interested to see a continual influx of spoofed e-mail from e-gold.com in my inbox, instructing me to click here to verify the safety of my account. Here is a good rant from Schneier's "Secrets and Lies". From Chapter 15, "Certificates and Credentials", section "PKIs On The Internet" (page 238). I will quote here the entire section. The first couple of paragraphs are old hat to this audience, but if you haven't read this before then read it now. Regards, Zooko """ PKIS ON THE INTERNET Most people's only interaction with a PKI is using SSL. SSL secures web transactions, and sometimes PKI vendors point to it as enabling technology for electronic commerce. This argument is disingenuous; no one is turned away at an online merchant for not using SSL. SSL does encrypt credit card transactions on the Internet, but it is not the source of security for the participants. That security comes from credit card company procedures, allowing a consumer to repudiate any line item charge before paying the bill. SSL protects the consumer from eavesdroppers, it does not protect against someone breaking into the Web site and stealing a file full of credit card numbers, nor does it protect against a rogue employee at the merchant harvesting credit card numbers. Credit card company procedures protect against those threats. PKIs are supposed to provide authentication, but they don't even do that. Example one: the company F-Secure (formerly Data Fellows) sells software from its Web site at www.datafellows.com. If you click to buy software, you are redirected to the Web site www.netsales.net, which makes an SSL connection with you. The SSL certificate was issued to "NetSales, Inc., Software Review LLC" in Kansas. F-Secure is headquartered in Helsinki and San Jose. By any PKI rules, no one should do business with this site. The certificate received is not from the same company that sells the software. This is exactly what a man-in-the-middle attack looks like, and exactly what PKI is supposed to prevent. Example two: I visited www.palm.com to purchase something for my PalmPilot. When I went to the online checkout, I was redirected to https://palmorder.modusmedia.com/asp/store.asp. The SSL certificate was registered to Modus Media International; clearly a flagrant attempt to defraud Web customers, which I deftly uncovered because I carefully checked the SSL certificate. Not. Has anyone ever sounded the alarm in these cases? Has anyone not bought online products because the name of the certificate didn't match the name on the Web site? Has anyone but me even noticed? I doubt it. It's true that VeriSign has certified this man-in-the-middle attack, but no one cares. I made my purchases anyway, because the security comes from credit card rules, not from the SSL. My maximum liability from a stolen card is $50, and I can repudiate a transaction if a fraudulent merchant tries to cheat me. As it is used, with the average user not bothering to verify the certificates exchanged and no revocation mechanism, SSL is just simply a (very slow) Diffie-Hellman key-exchange method. Digital certificates provide no actual security for electronic commerce; it's a complete sham. """ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
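The check Schneier describes (looking at who the certificate was actually issued to) can be automated in a few lines of Python; this sketch uses the modern standard library, and the hostname in the comment is the one from the quote and may no longer resolve:

    import socket, ssl

    def certificate_subject(host: str, port: int = 443) -> dict:
        # Connect with a verifying TLS context and return the subject fields
        # of the certificate the server actually presents.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return dict(field[0] for field in cert["subject"])

    # e.g. certificate_subject("palmorder.modusmedia.com") would have shown a
    # subject belonging to Modus Media, not Palm: exactly the mismatch above.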
Re: WYTM?
Ian Grigg <[EMAIL PROTECTED]> writes: > So to say that ITM is consensus is something > that is going to have to be established. Most comsec people I know subscribe to it. I don't have a study to show it. > In this case, the ITM was a) agreed upon after > the fact to fill in the hole I don't know what this means. If you'd asked a bunch of comsec people what the appropriate threat model for SSL was, they would have given you something very much like SSL's. > > > (Actually, I'm not sure what SSH pops up, it's > > > never popped up anything to me? Are you talking > > > about a windows version?) > > SSH in terminal mode says: > > > > "The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be > > established. > > RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f. > > Are you sure you want to continue connecting (yes/no)? " > > > > I actually find the Firebird popup vastly more understandable > > and helpful. > > > I'm not sure I can make much of your point, > as I've never heard of nor seen a Firebird? What, you've never heard of Google? Firebird is effectively slimmed-down Mozilla. The Mozilla dialog is somewhat more aggressive. -Ekr -- Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Jon Snader wrote: > I don't understand this. Let's suppose, for the > sake of argument, that MitM is impossible. It's > still trivially easy to make a fake site and harvest > sensitive information. If we assume (perhaps erroneously) > that all but the most naive user will check that they > are talking to a ``secure site'' before they type in > that credit card number, doesn't the cert provide assurance > that you're talking to whom you think you are? It's not *that* difficult to obtain a certificate for something involving a well-known brand. The certificate generation process appears to be fully automated, and we know that it has already failed. Furthermore, the certificate says nothing about the contents of the site. You can register something like REFRESH-ACCOUNT.COM and collect passwords using an eBay or AOL imitation, and none of the SSL CAs will refuse to certify your key material for use with REFRESH-ACCOUNT.COM. So why do we see so little fraud involving HTTPS sites? I'd guess that's because the current social engineering tactics are effective without the "https://" mark. Most users look for assurances of their privacy, and if the web site says "128 bit encrypted", they feel safe, independent of the actual transport channel. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Ian Grigg wrote: Eric Rescorla wrote: Ian Grigg <[EMAIL PROTECTED]> writes: I actually find the Firebird popup vastly more understandable and helpful. I'm not sure I can make much of your point, as I've never heard of nor seen a Firebird? I believe he's talking about Mozilla Firebird... which is the project that is splitting the Mozilla (everything including the kitchen sink) browser into separate e-mail (Thunderbird) and web browser (Firebird) applications. http://www.mozilla.org/products/firebird/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Jon Snader wrote: > > On Mon, Oct 13, 2003 at 06:49:30PM -0400, Ian Grigg wrote: > > Yet others say "to be sure we are talking > > to the merchant." Sorry, that's not a good > > answer either because in my email box today > > there are about 10 different attacks on the > > secure sites that I care about. And mostly, > > they don't care about ... certs. But they > > care enough to keep doing it. Why is that? > > > > I don't understand this. Let's suppose, for the > sake of argument, that MitM is impossible. It's > still trivially easy to make a fake site and harvest > sensitive information. Yes. This is the attack that is going on. This is today's threat. (In that it is a new threat. The old threat still exists - hack the node.) > If we assume (perhaps erroneously) > that all but the most naive user will check that they > are talking to a ``secure site'' before they type in > that credit card number, doesn't the cert provide assurance > that you're talking to whom you think you are? Nope. It would seem that only the more sophisticated users can be relied upon to correctly check that they are at the correct secure site. In practice almost all of these attacks bypass any cert altogether and do not use an SSL protected HTTPS site. They use a variety of techniques to distract the attention of the user, some highly imaginative. For example, if you target the right browser, then it is possible to pop up a box that covers the appropriate parts. Or to put a display inside the window that duplicates the browser display. Or the URL is one of those with strange features in there or funny letters that look like something else. In practice, these attacks are all statistical, they look close enough, and they fool some of the people some of the time. Finally, just in the last month, they have also started doing actual cert spoofs. This was quite exciting to me to see a spoof site using a cert, so I went in and followed it. Hey presto, it showed me the cert, as it said it was wrong! So I clicked on the links and tried to see what was wrong. Here's the interesting thing: I couldn't easily tell, and my first diagnosis was wrong. So then I realised that *even* if the spoof is using a cert, the victim falls to a confusion attack (see Tom Weinstein's comments on bad GUIs). (But, for the most part, 95% or so ignore the cert, and the user may or may not notice.) Now, we have no statistics on how many of these attacks work, other than the following: they keep happening, and with increasing frequency over time. From this I conclude they are working, enough to justify the cost of the attack at least. I guess the best thing to say is that the raw claim that the cert ensures that you are talking to the merchant is not 100% true. It will help a sophisticated user. An attack will bypass some of the users a lot. It might fool many of the users only occasionally. > If the argument is that Verisign and the others don't do > enough checking before issuing the cert, I don't see > how that somehow means that SSL is flawed. SSL isn't flawed, per se. It's just not appropriately being used in the secure browser application. It's fair to say that its use is misaligned to requirements, and a lot of things could be done to improve matters. But, one of the perceptions that exist in the browser world is that SSL secures ecommerce. Until that view is rectified, we can't really build the consensus to have efforts like Ye & Smith, and Close, and others, be treated as serious and desirable. 
(In practice, I don't think it matters how Verisign and others check the cert. This is shown by the fact that almost all of these attacks have bypassed the cert altogether.) iang http://www.iang.org/ssl/maginot_web.html - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Mon, Oct 13, 2003 at 10:27:45PM -0400, Ian Grigg wrote: > The situation is so ludicrously unbalanced, that if > one really wanted to be serious about this issue, > instead of dismissing certs out of hand (which would > be the engineering approach c.f., SSH), one would > run ADH across the net and wait to see what happened. > > Or, spit credit cards in open HTTP, and check how > many were tried by credit card snafflers. You might > be waiting a long time :-) But, that would be a > serious way for credit card companies to measure > whether they care one iota about certs or even > crypto at all. > You're probably right about waiting a long time, but might that be because trying to sniff credit card numbers is not worth it? Not worth it because virtually everyone uses SSL when making on-line purchases. If everyone stopped using SSL, would we not expect to see an increase in credit card sniffing? Since, as you say, sniffing on the wire is harder than compromising the end nodes, the bad guys naturally go after the low hanging fruit, especially since a great deal of the ``interesting'' traffic is cryptographically protected (or at least hardened). *Of course* SSL isn't a complete security solution, but it is effective in solving part of the problem; perhaps so well that it makes it appear as if the problem doesn't exist. jcs - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
On Mon, Oct 13, 2003 at 06:49:30PM -0400, Ian Grigg wrote: > Yet others say "to be sure we are talking > to the merchant." Sorry, that's not a good > answer either because in my email box today > there are about 10 different attacks on the > secure sites that I care about. And mostly, > they don't care about ... certs. But they > care enough to keep doing it. Why is that? > I don't understand this. Let's suppose, for the sake of argument, that MitM is impossible. It's still trivially easy to make a fake site and harvest sensitive information. If we assume (perhaps erroneously) that all but the most naive user will check that they are talking to a ``secure site'' before they type in that credit card number, doesn't the cert provide assurance that you're talking to whom you think you are? If the argument is that Verisign and the others don't do enough checking before issuing the cert, I don't see how that somehow means that SSL is flawed. jcs - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Ian Grigg wrote: Cryptography is a special product, it may appear to be working, but that isn't really good enough. Coincidence would lead us to believe that clear text or ROT13 were good enough, in the absence of any attackers. For this reason, we have a process. If the process is not followed, then coincidence doesn't help to save our bacon. It has to follow, for it to be valuable. If it doesn't follow, to treat it as anything other than a mere coincidence to be dismissed out of hand is leading us on to make other errors. I think that Matt Blaze said it fairly well. There are some security practices that in the recent past are now considered appalling. It's time to be a little bit appalled, and to recognise SSL for what it is - a job that survived not on its cryptographic merits, but through market and structural conditions at the time. SSL/TLS is not a complete security solution. It is a building block. It is a protocol for communication between two end points. As such, its threat model deals with threats involving that communication. It does not deal with the security of the end point, because if you can compromise the machine that the software trying to communicate is running on, then no protocol can provide you with any level of security. You might choose to argue that a communications protocol is not what we need, but that would have nothing to do with the threat model that SSL/TLS is designed around. It seems what you're criticizing here is the Netscape and Microsoft client/server HTTPS-based security solutions for electronic commerce. These are certainly built using SSL/TLS as a building block, but criticisms of their design have very little relevance for SSL/TLS itself. Here's specifically what the server does: When it is installed, it doesn't also install and start up the SSL server. You know that page that has the feather on? It should also start up on the SSL side as well, perhaps with a different colour. Specifically, when you install the server, it should create a self-signed certificate and use it. Straight away. No questions asked. Then, it becomes an administrator issue to replace that with a custom signed one, if the admin guy cares. This really has nothing to do with TLS. If you don't like the installation process for Apache, you could fix it and send the patches back, or you could write your own web server. There should be no dialogue at all. Going from HTTP to HTTPS/self signed is a mammoth increase in security. Why does the browser say it is less/not secure? Further, the popups are a bad way to tell the user what the security level is. The user can't grok them and easily mucks up on any complex questions. There needs to be a security display on the secured area that is more prominent and also more graded (caching numbers) than the current binary lock symbol. The security UI for netscape/mozilla has always been terrible. IMHO, designing a user-friendly UI for crypto stuff that doesn't compromise security has been (and continues to be) the greatest obstacle to getting people to use this stuff. -- Give a man a fire and he's warm for a day, but set | Tom Weinstein him on fire and he's warm for the rest of his life. | [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
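The "create a self-signed certificate at install time" step quoted above is mechanically simple; here is a sketch using Python's cryptography package, where the hostname, validity period and file names are placeholders an installer would fill in:

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a key and a self-signed certificate. Straight away, no questions asked.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.org")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # The installer drops these where the web server expects them; an
    # administrator can replace them with a CA-signed pair later, if they care.
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption()))
    with open("server.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))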
Re: WYTM?
Tim Dierks wrote: > > At 12:28 AM 10/13/2003, Ian Grigg wrote: > >Problem is, it's also wrong. The end systems > >are not secure, and the comms in the middle is > >actually remarkably safe. > > I think this is an interesting, insightful analysis, but I also think it's > drawing a stronger contrast between the real world and the Internet threat > model than is warranted. > > It's true that a large number of machines are compromised, but they were > generally compromised by malicious communications that came over the > network. If correctly implemented systems had protected these machines from > untrustworthy Internet data, they wouldn't have been compromised. The point is, any compromise of any system is more likely to come from a node compromise than a wire compromise. How much more likely? We don't know for sure, but I'd say it is many thousands of times as much. E.g., look at those statistics. Basically, the wire threat is unmeasurable - there are no stats that I've ever seen, and the node compromise is the subject of some great scrutiny, not to mention 13,000 odd Linux reinstalls every month. Does it mean that we should ignore the wire threat? No, but it does mean that we are foolish to let any protection of the wire threat cause us any grief. Protecting against any wire attack is fun, but no more than that - if it costs us a dime, it needs to be justified, and that is really hard given that we are thousands of times more likely to see a compromise on the node. If we spend 10c protecting against the wire attack, should we then spend $1,300 protecting against the node attack? The situation is so ludicrously unbalanced, that if one really wanted to be serious about this issue, instead of dismissing certs out of hand (which would be the engineering approach c.f., SSH), one would run ADH across the net and wait to see what happened. Or, spit credit cards in open HTTP, and check how many were tried by credit card snafflers. You might be waiting a long time :-) But, that would be a serious way for credit card companies to measure whether they care one iota about certs or even crypto at all. > Similarly, the statement is true at large (many systems are compromised), > but not necessarily true in the small (I'm fairly confident that my SSL > endpoints are not compromised). This means that the threat model is valid > for individuals who take care to make sure that they comply with its > assumptions, even if it may be less valid for the Internet at large. If the threat model is valid for individuals who happen to understand what all this means, then by all means they should use the resultant security model. I don't think that anyone is saying that people can't use SSL in its current recommended form. Just that more people would use SSL if software didn't push them in the direction of using overly fraught security levels. > And it's true that we define the threat model to be as large as the problem > we know how to solve: we protect against the things we know how to protect > against, and don't address problems at this level that we don't know how to > protect against at this level. (See my first reply to Erik, where I quoted two sections, earlier today.) We protect against things which are cost-effective to protect against. That is, we use risk analysis to work out the costs v. the benefits. We know how to protect against an awful lot. We simply don't, unless the cost is less than the benefit, in general. And, this is the point: SSL protected against the MITM because it could. 
Not because it was present as a threat, and not because it was cost-effective. It was infamously and deplorably weak security logic; what it should do is protect against things that are a threat, and for a cost that matches the threat. > So, I disagree: I don't think that the SSL model is wrong: it's the right > model for the component of the full problem it looks to address. And I > don't think that the Internet threat model has failed to address the > problem of host compromise: the fact is that these host compromises > resulted, in part, from the failure of operating systems and other software > to adequately protect against threats described in the Internet threat > model: namely, that data coming in over the network cannot be trusted. > > That doesn't change the fact that we should worry about the risk in > practice that those assumptions of endpoint security will not hold. It's about relative risks - I'm not saying that SSL should protect the node. What I'm saying is that it is ludicrous to worry overly much about the risk that SSL deals with - the ITM, supposedly - in most practical environments, because that's not where the trouble lies. Another analogy: Soldiers don't carry umbrellas into battle. But it does rain! The reasoning is simple - unless the umbrella is *free*, it's ludicrous to worry about water when someone is shooting bullets at you. We do a risk-analysis on the umbrella, and we dismiss it.
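To make the proportionality argument above concrete, here is a toy expected-loss calculation; every figure in it is an assumption invented for illustration, not a measurement from the thread:

    # Toy expected-loss comparison; every number below is an invented assumption.
    p_wire = 0.000001        # assumed chance per user-year of a wire (sniffing/MITM) compromise
    p_node = 0.01            # assumed chance per user-year of an end-node compromise
    loss = 500.0             # assumed loss per incident, in dollars

    expected_wire_loss = p_wire * loss
    expected_node_loss = p_node * loss

    # If spending tracked expected loss, the node side would get this many times
    # the wire side's budget (10,000x with these made-up numbers).
    ratio = expected_node_loss / expected_wire_loss
    print(f"wire: ${expected_wire_loss:.4f}  node: ${expected_node_loss:.2f}  ratio: {ratio:,.0f}x")

The numbers are arbitrary; the shape of the argument is what matters: whatever the real probabilities are, spending should track them.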
Re: WYTM?
Eric Rescorla wrote: > > Ian Grigg <[EMAIL PROTECTED]> writes: > > I'm sorry, but, yes, I do find great difficulty > > in not dismissing it. Indeed being other than > > dismissive about it! > > > > Cryptography is a special product, it may > > appear to be working, but that isn't really > > good enough. Coincidence would lead us to > > believe that clear text or ROT13 were good > > enough, in the absence of any attackers. > > > > For this reason, we have a process. If the > > process is not followed, then coincidence > > doesn't help to save our bacon. > Disagree. Once again, SSL meets the consensus threat > model. It was designed that way partly unconsciously, > partly due to inertia, and partly due to bullying by > people who did have the consensus threat model in mind. (If you mean that the ITM is consensus, I grant you that two less successful protocols follow it - S/MIME and IPSec (partly) - but I don't think that makes it consensus. I know there are a lot of people who don't think in any other terms than this model, and that is the issue! There are also a lot of people who think in terms completely opposed to ITM. So to say that ITM is consensus is something that is going to have to be established. If that's not what you mean, can you please define?) > That's not the design process I would have liked, > but it's silly to say that a protocol that matches > the threat model is somehow automatically the wrong > thing just because the designers weren't as conscious > as one would have liked. I'm not sure I ever said that the protocol doesn't match the threat model - did I? What I should have said and hoped to say was that the protocol doesn't match the application. I don't think I said "automatically," either. I did hold out hope in that rant of mine that the designers could have accidentally got it right. But, they didn't. Now, SSL, by itself, within the bounds of the ITM is actually probably pretty good. By all reports, if you want ITM, then SSL is your best choice. But, we have to be very careful to understand that any protocol has a given set of characteristics, and its applicability to an application is an uncertain thing; hence the process of the threat model and the security model. In SSL's case, one needs to say "use SSL, but only if your threat model is close to ITM." Or similar. Hence the title of this rant. The error of the past has been that too many people have said something like "Use SSL, because we already got it right." Which, unfortunately, skips the whole issue of what threat model one is dealing with. Just like happened with secure browsing. In this case, the ITM was a) agreed upon after the fact to fill in the hole, and b) not the right one for the application. > > > And on the client side the user can, of course, click "ok" to the "do > > > you want to accept this cert" dialog. Really, Ian, I don't understand > > > what it is you want to do. Is all you're asking for to have that > > > dialog worded differently? > > > > > > There should be no dialogue at all. Going from > > HTTP to HTTPS/self signed is a mammoth increase > > in security. Why does the browser say it is > > less/not secure? > Because it's giving you a chance to accept the certificate, > and letting you know in case you expected a real cert that > you're not getting one. My interpretation - which you won't like - is that it is telling me that this certificate is bad, and asking me whether I am sure I want to do this. A popup is synonymous with bad news. It shouldn't be used for good news.
As a general theme, that is, although this is the reason I cited that paper: others have done work on this and they are a long way ahead in their thinking, far beyond me. > > > It's not THAT different from what > > > SSH pops up. > > > > > > (Actually, I'm not sure what SSH pops up, it's > > never popped up anything to me? Are you talking > > about a windows version?) > SSH in terminal mode says: > > "The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be established. > RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f. > Are you sure you want to continue connecting (yes/no)? " > > I actually find the Firebird popup vastly more understandable > and helpful. I'm not sure I can make much of your point, as I've never heard of nor seen a Firebird? iang - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Ian Grigg <[EMAIL PROTECTED]> writes: > I'm sorry, but, yes, I do find great difficulty > in not dismissing it. Indeed being other than > dismissive about it! > > Cryptography is a special product, it may > appear to be working, but that isn't really > good enough. Coincidence would lead us to > believe that clear text or ROT13 were good > enough, in the absence of any attackers. > > For this reason, we have a process. If the > process is not followed, then coincidence > doesn't help to save our bacon. Disagree. Once again, SSL meets the consensus threat model. It was designed that way partly unconsciously, partly due to inertia, and partly due to bullying by people who did have the consensus threat model in mind. That's not the design process I would have liked, but it's silly to say that a protocol that matches the threat model is somehow automatically the wrong thing just because the designers weren't as conscious as one would have liked. > (No, it's a double-lock-in, or maybe more. It's > a complex interrelated scenario.) > > Here's specifically what the server does: When > it is installed, it doesn't also install and > start up the SSL server. You know that page > that has the feather on? It should also start > up on the SSL side as well, perhaps with a > different colour. > > Specifically, when you install the server, it > should create a self-signed certificate and use > it. Straight away. No questions asked. I would hardly characterize "Failure to do something Ian wants done automatically" as "lock-in". It's not like it takes a genius to type "make cert". You'd get a lot less argument from me if you'd tone down the hyperbole a bit. > > And on the client side the user can, of course, click "ok" to the "do > > you want to accept this cert" dialog. Really, Ian, I don't understand > > what it is you want to do. Is all you're asking for to have that > > dialog worded differently? > > > There should be no dialogue at all. Going from > HTTP to HTTPS/self signed is a mammoth increase > in security. Why does the browser say it is > less/not secure? Because it's giving you a chance to accept the certificate, and letting you know in case you expected a real cert that you're not getting one. > > It's not THAT different from what > > SSH pops up. > > > (Actually, I'm not sure what SSH pops up, it's > never popped up anything to me? Are you talking > about a windows version?) SSH in terminal mode says: "The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be established. RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f. Are you sure you want to continue connecting (yes/no)? " I actually find the Firebird popup vastly more understandable and helpful. -Ekr -- [Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
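The SSH prompt quoted above is the visible half of a very simple mechanism: record the host key's fingerprint on first contact, then compare silently on every later connection and only complain if it changes. A bare-bones sketch of that trust-on-first-use check follows; the cache file name and the hash choice are illustrative stand-ins, not what SSH itself uses:

    import hashlib
    import json
    import os

    KNOWN_HOSTS = "known_hosts.json"   # illustrative cache file

    def fingerprint(public_key: bytes) -> str:
        # SSH has its own fingerprint formats; a SHA-256 hex digest stands in here.
        return hashlib.sha256(public_key).hexdigest()

    def check_host(host: str, public_key: bytes) -> bool:
        """Trust on first use: remember a host's key the first time it is seen,
        then treat any later change, not the first sighting, as the alarm."""
        cache = {}
        if os.path.exists(KNOWN_HOSTS):
            with open(KNOWN_HOSTS) as f:
                cache = json.load(f)
        fp = fingerprint(public_key)
        if host not in cache:
            cache[host] = fp               # first contact: record it (or ask the user once)
            with open(KNOWN_HOSTS, "w") as f:
                json.dump(cache, f)
            return True
        return cache[host] == fp           # later contacts: silent unless the key changed

The design choice being argued over in the thread is exactly this: one question (at most) on first contact, and silence thereafter, versus a warning dialog on every self-signed connection.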
Re: WYTM?
Eric Rescorla wrote: > > Ian Grigg <[EMAIL PROTECTED]> writes: > > > It's really a mistake to think of SSL as being designed > > > with an explicit threat model. That just wasn't how the > > > designers at Netscape thought, as far as I can tell. > > > > > > Well, that's the sort of confirmation I'm looking > > for. From the documents and everything, it seems > > as though the threat model wasn't analysed, it was > > just picked out of a book somewhere. Or, as you > > say, even that is too kind, they simply didn't > > think that way. > > > > But, this is a very important point. It means that > > when we talk about secure browsing, it is wrong to > > defend it on the basis of the threat model. There > > was no threat model. What we have is an accident > > of the past. > > Maybe so, but it coincides relatively well with the > common Internet threat model, so I think you can't > just dismiss that out of hand as if it were pulled > out of the air. I'm sorry, but, yes, I do find great difficulty in not dismissing it. Indeed being other than dismissive about it! Cryptography is a special product, it may appear to be working, but that isn't really good enough. Coincidence would lead us to believe that clear text or ROT13 were good enough, in the absence of any attackers. For this reason, we have a process. If the process is not followed, then coincidence doesn't help to save our bacon. It has to follow, for it to be valuable. If it doesn't follow, to treat it as anything other than a mere coincidence to be dismissed out of hand is leading us on to make other errors. I think that Matt Blaze said it fairly well. There are some security practices that in the recent past are now considered appalling. It's time to be a little bit appalled, and to recognise SSL for what it is - a job that survived not on its cryptographic merits, but through market and structural conditions at the time. > > > Incidentally, Ian, I'd like to propose a counterargument > > > to your argument. It's true that most web traffic > > > could be encrypted if we had a more opportunistic key > > > exchange system. But if there isn't any substantial > > > sniffing (i.e. the wire is secure) then who cares? > > > > > > Exactly. Why do I care? Why do you care? > > > > It is mantra in the SSL community and in the > > browsing world that we do care. That's why > > the software is arranged in a a double lock- > > in, between the server and the browser, to > > force use of a CA cert. > > You keep talking about the server locking you in, but it doesn't. (No, it's a double-lock-in, or maybe more. It's a complex interrelated scenario.) Here's specifically what the server does: When it is installed, it doesn't also install and start up the SSL server. You know that page that has the feather on? It should also start up on the SSL side as well, perhaps with a different colour. Specifically, when you install the server, it should create a self-signed certificate and use it. Straight away. No questions asked. Then, it becomes an administrator issue to replace that with a custom signed one, if the admin guy cares. > The world is full of people who run SSL servers with self-signed > certs. Right. I'm looking to improve those numbers, my guess would be 10-fold is not unreasonable. > And on the client side the user can, of course, click "ok" to the "do > you want to accept this cert" dialog. Really, Ian, I don't understand > what it is you want to do. Is all you're asking for to have that > dialog worded differently? There should be no dialogue at all. 
Going from HTTP to HTTPS/self signed is a mammoth increase in security. Why does the browser say it is less/not secure? Further, the popups are a bad way to tell the user what the security level is. The user can't grok them and easily mucks up on any complex questions. There needs to be a security display on the secured area that is more prominent and also more graded (caching numbers) than the current binary lock symbol. There has been some research in this area; I think it was Sean Smith (Dartmouth College) who posted on this subject. Yes, here it is: From: Sean Smith <[EMAIL PROTECTED]> > Or, if we should bother to secure it, shouldn't > we mandate the security model as applying to the > browser as well? Exactly. That was the whole point of our Usenix paper last year: E. Ye, S.W. Smith. ``Trusted Paths for Browsers.'' 11th Usenix Security Symposium. August 2002 http://www.cs.dartmouth.edu/~sws/papers/usenix02.pdf Oh, and: Advertisement: we also built this into Mozilla, for Linux and Windows. http://www.cs.dartmouth.edu/~pkilab/demos/countermeasures/ > It's not THAT different from what > SSH pops up. (Actually, I'm not sure what SSH pops up, it's never popped up anything to me? Are you talking about a windows version?) iang - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
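For scale, "create a self-signed certificate and use it, straight away, no questions asked" is roughly the following, sketched here with the Python cryptography package; the common name, the one-year lifetime and the output paths are illustrative assumptions an installer would choose for itself:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a key pair and a certificate signed with it at install time.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])  # illustrative CN
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                 # self-signed: the issuer is the subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    # Drop the results where the web server's config expects them (paths are placeholders).
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("server.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

Swapping the generated pair for a CA-signed one later is then an administrator's file-replacement exercise, which is the division of labour being proposed.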
Re: WYTM?
Ian Grigg <[EMAIL PROTECTED]> writes: > > It's really a mistake to think of SSL as being designed > > with an explicit threat model. That just wasn't how the > > designers at Netscape thought, as far as I can tell. > > > Well, that's the sort of confirmation I'm looking > for. From the documents and everything, it seems > as though the threat model wasn't analysed, it was > just picked out of a book somewhere. Or, as you > say, even that is too kind, they simply didn't > think that way. > > But, this is a very important point. It means that > when we talk about secure browsing, it is wrong to > defend it on the basis of the threat model. There > was no threat model. What we have is an accident > of the past. Maybe so, but it coincides relatively well with the common Internet threat model, so I think you can't just dismiss that out of hand as if it were pulled out of the air. > > Incidentally, Ian, I'd like to propose a counterargument > > to your argument. It's true that most web traffic > > could be encrypted if we had a more opportunistic key > > exchange system. But if there isn't any substantial > > sniffing (i.e. the wire is secure) then who cares? > > > Exactly. Why do I care? Why do you care? > > It is mantra in the SSL community and in the > browsing world that we do care. That's why > the software is arranged in a a double lock- > in, between the server and the browser, to > force use of a CA cert. You keep talking about the server locking you in, but it doesn't. The world is full of people who run SSL servers with self-signed certs. And on the client side the user can, of course, click "ok" to the "do you want to accept this cert" dialog. Really, Ian, I don't understand what it is you want to do. Is all you're asking for to have that dialog worded differently? It's not THAT different from what SSH pops up. -Ekr -- [Eric Rescorla [EMAIL PROTECTED] http://www.rtfm.com/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Eric, thanks for your reply! My point is strictly limited to something approximating "there was no threat model for SSL / secure browsing." And, as you say, you don't really disagree with that 100% :-) With that in mind, I think we agree on this: > > [9] I'd love to hear the inside scoop, but all I > > have is Eric's book. Oh, and for the record, > > Eric wasn't anywhere near this game when it was > > all being cast out in concrete. He's just the > > historian on this one. Or, that's the way I > > understand it. > > Actually, I was there, though I was an outsider to the > process. Netscape was doing the design and not taking much > input. However, they did send copies to a few people and one > of them was my colleague Allan Schiffman, so I saw it. OK! > It's really a mistake to think of SSL as being designed > with an explicit threat model. That just wasn't how the > designers at Netscape thought, as far as I can tell. Well, that's the sort of confirmation I'm looking for. From the documents and everything, it seems as though the threat model wasn't analysed, it was just picked out of a book somewhere. Or, as you say, even that is too kind, they simply didn't think that way. But, this is a very important point. It means that when we talk about secure browsing, it is wrong to defend it on the basis of the threat model. There was no threat model. What we have is an accident of the past. Which is great. This means there is no real objection to building a real threat model. One more appropriate to the times, the people, the applications, the needs. And the today-threats. Not the bogeyman threats. > Incidentally, Ian, I'd like to propose a counterargument > to your argument. It's true that most web traffic > could be encrypted if we had a more opportunistic key > exchange system. But if there isn't any substantial > sniffing (i.e. the wire is secure) then who cares? Exactly. Why do I care? Why do you care? It is a mantra in the SSL community and in the browsing world that we do care. That's why the software is arranged in a double lock-in, between the server and the browser, to force use of a CA cert. So, if we don't care, why do we care? What is the reason for doing this? Why are we paying to use free software? What paycheck does Ben draw from all our money being spent on this "I don't care" thing called a cert? Some people say "because of the threat model." And that's what this thread is about: we agree that there is no threat model, in any proper sense. So this is a null and void answer. Other people say "to protect against MITM." But, as we've discussed at length, there is little or no real or measurable threat of MITM. Yet others say "to be sure we are talking to the merchant." Sorry, that's not a good answer either, because in my email box today there are about 10 different attacks on the secure sites that I care about. And mostly, they don't care about ... certs. But they care enough to keep doing it. Why is that? Someone made a judgement call, 9 or so years ago, and we're still paying for that person caring on our behalf, erroneously. Let's not care anymore. Let's stop paying. I don't care who it was, even. I just want to stop paying for this person, caring for me. Let's start making our own security choices. Let crypto run free! iang - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
At 12:28 AM 10/13/2003, Ian Grigg wrote: Problem is, it's also wrong. The end systems are not secure, and the comms in the middle is actually remarkably safe. I think this is an interesting, insightful analysis, but I also think it's drawing a stronger contrast between the real world and the Internet threat model than is warranted. It's true that a large number of machines are compromised, but they were generally compromised by malicious communications that came over the network. If correctly implemented systems had protected these machines from untrustworthy Internet data, they wouldn't have been compromised. Similarly, the statement is true at large (many systems are compromised), but not necessarily true in the small (I'm fairly confident that my SSL endpoints are not compromised). This means that the threat model is valid for individuals who take care to make sure that they comply with its assumptions, even if it may be less valid for the Internet at large. And it's true that we define the threat model to be as large as the problem we know how to solve: we protect against the things we know how to protect against, and don't address problems at this level that we don't know how to protect against at this level. This is no more incorrect than my buying clothes which will protect me from rain, but failing to consider shopping for clothes which will do a good job of protecting me from a nuclear blast: we don't know how to make such clothes, so we don't bother thinking about that risk in that environment. Similarly, we have no idea how to design a networking protocol to protect us from the endpoints having already been compromised, so we don't worry about that part of the problem in that space. Perhaps we worry about it in another space (firewalls, better OS coding, TCPA, passing laws). So, I disagree: I don't think that the SSL model is wrong: it's the right model for the component of the full problem it looks to address. And I don't think that the Internet threat model has failed to address the problem of host compromise: the fact is that these host compromises resulted, in part, from the failure of operating systems and other software to adequately protect against threats described in the Internet threat model: namely, that data coming in over the network cannot be trusted. That doesn't change the fact that we should worry about the risk in practice that those assumptions of endpoint security will not hold. - Tim - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Minor errata: Eric Rescorla wrote: > I totally agree that the systems are > insecure (obligatory pitch for my "Internet is Too > Secure Already") http://www.rtfm.com/TooSecure.pdf, I found this link had moved to here; http://www.rtfm.com/TooSecure-usenix.pdf > which makes some of the same points you're making, > though not all. iang - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: WYTM?
Ian, you and I have discussed this before, so I'll just make a few comments. [EMAIL PROTECTED] (Ian Grigg) writes: > Problem is, it's also wrong. The end systems > are not secure, and the comms in the middle is > actually remarkably safe. > > (Whoa! Did he say that?) Yep, I surely did: the > systems are insecure, and, the wire is safe. As you know, I think it's more in the middle. As I've mentioned before, password sniffing was a real problem before SSH. I totally agree that the systems are insecure (obligatory pitch for my "Internet is Too Secure Already") http://www.rtfm.com/TooSecure.pdf, which makes some of the same points you're making, though not all. > And, it's wrong. There are, then, given these > stated assumptions, three questions: > >1. why was it chosen? I think it was chosen for two reasons: (1) It actually was once a viable threat model, especially for military and financial communications, where the end systems were secure. (2) It's a problem we know how to solve. I don't think that solving the problems one knows how to solve is always a bad thing, as long as they're real problems. What's not clear is how real they are. > Designers of Internet security > protocols typically share a more > or less common threat model. > > It's para three, section 1.2. And, it is of course, > famously not true [10]. > > SSH is the most outstanding example of not sharing > that threat model [11]. In fact, it's fair to say > that most Internet security protocols do not share > that threat model, unless they happen to have > followed in SSL's footsteps and also forgotten to > do their threat model analysis. This isn't strictly true. IPsec and S/MIME use the same threat model, for instance. And even SSH mostly adopts it, since there's actually a fair amount of concern about active attack after the first leap of faith. One could, after all, just use encryption with no message integrity at all. > [9] I'd love to hear the inside scoop, but all I > have is Eric's book. Oh, and for the record, > Eric wasn't anywhere near this game when it was > all being cast out in concrete. He's just the > historian on this one. Or, that's the way I > understand it. Actually, I was there, though I was an outsider to the process. Netscape was doing the design and not taking much input. However, they did send copies to a few people and one of them was my colleague Allan Schiffman, so I saw it. It's really a mistake to think of SSL as being designed with an explicit threat model. That just wasn't how the designers at Netscape thought, as far as I can tell. Incidentally, Ian, I'd like to propose a counterargument to your argument. It's true that most web traffic could be encrypted if we had a more opportunistic key exchange system. But if there isn't any substantial sniffing (i.e. the wire is secure) then who cares? -Ekr - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
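As a footnote on what "a more opportunistic key exchange system" means mechanically: anonymous (unauthenticated) Diffie-Hellman lets two parties agree on a secret with no certificates at all, which defeats passive sniffing but deliberately says nothing about who the peer is. A sketch with the Python cryptography package, with an illustrative parameter size:

    from cryptography.hazmat.primitives.asymmetric import dh

    # Both ends must use the same group; 2048 bits is an illustrative size
    # (generating fresh parameters is slow, so real systems use fixed groups).
    parameters = dh.generate_parameters(generator=2, key_size=2048)

    # Each side makes an ephemeral key pair and sends its public half in the clear.
    client_private = parameters.generate_private_key()
    server_private = parameters.generate_private_key()

    # Each side combines its own private key with the other's public key.
    client_secret = client_private.exchange(server_private.public_key())
    server_secret = server_private.exchange(client_private.public_key())

    # Same secret on both ends, no certificates involved; nothing here rules out
    # a man in the middle who ran this exchange with each side separately.
    assert client_secret == server_secret

That trade-off, protection against the sniffer but not against the man in the middle, is exactly the risk balance the thread is arguing about.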