Stasi code cracked
Forwarded-by: Maurice Wessling [EMAIL PROTECTED]

(in German) http://focus.de/G/GP/GPA/gpa.htm?snr=64119streamsnr=7

First paragraph, translated via http://babelfish.altavista.com/cgi-bin/translate? :

Gauck authority decodes Stasi files. Data on approximately 15,000 agents of the former GDR foreign intelligence service have been decoded. According to a FOCUS report, the two encrypted magnetic tapes covered approximately 60,000 operational cases, with the pseudonyms of former informers in the Federal Republic and in the GDR.
Re: Ten Risks of PKI
Carl Ellison and Bruce Schneier write:

   Certificate verification does not use a secret key, only public keys. Therefore, there are no secrets to protect. However, it does use one or more "root" public keys. If the attacker can add his own public key to that list, then he can issue his own certificates, which will be treated exactly like the legitimate certificates. They can even match legitimate certificates in every other field except that they would contain a public key of the attacker instead of the correct one.

While this is true, keep in mind that there is more to mounting a successful cryptographic attack than adding root keys and fake certificates. It is also necessary to intercept the messages which might have gone to the legitimate recipient, and possibly to decrypt and re-encrypt them. All this implies an attacker who has at least temporary write access to the victim's computer, and long-term read/write control over the communication channels he will use.

This is a very powerful attack model. Virtually no cryptographic system, whether public key, secret key, or even a one-time pad, could be secure against such an attacker. If the attacker can get in and modify the set of trusted certs, he can probably also modify the software that checks them. He can weaken the generator of session keys, or arrange to log messages and access them later. He has many ways of getting access to the data beyond adding certs. This is not a threat for which it is reasonable to expect a cryptographic defense, and it is not an issue specifically related to PKIs.

The lack of clear analysis of the threat model in the "risks" being discussed makes it hard to evaluate how seriously to take them. If the goal is simply to raise fear, uncertainty and doubt about using public key cryptography, the authors have succeeded. But if they want to enlighten potential users about concerns which are serious and specific to the use of PKIs, they do not make that distinction clear.
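The root-list attack quoted above can be sketched concretely. This is a toy model, not real X.509 processing: an HMAC keyed by the issuer's secret stands in for a real certificate signature, and all key material and names are invented. It shows why write access to the root list is equivalent to owning the whole system:

```python
# Toy model (NOT real X.509) of the root-list attack described above.
# A "signature" is faked with an HMAC keyed by the issuer's secret, so
# the whole chain-of-trust question reduces to: which roots do I trust?
import hmac, hashlib

def sign(issuer_secret: bytes, subject: str, subject_key: bytes) -> bytes:
    # Stand-in for a real signature: a MAC by the issuer over the binding.
    return hmac.new(issuer_secret, subject.encode() + subject_key,
                    hashlib.sha256).digest()

def verify(cert, roots) -> bool:
    # A cert is accepted iff some key on the local root list validates it.
    subject, subject_key, sig = cert
    return any(hmac.compare_digest(sign(r, subject, subject_key), sig)
               for r in roots)

legit_root = b"ca-secret"          # invented key material
roots = [legit_root]               # the victim's trusted-root list

good_cert = ("www.amazon.com", b"legit-key",
             sign(legit_root, "www.amazon.com", b"legit-key"))
assert verify(good_cert, roots)

# Without a trusted root, the attacker's lookalike cert is rejected...
evil_root = b"mallory-secret"
fake_cert = ("www.amazon.com", b"mallory-key",
             sign(evil_root, "www.amazon.com", b"mallory-key"))
assert not verify(fake_cert, roots)

# ...but one write to the root list makes it verify exactly like the
# real one, differing only in the embedded public key.
roots.append(evil_root)
assert verify(fake_cert, roots)
```

The forged certificate is rejected until the attacker's root is appended; after that it verifies exactly like the legitimate one, which is the point being debated.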
In fact the entire thrust of the article is against a poorly defined bogeyman, the PKI. What is this beast which is being critiqued? All we really learn is that it is being sold by security companies in a slick "sales pitch" which purports to guarantee security. There is no technical description of what the PKI is and how its weaknesses are manifest.

In fact, a PKI is fundamentally any system which allows you to determine whether a public key is suitable for a given purpose. It can be as simple as a personal collection of public keys you know are good for specific purposes, or as complex as a full set of certification hierarchies with cross certification and automated policy translation. Carl Ellison himself is the chief designer of the Simple PKI. Presumably he does not mean to say that he has misled potential users into taking on ten new risks by his work on this protocol.

By using this generic term "PKI" the authors leave a great deal of confusion about which systems they are criticizing. Some of their "risks", such as the one quoted above, would apply to all of these PKIs, including SPKI. Others are more specific to current X.509 based hierarchical certification systems. Some don't address the PKI at all, but worry about things like user interfaces, criticisms that can be directed at virtually any form of security software.

Rather than a hodgepodge of "risks" which pertain to many aspects of cryptographic systems beyond the public key infrastructure, it would be more useful to see a clear description of the kind of PKI the authors want to criticize, followed by a discussion of the issues which arise in the practical use of such systems. This would provide useful information to the potential purchaser or user of a PKI system. As presented, the article is likely to raise confusion and concern, but not to lead users to ask enlightened questions.
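The "personal collection of public keys" end of that spectrum is small enough to write down. A hedged sketch (purposes and key material invented for illustration), in which the entire "infrastructure" is a lookup from purpose to the key fingerprints one accepts:

```python
# A minimal "personal PKI" along the lines described above: just a
# private table mapping a purpose to the key fingerprints I accept for
# it. All keys and purposes here are invented for illustration.
import hashlib

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

trusted = {
    "sign-email":   {fingerprint(b"alice-pubkey")},
    "micropayment": {fingerprint(b"shop-pubkey")},
}

def suitable(pubkey: bytes, purpose: str) -> bool:
    # The entire "infrastructure": is this key on my list for this purpose?
    return fingerprint(pubkey) in trusted.get(purpose, set())

assert suitable(b"alice-pubkey", "sign-email")
assert not suitable(b"alice-pubkey", "micropayment")   # right key, wrong purpose
assert not suitable(b"mallory-pubkey", "sign-email")   # unknown key
```

Everything beyond this, cross certification and hierarchies included, is machinery for answering the same question at larger scale.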
Ten Risks of PKI
Ten Risks of PKI: What You're not Being Told about Public Key Infrastructure

By Carl Ellison and Bruce Schneier

Computer security has been victim of the "year of the..." syndrome. First it was firewalls, then intrusion detection systems, then VPNs, and now certification authorities (CAs) and public-key infrastructure (PKI). "If you only buy X," the sales pitch goes, "then you will be secure." But reality is never that simple, and that is especially true with PKI.

Certificates provide an attractive business model. They cost almost nothing to make, and if you can convince someone to buy a certificate each year for $5, that times the population of the Internet is a big yearly income. If you can convince someone to purchase a private CA and pay you a fee for every certificate he issues, you're also in good shape. It's no wonder so many companies are trying to cash in on this potential market. With that much money at stake, it is also no wonder that almost all the literature and lobbying on the subject is produced by PKI vendors. And this literature leaves some pretty basic questions unanswered: What good are certificates anyway? Are they secure? For what? In this essay, we hope to explore some of those questions.

Security is a chain; it's only as strong as the weakest link. The security of any CA-based system is based on many links, and they're not all cryptographic. People are involved. Does the system aid those people, confuse them, or just ignore them? Does it rely inappropriately on the honesty or thoroughness of people? Computer systems are involved. Are those systems secure? These all work together in an overall process. Is the process designed to maximize security or just profit? Each of these questions can indicate security risks that need to be addressed.

Before we start: "Do we even need a PKI for e-commerce?"

Open any article on PKI in the popular or technical press and you're likely to find the statement that a PKI is desperately needed for e-commerce to flourish. This statement is patently false. E-commerce is already flourishing, and there is no such PKI. Web sites are happy to take your order, whether or not you have a certificate. Still, as with many other false statements, there is a related true statement: commercial PKI desperately needs e-commerce in order to flourish. In other words, PKI startups need the claim of being essential to e-commerce in order to get investors. There are risks in believing this popular falsehood. The immediate risk is on the part of investors. The security risks are borne by anyone who decides to actually use the product of a commercial PKI.

Risk #1: "Who do we trust, and for what?"

There's a risk from an imprecise use of the word "trust." A CA is often defined as "trusted." In the cryptographic literature, this only means that it handles its own private keys well. This doesn't mean you can necessarily trust a certificate from that CA for a particular purpose: making a micropayment or signing a million-dollar purchase order. Who gave the CA the authority to grant such authorizations? Who made it trusted? A CA can do a superb job of writing a detailed Certificate Practice Statement, or CPS -- all the ones we've read disclaim all liability and any meaning to the certificate -- and then do a great job following that CPS, but that doesn't mean you can trust a certificate for your application.

Many CAs sidestep the question of having no authority to delegate authorizations by issuing ID certificates. Anyone can assign names. We each do that all the time. This leaves the risk in the hands of the verifier of the certificate, if he uses an ID certificate as if it implied some kind of authorization. There are those who even try to induce a PKI customer to do just that. Their logic goes: (1) you have an ID certificate, (2) that gives you the keyholder's name, (3) that means you know who the keyholder is, (4) that's what you needed to know. Of course, that's not what you needed to know. In addition, the logical links from 1 to 2, 2 to 3 and 3 to 4 are individually flawed. [We leave finding those as an exercise for the reader.]

Risk #2: "Who is using my key?"

One of the biggest risks in any CA-based system is with your own private signing key. How do you protect it? You almost certainly don't own a secure computing system with physical access controls, TEMPEST shielding, "air wall" network security, and other protections; you store your private key on a conventional computer. There, it's subject to attack by viruses and other malicious programs. Even if your private key is safe on your computer, is your computer in a locked room, with video surveillance, so that you know no one but you ever uses it? If it's protected by a password, how hard is it to guess that password? If your key is stored on a smart card, how attack-resistant is the card? [Most are very weak.] If
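The flaw in the ID-certificate logic under Risk #1 can be made concrete. In this hypothetical sketch (names and key identifiers invented), the certificate answers only the name-to-key step; the authorization question an application actually cares about requires a separate, locally maintained list that no ID certificate supplies:

```python
# Hypothetical sketch (names and key IDs invented): an ID certificate
# binds a name to a key, but "may this key sign a purchase order?" is a
# separate, local decision the certificate never answers.
id_certs = {"John A. Smith": "key-123"}   # what the CA asserts: name -> key

# What *my* application trusts for this purpose -- maintained by the
# verifier, not by any CA.
authorized_po_signers = {"key-456"}

def may_sign_purchase_order(name: str) -> bool:
    key = id_certs.get(name)              # the only step the cert supplies
    return key in authorized_po_signers   # the step it never supplies

# A perfectly valid ID certificate still confers no authorization.
assert not may_sign_purchase_order("John A. Smith")
```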
Illegal NSA spying? It won't be the first time -- a look at history
http://www.wired.com/news/politics/0,1283,33026,00.html

Spies Left Out in the Cold
by Declan McCullagh ([EMAIL PROTECTED])
3:00 a.m. 13.Dec.1999 PST

It's enough to spook any spy. Congress plans to hold hearings next year that will, for the first time in a quarter century, investigate whether the National Security Agency is too zealous for our own good.

Much has changed since those hearings in 1975. Instead of being a place so secretive that the Department of Justice once abandoned a key prosecution rather than reveal the National Security Agency's existence in court, "the Fort" has become enmeshed in popular culture. Techno-thrillers like Enemy of the State, Mercury Rising, Sneakers, and even cut-rate TV series like UPN's 7 Days regularly depict NSA officials -- to their chagrin -- as eavesdrop-happy Nixonites.

But one thing has remained the same. The agency is barred from spying inside the United States and is supposed to snoop only on international communications. Through a system reportedly named Echelon, it distributes reports on its findings to the US government and its foreign allies. Do those findings include intercepted email messages and faxes sent by Americans to Americans? Maybe, and that's what's causing all the fuss.

News articles on Echelon have captured the zeitgeist of the moment, spurred along by PR stunts like "Jam Echelon" day. Newsweek reported this week that the NSA is going to "help the FBI track terrorists and criminals in the United States." (The agency denied it.) A 6 December New Yorker article also wondered about the future of Fort George Meade.

That future could look a lot like the past: congressional action that, in the end, doesn't amount to much. For this article, Wired News reviewed the original documents and transcripts from the Church committee hearings that took place in the Watergate-emboldened Senate in 1975.
The Select Committee to Study Governmental Operations with Respect to Intelligence Activities published its final report in April 1976. It wasn't an easy process. NSA defenders tried their best to kick the public out of the hearing room and hold the sessions behind closed doors.

"I believe the release of communications intelligence information can cause harm to the national security," complained Senator Barry Goldwater, a Republican who voted against disclosing information on illicit NSA surveillance procedures and refused to sign the final report.

"The public's right to know must be responsibly weighed against the impact of release on the public's right to be secure. Disclosures could severely cripple or even destroy the vital capabilities of this indispensable safeguard to our nation's security," said another senator.

But Democratic Senator Frank Church and his allies on the committee prevailed, and disclosed enough information to give any American the privacy jitters. Among the findings:

Shamrock: In 1945, the NSA's predecessor coerced Western Union, RCA, and ITT Communications to turn over telegraph traffic to the Feds. The project was codenamed Shamrock. "Cooperation may be expected for the complete intercept coverage of this material," an internal agency memo said.
Re: Ten Risks of PKI
BPM Mixmaster Remailer wrote:

   By using this generic term "PKI" the authors leave a great deal of confusion about which systems they are criticizing. Some of their "risks", such as the one quoted above, would apply to all of these PKIs, including SPKI. Others are more specific to current X.509 based hierarchical certification systems. Some don't address the PKI at all, but worry about things like user interfaces, criticisms that can be directed at virtually any form of security software.

Slightly tangentially, but worth observing, I think: current X.509 based PKI is only nominally hierarchical. That is, X.509 would _like_ the DN to be allocated hierarchically, but in practice this does not happen. Each CA has its own namespace, there is no-one above CAs in the hierarchy, and only one layer below (the entity for whom the CA provides a certificate). This is pretty flat for a "hierarchy" by anyone's reckoning.

SPKI's main beefs with X.509 (AFAIK) are that:

a) X.509 tends to want to be identity-based, which is a poorly defined concept at best (SPKI leans towards roles or capabilities)

b) X.509 is based on a lot of difficult-to-get-right stuff that just gets in the way of the real meat: signing public keys and attaching some attributes to them.

The fact that every X.509 package of any breadth is peppered with exceptions to cater for every other package's cockups is definitely evidence in SPKI's favour, IMO. The downside of SPKI, of course, is the usual one that seems to dog good ideas: no-one uses it.

Cheers,

Ben.

--
SECURE HOSTING AT THE BUNKER! http://www.thebunker.net/hosting.htm
http://www.apache-ssl.org/ben.html

"My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there." - Indira Gandhi
Re: Ten Risks of PKI
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

At 06:40 PM 12/13/99 -, lcs Mixmaster Remailer wrote:

   However this is just the first step in an effective compromise. Now you need to get him to use a bogus certificate when he thinks he is using a good one. He tries to connect to a secure site, and you need to step in and play man in the middle. You must hijack his connection to, say, www.amazon.com, and direct it to your own site. Then you can offer your bogus cert for www.amazon.com and get it accepted.

The Bloomberg attack didn't require connection hijacking. All that attacker did was post a newsgroup message with a URL in it. If you're depending on that little lock in the corner of the browser window to mean you're connected to the page you seem to be connected to, and the "seem to be" is derived only from the page contents, you're in trouble. That's more what we were talking about than connection hijacking -- although if you want to go to that trouble, feel free. :)

This shows up more clearly with e-mail. Here again, you don't have to hijack a connection if the attacker initiates the exchange (sends the first message) and the victim uses the "reply to" button in his mailer. [E.g., the attacker asks for a copy of the victim's latest draft -- and the victim sends it.]

-----BEGIN PGP SIGNATURE-----
Version: PGP 6.5.2fc7

iQA/AwUBOFVYWJSWoQShp/waEQIz0wCgkqP8a5D7lPlWcG3bo7agUMFoj80An07r
4mVt/ebbleR6Pqhp1KIw2Vuo
=jFYN
-----END PGP SIGNATURE-----

+-----------------------------------------------------------------+
|Carl M. Ellison [EMAIL PROTECTED] http://www.pobox.com/~cme      |
|PGP: 08FF BA05 599B 49D2 23C6 6FFD 36BA D342                     |
+--Officer, officer, arrest that man. He's whistling a dirty song.-+
Re: US law makes it a crime to disclose crypto-secrets
Documents were being stamped Confidential, Secret, and Top Secret under the regulations of various US government departments long before the string of Executive Orders. (The first was 10290, "Prescribing Regulations Establishing Minimum Standards for the Classification, Transmission, and Handling, by Departments and Agencies of the Executive Branch, of Official Information which Requires Safeguarding in the Interest of the Security of the United States," issued by Harry Truman in 1951. The current one is 12958.) The Executive Orders standardized the rules across all departments.

I believe the executive branch takes the position that documents marked in this way are covered by the espionage laws, i.e. sections 793 and 794, and that position probably goes back to 1917 when the laws were first passed. From my reading of the Supreme Court decisions in the Pentagon Papers case, the courts have the same presumption. I suppose an attorney defending someone charged under these laws might attack that link, claiming the specific documents in question in fact had no bearing on the national defense as defined in 793 and 794. But unless the misclassification was pretty blatant, it would be a tough sell.

Has anyone ever done a history of the security classification system?

Arnold Reinhold

At 1:13 PM -0500 12/12/99, Donald E. Eastlake 3rd wrote:

   The law you cite is unaffected by whether the information is classified. Except for a few special laws, such as the Atomic Energy Act, which makes certain information "born classified" no matter who comes up with it, and the previously cited crypto info law, which was, perhaps, an attempt to make comsec and comint information "born classified", as far as I know the entire classification system rests on a continuing series of Presidential Executive Orders, and it is not clear to me how much they affect someone who is not a government employee and who has not entered into an agreement regarding such material.

   Donald

From: "Arnold G. Reinhold" [EMAIL PROTECTED]
Date: Sun, 12 Dec 1999 08:59:54 -0500
To: Declan McCullagh [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]

It's not just crypto. The US espionage laws prohibit the disclosure of classified information by anyone. See Title 18 Sec. 793(e):

   (e) Whoever having unauthorized possession of, access to, or control over any document, writing, code book, signal book, sketch, photograph, photographic negative, blueprint, plan, map, model, instrument, appliance, or note relating to the national defense, or information relating to the national defense which information the possessor has reason to believe could be used to the injury of the United States or to the advantage of any foreign nation, willfully communicates, delivers, transmits or causes to be communicated, delivered, or transmitted, or attempts to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it, or willfully retains the same and fails to deliver it to the officer or employee of the United States entitled to receive it; or ...

As I recall, classified documents are required to carry a legend on each page saying something like "This document contains information affecting the national defense within the meaning of the espionage laws, Title 18 793 and 794, the transmission or revelation of which to unauthorized persons is prohibited by law." In any case the restrictions on classified material go far beyond a voluntary agreement by those given access to keep the information secret. People who have authorized access take on the additional burden that negligent handling of classified information is a crime (793(f)). I presume this is the basis for prosecuting Dr. Lee of Los Alamos.

It's true that Section 798 specifically includes the word "publishes" while 793(e) does not. That distinction, along with legislative history, was relied on by some of the Justices (e.g. Justice Douglas) in the Pentagon Papers case. Still, I don't think the question of whether publishing classified material is criminal was clearly settled. The issue then was prior restraint, not after-the-fact prosecution. Some of the majority Justices indicated they might even approve prior restraint if the Government had shown an immediate danger comparable to publishing the departure time of transport ships in wartime. Since the Pentagon Papers case, I don't think the Government has dared to prosecute the press for publishing classified information. Printing proof that NSA has broken the Fredonian diplomatic code might tempt them to try, however. By the way,
Re: Debit card fraud in Canada
At 10:49 AM 12/13/99 -0500, Steven M. Bellovin wrote:

   true for credit cards? If so, a simple visual recorder -- already used by other thieves -- might suffice, and all the tamper-resistance in the world won't help. Crypto, in other words, doesn't protect you if the attack is on the crypto endpoint or on the cleartext.

Wouldn't a thumbprint reader on the card (to authenticate the meat to the smartcard) be a tougher thing to shoulder surf? Does raise the cost over a PIN.

Aren't there protocols where the exchange can't be replayed, but proof-of-knowledge is demonstrated? Or would these exchanges require on-line connectivity, thereby defeating the utility of smartcards some?
Re: Debit card fraud in Canada
On Mon, Dec 13, 1999 at 12:12:42PM -0800, David Honig wrote:

   Wouldn't a thumbprint reader on the card (to authenticate the meat to the smartcard) be a tougher thing to shoulder surf? Does raise the cost over a PIN.

I'm not sure if biometrics would help with the sort of attack this appears to be. It sounds like the modified card readers/number pads record everything: the information on the magnetic strip, the PIN entered on the keypad, possibly everything going over the wire too (these devices dial the bank to authenticate). Any biometric information could also be recorded and replayed. I guess it would be more difficult because you couldn't use the information at a regular ATM the way you can with card+PIN; you'd need a compromised machine to feed the information to.

   Aren't there protocols where the exchange can't be replayed, but proof-of-knowledge is demonstrated?

That would require a smart card, or a cryptographically strong operation that the user could do in their head (which would probably get filed under "too hard to use"). Anything depending on a regular magnetic card and PIN would probably be vulnerable to whatever attack we're seeing here.

   Or would these exchanges require on-line connectivity, thereby defeating the utility of smartcards some?

I'm not sure I'd trust a smartcard-based system that didn't require on-line connectivity. From what little I've seen, such things usually (always?) depend on the tamper resistance of the device for their security (e.g. M*nd*x). The current debit card system requires on-line connectivity to verify the card+PIN and transfer the funds. It's basically the same as using an ATM. If you have a bank account and a card to access that account from an ATM, you can use it all over the place instead of cash. Some places even let you withdraw cash when making a transaction.
Here in Canada it's about as widely used now at point-of-sale as credit cards are, maybe even more common, but you can't order stuff over the phone the way you can with credit cards.
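The replay-proof, proof-of-knowledge protocols asked about in this thread could look roughly like the following. This is a toy sketch, not any deployed debit scheme, and it assumes a smart card able to compute a MAC over a fresh bank-issued nonce (exactly what a plain magnetic stripe cannot do):

```python
# Toy challenge-response sketch (not any deployed debit scheme): the
# card proves knowledge of a secret without emitting anything replayable,
# because the bank issues a fresh nonce for every transaction.
import hmac, hashlib, os

card_secret = os.urandom(16)      # shared between card and bank

def respond(secret: bytes, challenge: bytes, amount_cents: int) -> bytes:
    # Card side: MAC over the bank's nonce and the transaction details.
    msg = challenge + amount_cents.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).digest()

# Bank side: a fresh, single-use challenge for this transaction.
challenge = os.urandom(16)
response = respond(card_secret, challenge, 4999)

# Bank recomputes and compares (online, against the secret on file).
expected = respond(card_secret, challenge, 4999)
assert hmac.compare_digest(response, expected)

# A recorded response is useless against the next transaction's
# challenge, so a compromised terminal gains nothing by logging it.
new_challenge = os.urandom(16)
assert not hmac.compare_digest(response,
                               respond(card_secret, new_challenge, 4999))
```

Note that this still requires the on-line verification step discussed above; the nonce is what defeats the record-and-replay attack.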
Re: Debit card fraud in Canada
David Honig wrote:

   At 10:49 AM 12/13/99 -0500, Steven M. Bellovin wrote:

      true for credit cards? If so, a simple visual recorder -- already used by other thieves -- might suffice, and all the tamper-resistance in the world won't help. Crypto, in other words, doesn't protect you if the attack is on the crypto endpoint or on the cleartext.

   Wouldn't a thumbprint reader on the card (to authenticate the meat to the smartcard) be a tougher thing to shoulder surf? Does raise the cost over a PIN.

Sure. But wouldn't you like to keep your thumbs?

Cheers,

Ben.

--
SECURE HOSTING AT THE BUNKER! http://www.thebunker.net/hosting.htm
http://www.apache-ssl.org/ben.html

"My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there." - Indira Gandhi
Re: Debit card fraud in Canada
The NACHA pilot announced about a month ago specifies an AADS based transaction. The combined press release last week at BAI (something like CeBIT for the world retail banking industry) ... specifies AADS/X9.59 digital signing.

The AADS strawman proposes an online parameterized risk management infrastructure that can be software, hardware, PIN-activated hardware, bio-sensor activated hardware, etc (i.e. the integrity level of the compartment doing the digital signing). The issue isn't that the chip enables offline, but that a chip with various characteristics can improve the integrity of online (non-face-to-face) transactions.

misc. references:

http://internetcouncil.nacha.org/
http://www.garlic.com/~lynn/

and specific:

http://www.garlic.com/~lynn/99.html#224
http://www.garlic.com/~lynn/aadsmore.htm#bioinfo1
http://www.garlic.com/~lynn/aadsmore.htm#bioinfo2
http://www.garlic.com/~lynn/aadsmore.htm#bioinfo3

David Honig [EMAIL PROTECTED] wrote on 12/13/99 12:12:42 PM:

   At 10:49 AM 12/13/99 -0500, Steven M. Bellovin wrote:

      true for credit cards? If so, a simple visual recorder -- already used by other thieves -- might suffice, and all the tamper-resistance in the world won't help. Crypto, in other words, doesn't protect you if the attack is on the crypto endpoint or on the cleartext.

   Wouldn't a thumbprint reader on the card (to authenticate the meat to the smartcard) be a tougher thing to shoulder surf? Does raise the cost over a PIN. Aren't there protocols where the exchange can't be replayed, but proof-of-knowledge is demonstrated? Or would these exchanges require on-line connectivity, thereby defeating the utility of smartcards some?
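A hedged sketch of the account-authority idea behind AADS: the bank keeps a verification key in the account record itself, so a signed X9.59-style transaction can be checked online with no certificate involved. A shared MAC key stands in here for the real public-key signature, and all field and account names are invented:

```python
# Hedged sketch of the account-authority (AADS-style) idea: the bank
# stores a verification key in the account record, so a signed
# transaction is checked online with no certificate. A shared MAC key
# stands in for a real public-key signature; all names are invented.
import hmac, hashlib, os

accounts = {"12345": {"balance": 10_000, "auth_key": None}}

# Enrollment: the account holder registers a key with the bank.
card_key = os.urandom(16)
accounts["12345"]["auth_key"] = card_key

def sign_txn(key: bytes, account: str, amount_cents: int) -> bytes:
    msg = f"{account}:{amount_cents}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def bank_verify(account: str, amount_cents: int, sig: bytes) -> bool:
    # Online check against the key on file -- no CA involved.
    key = accounts[account]["auth_key"]
    return hmac.compare_digest(sign_txn(key, account, amount_cents), sig)

assert bank_verify("12345", 250, sign_txn(card_key, "12345", 250))
assert not bank_verify("12345", 250, sign_txn(os.urandom(16), "12345", 250))
```

The "parameterized risk management" point above would then amount to the bank also recording *how* the key is held (software, hardware, PIN- or biometric-activated) and pricing the transaction's risk accordingly.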
Re: Ten Risks of PKI
Carl Ellison writes:

   The Bloomberg attack didn't require connection hijacking. All that attacker did was post a newsgroup message with a URL in it.

This is presumably a reference to the incident described in http://news.cnet.com/news/0-1005-200-341267.html, where a PairGain employee apparently created a fake web page which resembled that of trusted financial news source Bloomberg, reporting an impending acquisition of PairGain. He then posted to Yahoo discussion groups a reference to his page's URL, using its IP address to disguise the actual point of origin and claiming it to be a genuine Bloomberg news story. The result was a 30% rise in PairGain's stock. This kind of attack is one of the things that PKIs are intended to address, but in this case no cryptography was used. Perhaps it would make good fodder for your upcoming companion article, "Ten Risks of NOT Using PKI".

   If you're depending on that little lock in the corner of the browser window to mean you're connected to the page you seem to be connected to, and the "seem to be" is derived only from the page contents, you're in trouble. That's more what we were talking about than connection hijacking -- although if you want to go to that trouble, feel free. :)

Okay, but in the context of the risk you identified with PKIs, that is in fact what we are talking about: ways to get that little lock to appear when it shouldn't. They aren't as easy as the Bloomberg attack.

   This shows up more clearly with e-mail. Here again, you don't have to hijack a connection if the attacker initiates the exchange (sends the first message) and the victim uses the "reply to" button in his mailer. [E.g., the attacker asks for a copy of the victim's latest draft -- and the victim sends it.]

Again, isn't this a case where a PKI helps rather than harms security?
Getting a cert accepted with the identity of the person the victim thinks he is responding to will be more difficult than simply sending an unsigned message which claims to be from that person. Many of the issues you raised in your article are legitimate (although not necessarily specific to PKIs), but there seems to be a danger that you will just end up sowing confusion and doubt. The result will be that people will continue to use the old ways and fall into the traps you have described here. It's fair to criticize PKIs with an eye towards improving them, but your article seems more directed at questioning the value of cryptography itself.
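The specific risk under discussion, that "certificate verification does not use a secret key, only public keys," so whoever controls the verifier's root list controls what verifies, can be made concrete with a toy. The sketch below uses textbook RSA with tiny demo numbers (3233 and 143 are not real key sizes, and all names are illustrative); the point is only the trust model, not the arithmetic:

```python
import hashlib

# Toy textbook-RSA signatures over a hash reduced mod n. The verifier
# accepts whatever the public keys in its root list endorse -- which is
# exactly why write access to that list is equivalent to owning the PKI.


def digest(msg: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n


def sign(msg: bytes, d: int, n: int) -> int:
    return pow(digest(msg, n), d, n)


def verify(msg: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == digest(msg, n)


legit_root = {"e": 17, "n": 3233}    # private exponent d=2753 held by the CA
attacker_root = {"e": 7, "n": 143}   # private exponent d=103 held by attacker

trusted_roots = [legit_root]


def accepted(msg: bytes, sig: int) -> bool:
    # No secret is needed to verify -- only this list of public keys.
    return any(verify(msg, sig, r["e"], r["n"]) for r in trusted_roots)


claim = b"bloomberg.com's key is K_attacker"
fake_sig = sign(claim, 103, 143)     # attacker certifies his own claim

assert not accepted(claim, fake_sig)  # rejected: his root isn't trusted
trusted_roots.append(attacker_root)   # ...until he edits the root list
assert accepted(claim, fake_sig)      # now the little lock appears
```

Which is the point made above: an attacker who can append to trusted_roots already has write access to the victim's machine, and could just as easily patch the verification code itself.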
Re: Debit card fraud in Canada
At 10:30 PM 12/13/99 +, Ben Laurie wrote: David Honig wrote: Sure. But wouldn't you like to keep your thumbs? Yes, and my eyeballs, etc.

Mere discussion does not imply endorsement. A PIN doesn't help: a thug will drag you to the ATM and harm you if you give the wrong PIN. And probably some physical hacker would figure out how to develop a mold from a print...

If prints are you, and you are your prints, you would wear gloves in public, for fear of touching a sensor. Maybe Michael Jackson is a biometric authentication freak.
sci fi (was Re: Onhand, clapping? (was Re: NTK now, 1999-12-10))
At 10:15 AM 12/13/99 -0500, R. A. Hettinga wrote: Okay. For i=1 to 500: OnChalkboardAfterSchool("I will *only* copy URLs, and *not* type them from memory. Ever. Again.") End

Heh. I attributed more subtlety to RAH than was intended; I thought he meant to have the martial arts dude at http://www.onehand.com/ be the courier.

Has anyone extrapolated from the fact that the more you carry a device with you, the less physically subvertible it is? Your home machine may be more robust against that attack than your office machine, e.g., if some friendly or yourself occupies the house most of the time.

Office PC
Home PC
PDA
dick-tracy watch
waterproof dick-tracy watch
implant
network of implants which monitor each other (so you'd have to pull all of them at once...)

This could give an extra meaning to 'Bluetooth'. This ordering would be bogus if, e.g., your office is in Ft. Meade or you frequently pass out behind saloons frequented by 'diplomats'. The body is not just a temple. It's a secure computing facility.
RE: Onhand, clapping? (was Re: NTK now, 1999-12-10)
I think this is what Bob was trying to reference: www.onhandpc.com

Peter Trei

--
From: R. A. Hettinga [SMTP:[EMAIL PROTECTED]]
Sent: Monday, December 13, 1999 8:18 AM
To: Digital Bearer Settlement List; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Onhand, clapping? (was Re: NTK now, 1999-12-10)

At 12:56 am -0500 13/12/99, R. A. Hettinga wrote: Hey, guys, Would this puppy work as an acceptable bearer-certificate-carrying device? :-). http://www.onehand.com

Heh... ...Which Fearghas tells me is some pseudomystical gobbledegook, which, given the nature of the typo, could have been, um, worse. So, I'd like to sell a vowel, Vanna. This is the URL I was talking about: http://www.onhand.com

Cheers, RAH
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'