Details of the backdoor-padlock
Hi, I took two pictures of the padlock with the backdoor: http://www.danisch.de/tmp/pict0951x.jpg shows the TSA keyhole: just a very simple standard key cylinder, from which it is pretty easy to produce a master key using any one of these locks. But that's a waste of time. The lock suffers from the same weakness as almost all locks of this kind: you don't need any key or code to open them. See http://www.danisch.de/tmp/pict0954x.jpg The 'secret' code is still 000. When you turn the wheels by exactly 180 degrees (so the 5 is up on the rightmost wheel), you can see the chamfer of the axle on the left side of the rightmost wheel. It is visible there, but must point down to open. Turn the wheels until you see this, then turn them another 180 degrees, and: "Open Sesame!" So there is no need to bother with a TSA key. Open it directly. regards Hadmut
Re: padlocks with backdoors - TSA approved
On Mon, Feb 26, 2007 at 10:36:22PM -0600, Taral wrote: > > I'm just waiting for someone with access to photograph said keys and > post it all over the internet. You don't need access to the keys. Do you know the Volkswagen Golf? As far as I know it is also sold in the USA. In the eighties there was a problem: many of them had been stolen without any visible use of force. No broken window, no broken ignition lock. They finally found the method: these Golfs had plastic fuel tank caps, which could easily be broken off by hand. Just grab the cap, tear it away with force, and you have it. The tank cap had a lock inside. All you needed to do was cut the plastic lock open and copy the tumbler lengths onto a blank key. Then you had a working key. You could do the same here and just open some of these locks, one per key number. regards Hadmut
Re: padlocks with backdoors - TSA approved
On Tue, Feb 27, 2007 at 01:09:00AM -0500, David Chessler wrote: > > This is why I don't bother with padlocks until I get to the hotel > room. It is a good idea to slow down the petty thief, but a "twist > tie" from a plastic bag will work. I use the nylon straps used to > hold cable bunches in place. I use many different colors, so it is > most unlikely that a petty thief would have one handy (black or white > are very common. That is what I do as well, especially because opening luggage in the absence of the owner is rather unusual outside the USA. Sometimes I also "seal" the case with some unusual sticker I got somewhere for free, or with a paper sticker. The method with the cable ties has become difficult since it is forbidden to have nail scissors in the cabin luggage. Sometimes it is not that easy to open them without a tool and without damaging the luggage. regards Hadmut
Re: padlocks with backdoors - TSA approved
Hi Allen, On Mon, Feb 26, 2007 at 09:23:30PM -0800, Allen wrote: > Hi Hadmut, > > combination lock brands in the $30 to $45 USD range where you can > set the combination to whatever you want. Guess what? They all > seemed to use the same key to enable setting the combination. Why make it that difficult and complicated? You can easily and immediately open most combination locks with vertical wheels on suitcases (and probably those on padlocks too). All you need is a flashlight. The wheels are usually a little bit loose. Just shift a wheel to the left or right with your fingertip and use the flashlight to peek into the gap. You will spot the axle of the wheel. Now turn the wheel until you see the chamfer pointing directly at you. Proceed with all wheels. If the lock doesn't open, turn all wheels by 180 degrees (to digit n+5 mod 10). Some locks need the chamfer up, some need it down to open. With a little practice and experience it is almost as fast as if you knew the combination code. regards Hadmut
Re: padlocks with backdoors - TSA approved
On Mon, Feb 26, 2007 at 10:36:22PM -0600, Taral wrote: > > I'm just waiting for someone with access to photograph said keys and > post it all over the internet. There's nothing spectacular about it. This is the one I bought: http://www.pac-safe.com/www/index.php?_room=3&_action=detail&id=72 This is another one: http://www.eaglecreek.com/accessories/security_id/TSA-SearchAlert-Lock-41027/ The TSA keyhole is always on the other side, so that you don't see it in the product pictures. I am currently in a hurry, but I'll take a picture today and post it. regards Hadmut
padlocks with backdoors - TSA approved
Hi, has this been mentioned here before? I just had my crypto nightmare experience. I was in a (German!) outdoor shop to complete my equipment for my next trip, when I came to the rack with luggage padlocks (used to lock the zippers). While the German brand locks were as usual, all the US brand locks had a sticker "Can be opened and re-locked by US luggage inspectors". Each of these (three-digit code) locks had a small keyhole for the master key to open it. Obviously there are different key types (different size, shape, brand), as the locks had numbers like "TSA005" telling the officer which key to use to open that lock. I have never seen anything in the real world which is such a precise analogue of a crypto backdoor for governmental access. Ironically, they advertise it as a big advantage and important feature, since it allows the luggage to arrive with the lock intact and in place instead of cut off. This is the point where I decided to have nightmares from now on. regards Hadmut
Re: RSA SecurID SID800 Token vulnerable by design
On Fri, Sep 08, 2006 at 11:31:28AM -0700, Lance James wrote: > SecurID should not be the only concept for dependence. Yeah; however, it is a smart device which provides a reasonable level of security in a very simple and almost foolproof way (I know a case where the users complained that it did not work. They had to be told not to type in the serial number engraved on the back, but the number displayed on the LCD...). It's a pity to see it weakened without need. regards Hadmut
Re: RSA SecurID SID800 Token vulnerable by design
Hi Lance, On Fri, Sep 08, 2006 at 10:26:45AM -0700, Lance James wrote: > > Another problem from what I see with Malware that steals data is the > formgrabbing and "on event" logging of data. Malware can detect if > SecureID is being used based on targeted events, example: Say HSBC > (Hypothetical example, not targeting HSBC) has two-factor logins in > place, the problem with this is that it is vulnerable to session riding > and trojan-in-the-middle attacks anyway, because the minute the user > logs in, the malware could launder money out (unless transaction auth is > in place, which in most cases it's not), or they could pharm the user > with a fake website that resolves as HSBC but they go in within the time > frame of that token being valid and have access. Either way, however you > cut it, SecureID/Two-Factor User auth is not protected against malware, > period. Partly agreed. These are the kinds of attacks I usually teach in my workshops. However, in all of these cases the attacker has to be online at the moment you are logging in, and you experience some failure, e.g. you can't log in or something like that. But with the SID800, malware could silently sit in the background and pass token codes to the attacker even if you are not logging in at that moment. E.g. it could wait until you have logged in (or out) and grab the next token code. Furthermore, the attack you described presumes that the attacker knows where you want to log in. But if you can use the current token code as an indicator for searching for login data in the input stream, then you can find new places to log in, e.g. your company's VPN access point. While the attack you describe is more relevant for banking, the USB attack is aimed more against company logins. regards Hadmut
RSA SecurID SID800 Token vulnerable by design
Hi, I recently tested an RSA SecurID SID800 token http://www.rsasecurity.com/products/securid/datasheets/SID800_DS_0205.pdf The token is bundled with some Windows software designed to make the user's life easier. Interestingly, this software provides a function which directly copies the current token code into the cut-and-paste buffer when the token is plugged into USB. This is weak by design. The security of these tokens is based on what RSA calls "two-factor user authentication": it takes both a secret (PIN) and the time-dependent token code to authenticate. The security of the token code depends on the assumption that the token is resistant against malware or intruders on the computer used for communication (web browser, VPN client, ...). However, if the token code can be read over the USB bus, this assumption does not hold. A single attack on the PC where the token is plugged in would compromise both the PIN (e.g. with a keylogger) and the token itself (e.g. by writing a daemon which continuously polls the token and forwards the token code in real time to a remote attacker). Ironically, this could make an attack even easier: if some malware simultaneously monitors the token and the keyboard, it is much easier to detect that the keystrokes are actually related to some login procedure: whenever the 6-digit token code appears in the keyboard or cut-and-paste input stream, you can be pretty sure that a sliding window of about the last 100-200 keystrokes contains both the PIN and the address of the server to log in to. That makes it really easy to automatically detect secrets in the input stream. Thus, the two different authentication methods together are weaker than each single one. regards Hadmut
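To illustrate that last point, here is a rough sketch (Python, purely illustrative, not an actual exploit) of how malware that can read the current token code over USB could use that code as a marker to locate the interesting part of a captured keystroke stream. The window size and the helper names are my own assumptions, not anything taken from the RSA software.

    from collections import deque

    WINDOW = 200   # assumed size of the sliding keystroke window mentioned above

    def find_login_window(keystrokes, current_token_code):
        """Return the last WINDOW keystrokes up to and including the token code.

        If the 6-digit token code shows up in the typed input, the preceding
        keystrokes very likely contain the PIN and the server address as well.
        """
        window = deque(maxlen=WINDOW)
        tail = ""
        for ch in keystrokes:
            window.append(ch)
            tail = (tail + ch)[-len(current_token_code):]
            if tail == current_token_code:
                return "".join(window)
        return None

    # Hypothetical usage: token_code would come from polling the SID800 over USB.
    # suspicious = find_login_window(captured_keystrokes, token_code)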
Re: PGP "master keys"
On Wed, Apr 26, 2006 at 10:41:12PM -0400, Steven M. Bellovin wrote: > > Ah -- corporate key escrow. An overt back door for Little Brother, rather > than a covert one for Big Brother You should check the list of recipient keys in PGP messages from time to time anyway. I recently found a bug in an MUA plugin: once you had a PGP pubkey with an empty user ID in your keyring, the plugin always added this key to the recipient keys, although the owner was not listed as a recipient of the e-mail. As far as we could tell while debugging, the key had to be in the 'trusted' state, but it worked. Once you managed to have your pubkey added to someone else's keyring with an additional empty user ID (which most users never notice), you could read any encrypted mail sent by that person. regards Hadmut
Re: History and definition of the term 'principal'?
Hi, On Wed, Apr 26, 2006 at 03:18:40PM -0400, Sean W. Smith wrote: > I like the definition in Kaufman-Perlman-Speciner: > > "A completely generic term used by the security community to include > both people and computer systems. Coined because it is more > dignified than 'thingy' and because 'object' and 'entity' (which also > means thingy) were already overused." Many thanks for the hint. :-) Are there different editions of Kaufman-Perlman-Speciner? My edition of 1995 has two entries for principal in the index: - Page 129: "A principal is anything or anyone participating in cryptographically protected communication." - Page 266: "each user and each resource that will be using Kerberos." Which edition is yours? regards Hadmut
History and definition of the term 'principal'?
Hi, is anyone aware of a general and precise definition of the term 'principal' (as a noun) in the context of security? I need to settle a dispute. Someone claims that 'principal' is an established 'concept' introduced by Roger Needham, but could not give any citation. Someone else confirms this and claims that 'principal' is indeed a 'well-introduced' concept, but also can't cite any source or give any definition. I have read through Needham's papers (Needham-Schroeder protocol, BAN logic), but only saw that he used the term 'principal' without any definition, just as a normal word of plain language. Since I am not a native English speaker it is not a simple task to determine precisely whether the word is used as a special technical term or just as a word of common language. Unfortunately, Needham died some years ago, so I cannot ask him anymore. I have asked his co-authors, and they said that they are not aware that he ever invented or defined this term. Instead, they directed me to Jack B. Dennis, Earl C. Van Horn: Programming Semantics for Multiprogrammed Computations, Communications of the ACM, Vol. 9, No. 3, March 1966, pp. 143-155, where the term was used for the first time in the context of computers. Interestingly, they took that legal term to describe the one who is liable to pay the costs of computation jobs, which were expensive at that time (hence probably the term 'account'): "We generalize this notion by defining the term _principal_ to mean an individual or group of individuals to whom charges are made for the expenditure of system resources. In particular a principal is charged for resources consumed by computations running on his behalf." Then Jerome H. Saltzer and Michael D. Schroeder used the term in "The Protection of Information in Computer Systems", October 1974, as an abstraction for accountability: "A principal is, by definition, the entity accountable for the activities of a virtual processor." This is where I lose the historical track of the term. Needham and Schroeder used the term in their paper about the Needham-Schroeder protocol, but without defining or introducing it. Many books about security don't even mention the term. There are other books (e.g. Menezes, van Oorschot, Vanstone, Handbook of Applied Cryptography, or Ross Anderson, Security Engineering) which explain the term, but in most cases only in one simple sentence, without any precise definition. Nobody cites any source for the term, nobody makes further use of the term, and all the explanations I found differ heavily from each other; some even contradict each other. Some say a principal is someone who participates in a cryptographic protocol. Others say it is a human, a computer, or a network device. Some say a principal is someone who has a name and is known and introduced to a security system. At least one says it is a synonym for 'party', but gives three different definitions within one book. Wikipedia doesn't know the term in the context of security. The only precise definition I found is in a law dictionary, where it is defined as a legal term. Since nobody cites anything, everyone defines it to his own taste, and nobody actually makes use of it, I assume that this term does not have a precise meaning. It seems to be just a common word of the English language without any particular meaning or importance in network security. Still difficult for a non-native English speaker. Can anyone give me some hints? Maybe about how 'principal' is related to Roger Needham?
Or whether there is a precise and general definition? Who, btw, would have the authority to generally define terms in security science? regards Hadmut
How security could benefit from high volume spam
The parliament of the European Union today passed a law that electronic call detail records, such as phone numbers, e-mail addresses, and web accesses of all 450 million EU citizens, are to be recorded and stored for 6 to 24 months. So everyone will be subject to complete surveillance of telecommunication. No place to hide. The given reasons are the need to investigate and prosecute terrorism and severe crime. But there is no evidence that this law actually has this effect, or that it is worth sacrificing democracy and civil rights. Our constitution protects the right to communicate confidentially, for all citizens, and especially for lawyers, journalists, priests, etc. So terrorists finally begin to succeed in destroying our European, modern, democratic, and free way of life and civil rights. It is ridiculous that the modern world has not been attacked by a large army, but by just about 30-40 people with knives and a few bombs. The attack is not the primary attack itself. The main attack is to provoke overextended countermeasures. Technically speaking, a denial-of-civil-rights attack. And the EU proved to be vulnerable to this kind of attack. A patch is not available yet. Another threat to privacy and civil rights is the intellectual property industry. We have seen Sony attacking and sabotaging private computers, revealing private data, secretly taking control over people's communication and working equipment. We have seen a mother of five sued into bankruptcy in the USA just for listening to music. This is perverse. We currently see governments considering outlawing open source software or any kind of data processing or communication device without digital rights management. There are good reasons to assume that the European Union's collection of all telecommunication details will be abused to allow the intellectual property industry to completely track every communication. Just having received an e-mail from someone who had illegally downloaded music could be enough to have your home searched, your computer confiscated, and find yourself sued or prosecuted. The art and science of communication security will have to realign and focus on new goals. When designing telecommunication protocols we have to take much more care about what communication reveals about the communicating parties and the contents. It is not enough to just put some kind of simple encryption on a message body. We need to protect against traffic analysis, in particular analysis without democratic legitimation. What does that mean? When designing a protocol we should take more care than we used to in describing its vulnerability to and resistance against traffic analysis. Not just whether the contents are encrypted, but what an eavesdropper can tell about the communicating parties. We need to incorporate techniques like oblivious transfer and traffic hiding. An important component of such protection methods is noise. Plenty of noise. Something to hide in, to provide cover, to overload the recording of call details. We should think about and research how to produce noise. We already have some noise. It's called spam. Some of you might know that I am one of the early fighters against spam. I tried to eliminate as much spam as possible. But now, there could be a positive aspect to spam, virus mails, and other mass mails. Maybe it could become an advantage to receive a million mails per day from arbitrary senders. Maybe that is what is needed to hide my personal e-mails.
Maybe that's the answer I have to give when someone blames me for having received e-mail from the wrong person: "I have no idea what you are talking about. I received about 150,000 virus and spam e-mails that day from arbitrary addresses, and didn't read a single one of them. I just deleted them." When designing measures against spam, we should take this into consideration. Maybe in the near future the advantages of that noise produced by millions of bots will outweigh the disadvantages? Comments are welcome. Hadmut Danisch
Re: HTTPS mutual authentication alpha release - please test
On Fri, Nov 04, 2005 at 09:16:16AM +, Nick Owen wrote: > > No, this is not it. It is this attack and similar: > > http://tinyurl.com/a3b89 > > The phishers are not using valid certificates, but users are so immune > to warnings about certificates that they don't pay attention to them. > It may be a DNS cache poison or the typical email; it could be any > mechanism to send the user to a fraudulent site. What is being provided > is a mechanism to route the users to the correct site by providing a way > to validate the certificate for them. Mmmh, I'd have two questions about this: - It seems that you are not defending against an attack, but trying to protect the user against his own ignorance. The user ignores the warning label, so you want to replace it with a bigger warning label. But the bigger warning label doesn't contain any news or more information, or any protection that the smaller label doesn't provide. It's just that the bigger warning label is bigger (or more red, or has more alerting letters...), in the hope of waking the user up? But user ignorance is not a new type of attack. If the user pays attention to the browser warnings, then I don't see what advantage WiKID should have. Inventing new protocols and complexity, and trusting an additional party without good reason and a reasonable advantage, is never a good idea in security. - The authorized owner must be able to replace the server certificate with a new one at any time, e.g. when the secret key has been lost or compromised. Case 1: If it is not possible to update the hash stored at WiKID, how would the authorized owner ever be able to replace the compromised key with a new one? Wouldn't this force him to continue using the compromised key? Case 2: If it is possible to update the hash stored at WiKID, and if the attacker was already able to register a bogus certificate at an official CA, why shouldn't he be able to update the certificate at WiKID as well? In what way is WiKID's certificate verification procedure more reliable than that of the "trusted CAs"? Hadmut
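For readers unfamiliar with the general idea, here is a minimal sketch of hash-based certificate validation, i.e. comparing the certificate a server presents against an independently stored fingerprint. This is only a generic illustration in Python, not WiKID's actual protocol; the host name and the stored value are made up, and the two questions above are precisely about how such a stored value gets created and updated.

    import hashlib
    import ssl

    # Hypothetical fingerprint obtained earlier through some out-of-band channel.
    STORED_SHA256 = "..."  # hex digest recorded when the certificate was trusted

    def current_fingerprint(host, port=443):
        """Fetch the server certificate and return its SHA-256 fingerprint."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    # The validation step is just a comparison; its value depends entirely on
    # how trustworthy the procedure behind STORED_SHA256 is (cases 1 and 2 above).
    if current_fingerprint("www.example.org") != STORED_SHA256:
        print("certificate does not match the stored fingerprint")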
Re: solving the wrong problem
When I came to Washington DC last November, my portrait and fingerprints were taken for the first time. I was the last one in the queue and the immigration officer was a nice guy, so I asked him how this is supposed to protect against terrorists. As far as I had read in the newspapers, the 9/11 attackers had entered under their real identities with their own passports. He smiled and told me that this is not about terrorism. It is about illegal immigrants. A complete criminal infrastructure has been established. As soon as my passport is stolen, or if I lose it, they will have someone who looks similar to me try to enter the US with my passport. The problem is that they do not modify or tamper with the passport in any way. The officers do not have any chance of detecting a flaw in the passport, since it is still an authentic one. Their problem is not detecting forged passports; their problem is whether the passport belongs to the person. That's why they are taking fingerprints and pictures. Once the owner of a passport has entered the USA and is in the database, they can detect if someone else is trying to enter with the same passport. Detection of the fiber structure wouldn't help here. regards Hadmut
Papers about "Algorithm hiding" ?
Hi, you most probably have heard about the court case where the presence of encryption software on a computer was viewed as evidence of criminal intent. http://www.lawlibrary.state.mn.us/archive/ctappub/0505/opa040381-0503.htm http://news.com.com/Minnesota+court+takes+dim+view+of+encryption/2100-1030_3-5718978.html Plenty of research has been done on information hiding. But this particular court case calls for "algorithm hiding" as a kind of response. Do you know where to look for papers about this subject? What about designing an algorithm that is good for encryption but which cannot be proven to be an encryption algorithm? regards Hadmut
Re: Cryptanalytic attack on an RFID chip
On Sat, Jan 29, 2005 at 01:09:32PM -0500, Steven M. Bellovin wrote: > This chip is used in anti-theft > automobile immobilizers and in the ExxonMobil SpeedPass. If I recall correctly, there are two different electronic functions in car keys. One is the theft protection, where the chip needs to authenticate when starting the engine (in Europe, e.g., Ford introduced this some years ago; the keys had a red marking, and the car came with a fully red master key (yes, both a mechanical and a cryptographic key) which allowed one to teach the car to accept additional keys). The other function is the remote control to open the doors by pressing a button on the key. Does this attack compromise the theft protection only, or the door opener as well? regards Hadmut
Re: Where to get a Jefferson Wheel ?
Dean, James wrote: >> The order of the wheels can't be changed. > So this encryption device doesn't use any key? Only the most trivial; you choose the row to transmit. From what I've seen on the web, not even that: unlike the original Jefferson wheel, these toys are not meant to let you choose an arbitrary row, but to use the row directly under the plaintext row as the ciphertext. Instead of Jefferson's line indicator, they have a sliding bar with two windows for two adjacent rows. regards Hadmut
Re: Where to get a Jefferson Wheel ?
Dean, James wrote: > The order of the wheels can't be changed. So this encryption device doesn't use any key? regards Hadmut
Where to get a Jefferson Wheel ?
Hi, does anyone know where I can get a Jefferson Wheel or a replica? regards Hadmut
Re: M-209 broken in WWII
Anish wrote: could you please translate atleast the abstract for the rest of us :-) http://www.heise.de/tp/deutsch/inhalt/co/18371/1.html Sure, here are the first paragraphs: As a German codebreaker in World War II Klaus Schmeh, 23.9.2004 For the first time a witness has reported who was involved in breaking the US cipher device M-209. Even experts did not know until a few years ago that German deciphering specialists broke ciphers of the Allies in the Second World War. But several sources document that the Germans at that time succeeded in deciphering the US cipher device M-209. Telepolis contributor Klaus Schmeh, who specialises in cryptology, has finally found a contemporary witness who participated in the decryption of M-209 messages. One of the most fascinating episodes of technical history happened in World War II. At that time British experts at the manor Bletchley Park near London broke the famous German cipher device Enigma under the strictest secrecy, using thousands of people and, for that time, state-of-the-art data processing devices. Until a few years ago the doctrine was that the Germans, in contrast to the British, underestimated the potential of the science of deciphering and could not read the radio messages of their enemies. It has been known for just a few years that this assessment is 'politically correct' but wrong. For example, the former president of the Bundesamt für Sicherheit in der Informationstechnik, BSI (German Federal Office for Information Security), Dr. Otto Leiberich, reported that the Germans broke the US cipher device M-209 in WWII, which was by no means an easy undertaking. Further documented successes in deciphering prove that the German codebreakers were among the best in the world. The explanations of Otto Leiberich also provided an important source of information for the author of this article when he wrote his recently published book "Die Welt der geheimen Zeichen - Die faszinierende Geschichte der Verschlüsselung" (The world of secret signs - the fascinating history of encryption). An excerpt from this book, published on Telepolis, caused a small sensation: an 84-year-old man from Frankfurt contacted the author and explained that he was involved in breaking the aforementioned US cipher device M-209. After there had only been second-hand reports about German codebreakers in WWII, for the first time an eyewitness appeared, who furthermore brought some completely new aspects to light. With this article the memories of this contemporary witness are published for the very first time. OK, these are the first few paragraphs. If you want more of this you should ask the publisher for a translation, because under German copyright law even the translation is a right of the author. regards Hadmut
Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]) (fwd from [EMAIL PROTECTED])
On Thu, Sep 16, 2004 at 12:41:41AM +0100, Ian Grigg wrote: > > It occurs to me that a number of these ideas could > be written up over time ... a wiki, anyone? I think > it is high past time to start documenting crypto > patterns. Wikis are not that good for discussions, and I do believe that this requires some discussion. I'd propose a separate mailing list for that. regards Hadmut
Re: public-key: the wrong model for email?
On Wed, Sep 15, 2004 at 11:39:25AM -0700, Ed Gerck wrote: > > Yes, SSL and public-key encryption are and continue to be a success for web > servers. However, the security model for protecting email with public-key > cryptography seems to be backwards, technically and business wise. Exactly. It is easy to protect web sites with SSL, but it is difficult to protect e-mail against spam with PKC. Why? Because PKC works for the Alice&Bob communication scheme. If you connect to a web server, then what you want to know, or what authentication means, is: "Are you really www.somedomain.com?" That's the Alice&Bob model. SSL is good for that. If I send you an encrypted e-mail, I want that only _you_, Ed Gerck, can read it. That's still the Alice&Bob model. PGP and S/MIME are good for that. If you send me an e-mail with a signature, and there is some particular relation between you and me where it is important for an attacker to pretend to be Ed Gerck and not just anyone, even that is still the Alice&Bob model. PGP and S/MIME still work. But that's not the way e-mail works in general. E-mail means: anyone in this world is basically able to send me an e-mail. And that's not yet an attack, because that's what I want; that's why I put my e-mail address on my web page. This is not Alice&Bob anymore. This is Anyone&Bob. The sender of an e-mail does not need to pretend to be a particular person or sender. Any identity of the 8 (10?) billion humans on earth will do. What does it mean if the message has a digital signature? It most certainly means that the sender is a human from planet earth. You could tell the same without a signature. PKC is good as long as the communication model is a closed and relatively small user group. A valid signature of an unknown sender at least means that the sender belongs to that user group. But if that 'closed user group' is all mankind, then this meaning becomes useless. A digital signature is useful only if you know the sender, or if you can tell from the signature that the sender belongs to a closed user group (e.g. is a citizen of some jurisdiction). But this is not the Alice&Bob model anymore. That's not what PKC is good for. There's another problem: since e-mail does not require forging mail from a particular identity, but just from anyone, you run into the problem that there are plenty of insecure keys floating around. When Alice keeps her key well protected, an attacker has no chance. But for e-mail, there is not just one Alice. There are about 500 million users. Let's imagine that everyone has a public/secret key pair. How many of them use a Windows computer vulnerable to the latest worm collecting all secrets from their computer? If only 0.2% of those keys were compromised, that's still 1 million secret keys available for spammers etc. Let's assume that these 1,000,000 keys were compromised within one year. That's an average of 2,700 keys a day. So the attackers/spammers/phishers have 2,700 fresh keys every day to forge e-mail with, and most of the owners will not even realize that their key was stolen within that day. This is where reality and the science of cryptography differ. It does not work, because not all attackers agree to play the Alice&Bob game. regards Hadmut
Forensic: Who gave this crypto talk?
Hi, I have again one of these special, strange, freaky questions. I'm still investigating some "unusual activities" in science and cryptography. There are some handwritten notes; they seem to be some kind of transcript of slides from a talk about cryptography. I need to find out when, where, and by whom that talk was given. These notes already existed at the end of 1997, so the talk must have been given in 1997 or before. The talk is about cryptography and system design theory. It is about 'layers', such as physics, electrical engineering, boolean functions, boolean circuits, algebra of polynomial power series, operating system, automata theory. It mentions an "access & authentication description language for a modified secure unix-pw protocol", and comes to the conclusion that "crypto can act as a system science". Gus Simmons is mentioned several times, but this might not have been part of the talk but a personal annotation by the person who made the transcript. Does anyone know about such a talk? (The notes are available at http://www.danisch.de/tmp/discussion.pdf ) regards Hadmut
Re: potential new IETF WG on anonymous IPSec
On Mon, Sep 13, 2004 at 02:41:21PM -0400, Sam Hartman wrote: > > >> No. opportunistic encryption means I have retrieved a key or > >> cert for the other party, but do not know whether it is > >> actually the right cert. > > Tim> If the key is retrieved from the other end of a TCP > Tim> connection (like vanilla ssh works the first time), is that > Tim> included within the definition of "opportunistic encryption"? > > Yes. Be careful. I believe this is not that simple. It depends on what you use the key for. If it is used for encryption, then something like "opportunistic encryption" exists. After all, using an unverified key for encryption is not worse than using no encryption. So even if the key might be the attacker's, nothing is lost compared to plain communication. But avoiding faked TCP resets is also a matter of authenticity. Does 'opportunistic authentication' exist? regards Hadmut
Re: Spam Spotlight on Reputation
On Mon, Sep 06, 2004 at 11:52:03AM -0600, R. A. Hettinga wrote: > > E-mail security company MX Logic Inc. will report this week that 10 percent > of all spam includes such SPF records, I mentioned this problem more than a year ago in the context of my RMX draft (SPF, CallerID and SenderID are based on RMX). Interestingly, nobody really cared about this major security problem. All RMX derivatives block forged messages (more or less). But what happens if the attacker doesn't forge? That's a hard problem. And a problem known from the very beginning of the sender verification discussion. The last 17 months of work in ASRG (Anti-Spam Research Group, IRTF) and MARID (MTA Authorization Records In DNS, IETF) are an excellent example of how not to design security protocols. This was all about marketing, commercial interests, patent claims, giving interviews, spreading wrong information, undermining development, propaganda. It completely lacked proper protocol design, a precise specification of the attack to defend against, and engineering of security mechanisms. It was a kind of religious war. And while people were busy with religious wars, spammers silently realized that this is not a real threat to spam. Actually, it was sometimes quite the opposite: I was told of some cases where MTAs were configured to run every mail through SpamAssassin. SpamAssassin gives a message a better score if the sender has a valid SPF record. Since most senders with valid records were the spammers, spam received a better score than plain mail, which is obviously the opposite of security. People spent more time on marketing and public relations than on problem analysis and verification of the solution. That's the result. What can we learn from this? Designing security protocols requires a certain level of security skills and discipline about what you want to achieve. Although RMX/SPF/CallerID/SenderID does not make use of cryptography, similar problems can sometimes be found in the context of cryptography. Knowing security primitives is not enough; you need to know how to assemble them into a security mechanism. Good lectures are given about the mathematical aspects of cryptography. But are there lectures about designing security protocols? I don't know of any yet. And there is a new kind of attack: security protocols themselves can be hijacked and raped by patent claims. regards Hadmut
Re: Compression theory reference?
On Wed, Sep 01, 2004 at 04:02:02PM +1200, Peter Gutmann wrote: > > comp.compression FAQ, probably question #1 given the number of times this > comes up in the newsgroup. > > (I've just checked, it's question #9 in part 1. Question #73 in part 2 may > also be useful). Thanks, that's a pretty good hint, especially because it contains an explicit statement, and because it's an FAQ, which makes it easy to show that the university's claim is not just wrong, but silly. :-) regards Hadmut
Re: Compression theory reference?
On Tue, Aug 31, 2004 at 05:07:30PM -0500, Matt Crawford wrote: > > Plus a string of log(N) bits telling you how many times to apply the > decompression function! > Uh-oh, now goes over the judge's head ... Yeah, I just posted a lengthy description of why I think that this counterexample is not a counterexample. The problem is that if you ask for a string of log(N) bits, then someone else could take this as a proof that this actually works, because a string of log(N) bits is obviously shorter than the message of N bits, thus the compression scheme is working. Hooray! The problem is that the number of iterations is not on the order of N, but on the order of 2^N, so it takes log2(around 2^N) = around N bits to store the number of iterations. The recursion converts a message of N bits recursively into a message of one or zero bits (to your taste), *and* a number which takes around N bits to store. Nothing is won. But prove that. This recursion game is far more complicated than it appears to be. Note also that storing a number in reality takes more than log(N) bits. Why? Because you don't know N in advance. We don't have any limit for the message length. So your counting register theoretically needs infinitely many bits. When you're finished you know how many bits your number took. But storing your number needs an end symbol or a tristate bit (0, 1, void). That's a common mistake. When determining the compression rate for a file, people often forget that some information is not stored in the file itself, but in the file system, e.g. the file length (telling you where the compressed data stops) and the file name (telling you that the file was compressed). That's basically the same problem. thanks and regards Hadmut
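A toy illustration of the point, under the assumption that such a reversible step function f exists at all: take f to be simple decrement of the message read as a number. Iterating f always ends at 0, but the iteration count the decompressor needs is exactly the original message, so it still needs about as many bits.

    def f(x):        # one reversible "compression" step: decrement
        return x - 1

    def f_inv(x):    # the inverse step needed for decompression: increment
        return x + 1

    message = 0b101101          # a 6-bit message, value 45
    x, steps = message, 0
    while x > 0:                # iterate f down to a single bit's worth of data
        x, steps = f(x), steps + 1

    # x is now 0, but the decompressor must be told 'steps', and that number is
    # exactly the original message, so it needs just as many bits as before.
    assert steps == message
    assert steps.bit_length() == message.bit_length()

    # Decompression: apply f_inv 'steps' times to recover the message.
    y = 0
    for _ in range(steps):
        y = f_inv(y)
    assert y == message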
Re: Compression theory reference?
On Tue, Aug 31, 2004 at 04:56:25PM -0400, John Denker wrote: > 4) Don't forget the _recursion_ argument. Take their favorite > algorithm (call it XX). If their claims are correct, XX should > be able to compress _anything_. That is, the output of XX > should _always_ be at least one bit shorter than the input. > Then the compound operation XX(XX(...)) should produce something > two bits shorter than the original input. If you start with a > N-bit message and apply the XX function N-1 times, you should be > able to compress each and every message down to a single bit. I have often heard that example and I have used it myself several times. I do not use it anymore, because it is not that easy. There's a major flaw in this example, and therefore it is not really a counterexample. If the faculty found that flaw, I'd be in a bad position. You could define some reversible, bizarre function f that does exactly that job, i.e. for a given message m you apply f again and again and after some finite number of calculations you'll find that f(...f(m)...) = x where x = 0 or 1. So this is possible. Since the function is reversible, let's call the inverse function f', and you'll find m = f'(...f'(x)...) where x is still 0 or 1. Oops. What happened? Why does this work? Because the commonly used counterexample has a flaw. The reason is that you can invert f(...f(m)...) only if you count the number of times you applied f. You need to know that number in order to revert = decompress, because you need to apply f' exactly that many times. You don't have any other stop condition. Applying f' is not a proper recursion, it's an iteration. So your information is actually stored in this number, not in the 0 or 1. The output of the compression function is not 0 or 1, it is that hidden number telling you how often you need to apply f to reach 0 or 1. So simply offering it as a contradiction that there cannot be such a function, because it could be applied recursively to yield a single bit, is insufficient; it is not a contradiction. You need to consider the recursion depth and the inversion. But then it gets so complicated that most people don't understand it anymore. And the argument that reaching a single bit recursively is a contradiction is gone. You need to store a number. So what? Who says that this number isn't shorter than the plain message? And suddenly you're back deep in theory. That's why I don't like that example. It's convincing at first glance, but I don't believe that it is correct. > > 1) Get a few "expert witnesses" to certify that your position is > certainly not a personal fantasy. (I assume German jurisprudence > has something akin to the US notion of expert witnesses, right?) I did. Unfortunately I didn't find a German one, because it is very difficult to find a German professor willing to testify against any other. It's a tight community. I found some outside Germany. But they didn't give me a paper with a signature, just e-mails. We'll see whether the court will accept that. I've sent those e-mails to the dean of the faculty of computer science to convince him that the faculty is wrong. As a result, he configured the mail relay of the faculty to drop any e-mail containing my last name anywhere in the header. It's ridiculous, and I would laugh if it weren't exactly the faculty that is said to be the best German faculty of computer science. > 2) Try to get the burden-of-proof reversed. Very difficult.
I have meanwhile become an expert in German examination law; it usually requires the examinee to prove that the examiner's opinion is wrong. But since I have already proven several times that the university was lying intentionally to the court, they might take that into consideration. After all, I have brought this forward, and I have done my duty. Now it should be up to the university to respond. They haven't commented for more than four years now. > The opposition > are claiming that known algorithms do the job. Get them to be > specific about which algorithms they are referring to. Then > file with the court some disks, each containing 1.44 megabytes > of data. They say LZW and MTF. I have already given an example for LZW. They don't care. I've told them to take any random string from /dev/random under Linux. They don't care. The German principle is that a faculty is always right by definition. > 3) Diagram the pigeon-hole argument for the judge. See > diagrams below. I'll try that. Thanks. regards Hadmut
Compression theory reference?
Hi, I need a literature reference for a simple problem of encoding/compression theory: it can easily be shown that there is no lossless compression method which can effectively compress every possible input. The proof is easy: in a first step, consider all possible messages of length n bits, n>0. There are 2^n different ones. But there are only (2^n)-1 shorter messages, so there is no injective encoding that maps every message to a shorter one. And since all codewords of length less than n are already needed for the shorter messages, some message must even be expanded, i.e. "compressed" to more than 100% of its length. But a non-computer-science person does not understand that. Does anybody know a book about coding theory which explicitly states the impossibility of such a compression method in plain language? regards Hadmut
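For what it's worth, the counting step can even be demonstrated by brute force for small n; the little sketch below just enumerates the message and codeword counts and is only meant as an illustration of the pigeonhole argument, not as a reference.

    # Pigeonhole illustration: for every n there are more n-bit messages than
    # there are strictly shorter bit strings, so no injective "always shorter"
    # encoding can exist.
    for n in range(1, 9):
        messages = 2 ** n            # distinct messages of length n
        shorter = 2 ** n - 1         # bit strings of length 0 .. n-1 combined
        assert messages > shorter
        print(f"n={n}: {messages} messages, only {shorter} shorter strings")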
cryptograph(y|er) jokes?
Hi, does anyone know good jokes about cryptography, cryptographers, or security? regards Hadmut [Moderator's note: I know of several security systems that are jokes in and of themselves, but that doesn't seem to be what you meant. :) --Perry]
Re: The future of security
On Mon, Apr 26, 2004 at 08:21:43PM +0100, Graeme Burnett wrote: > > Would anyone there have any good predictions on how > cryptography is going to unfold in the next few years > or so? I have my own ideas, but I would love > to see what others see in the crystal ball. My guess is that it is unpredictable. As with so many other things, it depends on so many coincidences, on marketing, on politics. But here is what I do expect: - I don't expect much progress in the mathematics and theory of cryptography. Very few inventions will make it out of the ivory tower, if any at all. Key lengths will increase. We'll play RSA with 4096 or 8192 bits. They will find that quantum computers may be fast, but are still bound by computational complexity. - SSL/TLS will become even more of a de facto standard in open source software and (new?) protocols. It will make its way into the standard libraries of programming languages (e.g. as it did for Ruby). - I don't expect that we'll ever have a common PKI for ordinary people with significant distribution. It's like with today's HTTPS: the big players have commercial certificates, ordinary people use passwords and simple authentication mechanisms (like receiving a URL with a random number by e-mail). - I guess the most important crypto applications will be: - HTTPS, of course - portable storage equipped with symmetric ciphers, such as USB sticks and portable hard disks - VPN routers - Voice over IP - DRM - maybe digital passports and credit cards - simple auth tokens like RSA SecurID and Aladdin eToken will become more commonly used. - As a consequence, I guess that politicians will reopen the 1997 discussion of prohibiting strong encryption. They already do. - Maybe we'll have less crypto security in the future than we have today. 5-10 years ago I knew many more people using PGP than today. Most modern mail user agents are capable of S/MIME, but it's hard to find someone making use of it. I'm a consultant for many companies, but not a single one of them uses it. Most modern MTAs support TLS, but to my knowledge less than 3% of messages are actually TLS-encrypted in SMTP. It's strange, but law will become more important than cryptography. In summary, I don't expect any big innovations. Not more than within the last 10 years. But I'm pretty sure that security will become more and more important, and that's where I expect innovation and progress. Security doesn't necessarily mean cryptography. regards Hadmut
Re: Do Cryptographers burn?
On Sat, Apr 03, 2004 at 11:49:15PM +0100, Dave Howe wrote: > > If you mean he gave a false assurance of the security of a product for a > friend - why would he do that? I can't think of any of my friends who would > want me to tell them sofware was secure if it wasn't. ... > I suppose that depends on his integrity and how much his reputation and > skill would be worth to his employers if it became known that he gave false > assurances - and it would only be a matter of time before some other > cryptoanalyst found the fault he found and ignored. Thanks for the opinions. Maybe I'll explain a little bit more about the background: As some may already have heard, I'm in a legal dispute with a German university. I wrote a dissertation in 1998, and the supervisor announced that he would give it a good grade. I then resigned from my job as an assistant, effective on the date of the examination. I didn't know that the supervisor and another professor had made a plan to implement a security infrastructure for the faculty and to found a company, and that this plan included me doing the work in the year after the examination. When I resigned, they couldn't fulfil the promises they had given to the faculty, and thus cancelled the examination to pressure me into staying at the university and doing the implementation. I refused to pay that kind of "protection money", and thus they rejected my dissertation with false expert reports. The advisor's report (he claims to be one of the world's top cryptographers) is just a concatenation of arbitrary nonsense, wrong even in the basics of computer science. E.g. he claims that LZ and MTF would effectively compress just anything. As an example of the need to distinguish between payload and control information, I had written that when phoning, not only speech has to be transmitted, but also phone numbers and signals about termination of the connection. He rated this as completely wrong and as giving wrong information, because phone numbers would be used with today's ISDN telephones only. As the reason he gave an obituary in the London Times saying that Donald Davies had died. Or he blames me for not citing literature that hadn't even been published when I submitted the dissertation. He claims that rate-distortion theory and Shannon coding allow packing n+1 independent bits into a single message of n bits (even with small n, or n=1. Just try to do it.). The second examiner said the dissertation was completely wrong but refused to give any explanation. I filed a lawsuit. During the lawsuit, the university informed me that they would never accept me succeeding in the examination. They would abuse a gap in German examination law: courts are restricted to cancelling bad or wrong examinations, but they cannot award a positive examination result. All they can do is order the university to repeat the examination. The university informed me that they had decided that they do not wish me to work in science and that I therefore had to accept failing the examination. I would have to modify my dissertation and include those mistakes the examiners had falsely claimed, in order to confirm that their rejection was correct. If I did that, I would be allowed a second try with a new dissertation and would receive a bad grade, which would keep me out of science. If I did not agree, they announced they would keep me in an endless loop of false expert reports. Every single one will take me years to sue against. I refused that "deal". I won both at the administrative court and at the appellate administrative court.
The latter found that the second examiner could never have read the largest chapter and had not even opened those pages of the dissertation. This was already sufficient to cancel the examination decision. The university then withdrew the decision to avoid a judgment against it. Obviously, this was an extreme disgrace for the university. The university had to obtain a new second expert report. If this report did not confirm what the first report had said, namely that the dissertation was completely wrong, the advisor would face being fired, severe compensation claims, and ultimate disgrace. Within less than two weeks the university managed to get a third rejecting report, this time from a professor outside Germany, who is indeed known as one of the top cryptographers and a member of the board of directors of the IACR. I filed a new lawsuit and could easily prove that this professor had intentionally given a false report (obviously to protect the supervisor from legal trouble): - He wrote the report in less than two days. - The report is less than a page long. He does not give any reasons and claims that he cannot be expected to justify his report. Reasoning is a strict requirement under German law. - There is no "link" between the report and the dissertation. He obviously didn't read it. - He didn't find a single mistake. He just says that everything is already known and taken from the literature.
Do Cryptographers burn?
Hi, this is not a technical question, but a rather academic or abstract one: do cryptographers burn? Cryptography is a lot about math, information theory, proofs, etc. But there's a certain level at which all this is too complicated and time-consuming to follow all those theories and claims. At a certain point cryptography is based on trusting the experts. Is there anyone on this list who can claim to have read and understood all those publications about cryptography? Is there anyone who can definitely tell whether the factorization and discrete logarithm problems are hard or not? Today's cryptography is to a certain degree based on trusting a handful of experts, maybe the world's top 100 (300? 1000?) in cryptography. Does this require those people to be trustworthy? What if a cryptographer is found to have intentionally given a false expert opinion on cryptography and security, just to do a colleague a favor, because he erroneously assumed the opinion would be kept secret? Would such a cryptographer be considered burned? Wouldn't he give more false opinions once he is paid or asked to by his government? I'd be interested in your opinions. regards Hadmut
Canon's Image Data Verification Kit DVK-E2 ?
Hi, Canon provides a so-called Data Verification Kit which allegedly allows one to detect whether a digital image has been tampered with since it was taken with a digital camera. I found the announcement at http://www.dpreview.com/news/0401/04012903canondvke2.asp They say: How it works The kit consists of a dedicated SM (secure mobile) card reader/writer and verification software. When the appropriate function (Personal Function 31) on the EOS-1D Mark II or EOS-1Ds is activated, a code based on the image contents is generated and appended to the image. When the image is viewed, the data verification software determines the code for the image and compares it with the attached code. If the image contents have been manipulated in any way, the codes will not match and the image cannot be verified as the original. So some kind of hash code or digital signature is generated. Does anybody know details about this? I have never heard that there are digital mass-market cameras which can generate digital signatures. But if the signature is generated inside the SM card only, why should the PC where the image was modified be unable to write the modified image the same way a digital camera writes an unmodified one? (And, by the way, how do they detect that the picture was taken of a real scene and is not a reproduction of a modified and printed picture?) I guess the secure mobile card generates some signature, and they presume that the attacker does not have access to the memory card. This would protect the image not from the moment it was taken, but only from the moment it was copied from the card to other media. And it would require trusting the photographer. Is there a technical description of those secure mobile cards available? I didn't find any details, just marketing blah-blah. regards Hadmut
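As an illustration of the kind of scheme that could be behind this (this is NOT Canon's actual mechanism, whose details I don't know; the key name and code are made up): a secret held by the camera or the SM card is used to compute a MAC over the image data, and the verification software recomputes and compares it. The sketch also shows why everything collapses once an attacker gets hold of the secret, which is exactly the concern above.

    import hashlib
    import hmac

    # Hypothetical secret assumed to live inside the camera / SM card.
    CAMERA_SECRET = b"hypothetical key embedded in the device"

    def sign_image(image_bytes):
        """Compute the verification code appended to the image."""
        return hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()

    def verify_image(image_bytes, appended_code):
        """Recompute the code and compare it with the one stored in the image."""
        return hmac.compare_digest(sign_image(image_bytes), appended_code)

    # Anyone who extracts CAMERA_SECRET can produce a "verifiable" code for a
    # manipulated image, so the scheme only protects against tampering by
    # people who never had access to the secret.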
OOAPI-SSL/TLS (Was: Simple SSL/TLS - Some Questions)
On Fri, Oct 03, 2003 at 05:55:25PM +0100, Jill Ramonsky wrote:
> Having been greatly encouraged by people on this list to go ahead with a
> new SSL implementation,

That's a pretty good idea; I also encourage you (and volunteer to support it).

> The main
> point of confusion/contention right now seem to be (1) should it be in C
> or C++?,

I definitely vote for C++, for several reasons. You already mentioned plenty of reasons yourself, namely the security advantages of C++. But be warned: in contrast to modern scripting languages, C++ is not automatically immune against buffer overruns etc. It takes some discipline to maintain a good programming style in C++. The main advantage I see is the opportunity to have a good, object-oriented design of the API, to set an example of a good and usable crypto API.

Everyone here has his own favourite language; I meanwhile prefer Ruby. I had to write a CA some months ago and didn't find a good language with SSL and certificate management support, except for Ruby. Michal Rokos <[EMAIL PROTECTED]> was writing the glue code to use the openssl library with Ruby at the time, and I found it very comfortable to use SSL from a scripting language. It involved a big heap of debugging, though: reading the openssl API and source code, discussing requirements with Michal, asking him for extensions, etc., since it is quite difficult to implement all features of openssl, and many of them are not logical. This project showed the shortcomings of openssl; it is not really usable and complete software. This causes insecurity, because it is too difficult for application writers to use it and to support all its features.

I'd therefore propose the following: design two (or more) object-oriented APIs for

- cryptographic primitives
- non-communication-oriented functions (key and certificate management, S/MIME message handling, ...)
- communication-oriented functions (SSL/TLS)

but do not stick too tightly to C++. The design must be applicable to all modern object-oriented languages. Then do a C++ implementation of the API (read: header files) and see whether this is possible without tricks. Also have the API defined in other languages such as Python, Ruby, Java, ... Take care that the design is easy to read, easy to understand, easy to debug. Make use of object-oriented design where possible. Now implement the library itself in C++, while others write the glue code for other languages simultaneously.

As a result, there will be a language-independent, object-oriented meta-API, describing the library virtually for all languages. For every supported language there is a "translated API" and a library to use. For C++, this is a genuine library; for other languages it will be glue code plus the C++ library.

This would bring secure programming a step forward towards modern programming, and ease and support the use of SSL/TLS/...

I am currently quite happy with the way Michal Rokos wrapped openssl into an object-oriented API, but it would be good to have this in more languages; it still allows improvements and is still incomplete.

regards Hadmut - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
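To make the proposal a bit more tangible, here is a rough C++ sketch of what such a language-independent meta-API could look like, expressed as abstract interfaces for the three areas listed above. All class and method names are invented for illustration; they do not correspond to any existing library.

// Sketch of a language-neutral, object-oriented crypto/TLS meta-API,
// written as C++ abstract interfaces. All names are illustrative only.
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Area 1: cryptographic primitives.
class Digest {
public:
    virtual ~Digest() = default;
    virtual void update(const Bytes& data) = 0;
    virtual Bytes finish() = 0;  // returns the digest value
};

// Area 2: non-communication-oriented functions (keys, certificates, S/MIME).
class Certificate {
public:
    virtual ~Certificate() = default;
    virtual std::string subject() const = 0;
    virtual std::string issuer() const = 0;
    virtual bool verify(const Certificate& issuer_cert) const = 0;
};

class KeyStore {
public:
    virtual ~KeyStore() = default;
    virtual std::shared_ptr<Certificate> load(const std::string& name) = 0;
    virtual void store(const std::string& name, const Certificate& cert) = 0;
};

// Area 3: communication-oriented functions (SSL/TLS).
class TlsSession {
public:
    virtual ~TlsSession() = default;
    virtual std::size_t write(const Bytes& data) = 0;
    virtual Bytes read(std::size_t max_len) = 0;
    virtual std::shared_ptr<Certificate> peer_certificate() const = 0;
    virtual void close() = 0;
};

class TlsContext {
public:
    virtual ~TlsContext() = default;
    virtual void set_key_store(std::shared_ptr<KeyStore> ks) = 0;
    virtual std::unique_ptr<TlsSession> connect(const std::string& host, int port) = 0;
};

The same interface shapes could be mirrored one-to-one in Ruby, Python, or Java glue code, while the single concrete implementation behind them stays in C++.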
Re: quantum hype
On Sat, Sep 13, 2003 at 09:06:56PM +, David Wagner wrote:
> You're absolutely right. Quantum cryptography *assumes* that you
> have an authentic, untamperable channel between sender and receiver.

So, as a result, quantum cryptography depends on the known methods to provide authenticity and integrity. Thus it cannot be any stronger than the known methods. Since the known methods are basically the same as for confidentiality (DLP, factoring), and authentic channels can be turned into confidential channels by the same methods (e.g. DH), quantum cryptography cannot be stronger than the known methods, I guess.

On the other hand, quantum cryptography is based on several assumptions. Is there any proof that the polarisation of a photon can be read only once, and only if you know how to turn your detector?

AFAIK quantum cryptography completely lacks the binding to an identity of the receiver. Even if it is true that just a single receiver can read the information, it is still unknown _who_ that is. All you know is that you are sending information which can be read by a single receiver only. And you hope that this receiver is the good guy.

Hadmut - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
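As a toy illustration of the step "an authentic channel plus DH already gives you a confidential channel", here is a Diffie-Hellman exchange with deliberately tiny, insecure parameters; the modulus, generator, and exponents are made up and only serve to show the mechanics.

// Toy Diffie-Hellman exchange, illustrating how values sent over an
// authentic-but-public channel yield a shared secret. The parameters are
// far too small to be secure; this is purely illustrative.
#include <cstdint>
#include <iostream>

// Modular exponentiation: computes (base^exp) mod m.
static std::uint64_t modpow(std::uint64_t base, std::uint64_t exp, std::uint64_t m) {
    std::uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    const std::uint64_t p = 2147483647;  // toy prime modulus (2^31 - 1)
    const std::uint64_t g = 7;           // toy generator

    std::uint64_t a = 123456;            // Alice's secret exponent
    std::uint64_t b = 654321;            // Bob's secret exponent

    // Only these two values travel over the authentic (but public) channel.
    std::uint64_t A = modpow(g, a, p);
    std::uint64_t B = modpow(g, b, p);

    // Each side combines the other's public value with its own secret.
    std::uint64_t k_alice = modpow(B, a, p);
    std::uint64_t k_bob   = modpow(A, b, p);

    std::cout << "Alice's key: " << k_alice << "\n"
              << "Bob's key:   " << k_bob   << "\n";
    return 0;
}

Both sides end up with the same key from values that only ever crossed the public channel, so the confidentiality rests on the same discrete-log assumption referred to above.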
Re: invoicing with PKI
On Mon, Sep 01, 2003 at 12:23:28PM -0400, Ian Grigg wrote:
> The dream of PKI seems to revolve around these major areas:
>
> 1. invoicing, contracting - no known instances
> 2. authentication and authorisation - SSL client
>    side certs deployed within organisations.
> 3. payments
> 4. channel security (SSL)
> 5. email (OpenPGP, S/MIME)
>
> In terms of actual deployed PKIs, the only significant
> cases that I know of, deployed outside of organisations
> and in widespread use are:
>
>    HTTPS (141k, see below), and
>    OpenPGP ("millions" says PGP Inc, so let's call it 100k or so).

The reason I was asking is: I had a dispute with someone who claimed that cryptography is by far the most important discipline of information and communication security, and that its transition from an art to a science was triggered by Shannon's paper in 1949 and the Diffie/Hellman paper of 1976 (the discovery of public key systems).

Reality is different: while firewalls, content filters (virus/spam/porn filters), IDS, high-availability systems, etc. become more and more important, encryption and signatures, especially those based on PKIs, don't seem to become more relevant (except for HTTPS/TLS).

There was an interesting talk given at the Usenix conference by Eric Rescorla (http://www.rtfm.com/TooSecure-usenix.pdf; unfortunately I did not have the time to attend the conference) about cryptographic real-world protocols and why they failed to improve security.

From the logfiles I've seen I'd estimate that more than 97% of SMTP relays do not use TLS at all, not even the opportunistic mode without a PKI.

I actually know many companies that can live pretty well and securely without cryptography, but not without a firewall and content filters. Yet many people still insist on the claim that cryptography is by far the most important and the only scientific form of network security.

Is cryptography where security took the wrong branch?

regards Hadmut - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
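The opportunistic-TLS point is easy to spot-check: connect to a relay's port 25 and look for STARTTLS in the EHLO reply. A rough sketch for a POSIX system follows; the HELO name is a placeholder, and the reply handling is deliberately simplistic (no proper parsing of multi-line SMTP replies, no TLS handshake).

// Minimal check whether an SMTP server advertises STARTTLS (opportunistic TLS).
// Assumes a POSIX system; it only inspects the EHLO capability list.
#include <cstdio>
#include <cstring>
#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

// Read whatever the server sends next (good enough for a sketch).
static std::string read_reply(int fd) {
    char buf[4096];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n <= 0) return "";
    buf[n] = '\0';
    return std::string(buf);
}

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: starttls-check <mailhost>\n");
        return 1;
    }

    addrinfo hints{};
    hints.ai_socktype = SOCK_STREAM;
    addrinfo* res = nullptr;
    if (getaddrinfo(argv[1], "25", &hints, &res) != 0 || res == nullptr) {
        std::fprintf(stderr, "cannot resolve %s\n", argv[1]);
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        std::fprintf(stderr, "cannot connect to %s:25\n", argv[1]);
        freeaddrinfo(res);
        return 1;
    }
    freeaddrinfo(res);

    read_reply(fd);                              // 220 greeting
    const char* ehlo = "EHLO starttls-check.example\r\n";
    send(fd, ehlo, std::strlen(ehlo), 0);
    std::string reply = read_reply(fd);          // capability list

    bool has_starttls = reply.find("STARTTLS") != std::string::npos;
    std::printf("%s: STARTTLS %s\n", argv[1],
                has_starttls ? "offered" : "not offered");

    send(fd, "QUIT\r\n", 6, 0);
    close(fd);
    return 0;
}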
Re: invoicing with PKI
Hi,

On Thu, Jul 17, 2003 at 04:27:52PM -0400, Ian Grigg wrote:
> Does anyone know any instances of invoicing and
> contracting systems that use PKI and digital orders?
>
> That is, purchasing departments and selling departments
> communicating with digitally signed contracts, purchase
> orders, delivery confirmations and so forth.
>
> And, the normal skeptical followup question, do they
> work, in the sense of delivering ROI, or are they just
> hopeful trials?

Beyond invoicing/contracting, which applications of PKI are there in e-business or related areas anyway (apart from the standard tools SSL, X.509, ...)?

Is there a survey of where in e-business cryptography is actually being used between customers and providers? How many shops actually use SET for payment?

regards Hadmut - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]