Re: [Cryptography] encoding formats should not be committee'ized
On 2013-10-03 09:49, Peter Gutmann wrote:

> Jerry Leichter <leich...@lrw.com> writes: My favorite more recent example of the pitfalls is TL1, a language and protocol used to manage high-end telecom equipment. TL1 has a completely rigorous syntax definition, but is supposed to be readable. For those not familiar with TL1, "supposed to be readable" here means encoded in ASCII rather than binary.
>
> It's about as readable as EDIFACT and HL7.

Then that puts it in the same category as HBCI version 1. Sure, it was rigorous. Sure, it was unambiguous. Sure, it was ASCII-encoded. But human-readable? I implemented that protocol once, and can assert that, after reading more HBCI messages than was probably good for me, I felt decidedly less than human.

Fun,
Stephan

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] The paranoid approach to crypto-plumbing
On 2013-09-17 07:37, Peter Gutmann wrote:

> Tony Arcieri <basc...@gmail.com> writes: On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz <fra...@pwpconsult.com> wrote: After Rijndael was selected as AES, someone suggested the really paranoid should super-encrypt with all 5 finalists [...]. I wish there was a term for this sort of design in encryption systems beyond just "defense in depth". AFAICT there is no such term. How about the "Failsafe Principle"? ;)
>
> How about "Stannomillinery"?

I like "Stannopilosery" better, but the first half is a keeper. Or, perhaps a bit incongruously, "Stannopsaffery".

Fun,
Stephan
Re: [Cryptography] Hashes into Ciphers (was Re: FIPS, NIST and ITAR questions)
On 2013-09-04 16:37, Perry E. Metzger wrote:

> Phil Karn described a construction for turning any hash function into the core of a Feistel cipher in 1991. So far as I can tell, such ciphers are actually quite secure, though impractically slow. Pointers to his original sci.crypt posting would be appreciated, I wasn't able to find it with a quick search.

I remember having reviewed a construction by Peter Gutmann, called a Message Digest Cipher, at around that time, which also turned a hash function into a cipher. I do remember that at that time I thought it was quite secure, but I was just a little puppy then. Schneier reviews this construction in Applied Cryptography and can't find fault with it, but doesn't like it on principle (using the hash function for something for which it is not intended).

It works like this. Let h be the incremental hash function, i.e., the compression function that you use to hash data piecewise. In programming terms, this function is usually called XXXUpdate() if XXX is the name of the hash function. Then, if P(1), ..., P(n) are your plaintext blocks and K is your key, compute:

    C(1) = P(1) XOR h(IV, K)
    C(j) = P(j) XOR h(C(j-1), K), for 1 < j <= n.

Decryption is a very similar operation:

    P(1) = C(1) XOR h(IV, K)
    P(j) = C(j) XOR h(C(j-1), K), for 1 < j <= n.

It's just running the compression function in CFB mode.

Fun,
Stephan
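As a rough illustration of the scheme above (this is my own toy sketch, not Gutmann's actual Message Digest Cipher: I use a full SHA-256 call over previous-block-plus-key as a stand-in for the compression function h, and all names are mine):

```python
import hashlib

BLOCK = 32  # SHA-256 digest size; one keystream block per plaintext block

def _keystream_block(prev: bytes, key: bytes) -> bytes:
    # Stand-in for h(chaining_value, K): hash the previous ciphertext
    # block together with the key to get the next keystream block.
    return hashlib.sha256(prev + key).digest()

def encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        ks = _keystream_block(prev, key)
        c = bytes(p ^ k for p, k in zip(block, ks))
        out += c
        prev = c.ljust(BLOCK, b"\0")  # pad a short final block for chaining
    return out

def decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # CFB-style: the keystream depends only on ciphertext, so decryption
    # regenerates the same keystream and XORs it off.
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        ks = _keystream_block(prev, key)
        out += bytes(c ^ k for c, k in zip(block, ks))
        prev = block.ljust(BLOCK, b"\0")
    return out
```

A round trip `decrypt(encrypt(msg, key, iv), key, iv)` returns `msg`; note that, as in CFB, the ciphertext is the same length as the plaintext.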
Re: A slight modification of my comments on PKI.
On Jul 29, 2010, at 22:23, Anne Lynn Wheeler wrote:

> On 07/28/2010 10:34 PM, d...@geer.org wrote: The design goal for any security system is that the number of failures is small but non-zero, i.e., N > 0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes. Regulatory compliance, on the other hand, stipulates N == 0 failures and is thus neither calibratable nor cost-effective. Whether the cure is worse than the disease is an exercise for the reader.
>
> another design goal for any security system might be security proportional to risk.

Warning: self-promotion (well, rather: project promotion) ahead.

This is exactly what we are trying to do in an EU project in which I'm involved. The project, called MASTER, is more concerned with regulatory compliance than security, even though security of course plays a large role. The insight is that complex systems will probably never have N = 0 (in Dan's terms), so we will have to calibrate the controls so that N becomes commensurate with the risk. To do this, we have two main tools:

First, there is a methodology that describes in detail how to break down your high-level regulatory goals (which we call control objectives) into actionable pieces. This breakdown tells you exactly what you need to control, and how. It is controlled by risk analysis, so you can say at any point why you made certain decisions, and conversely, if a regulation changes, you know exactly which parts of your processes are affected (assuming the risk analysis doesn't have to be completely redone as part of the regulatory change).

Second, as part of this breakdown process, you define, for each broken-down control objective, indicators. These are metrics that indicate (1) whether the process part you are currently looking at is compliant (i.e., has a low enough N), and (2) whether this low N is pure luck or the result of well-placed and correctly functioning controls.

One benefit of having indicators at every level of breakdown is that you get metrics that mean something *at this level*. For example, at the lowest level, you might get "number of workstations with outdated virus signatures", while at the top you might get "money spent in the last year on lawsuits asserting a breach of privacy". This forces one to do what Andrew Jaquith calls "contextualisation" in his book, and prevents the approach sadly taken by so many risk-analysis papers, namely simply propagating risk values from the leaves of a risk tree to the root using some propagation rule, leaving the root with a beautifully computed, but sadly irrelevant, number. Another benefit is that if some indicator is out of its allowed band, the remedy will usually be obvious to a person working with that indicator. In other words, our indicators are actionable.

The question of whether the cure is worse than the disease can't be settled definitively by us. We have done some evaluation of our approach, and preliminary results seem to indicate that users like it. (This is said with all the grains of salt usually associated with preliminary user studies.) How much it costs to deploy is unknown, since the result of our project will be a prototype rather than an industrial-strength product, but our approach allows you to deploy only parts.

Best,
Stephan

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
Re: Against Rekeying
On Mar 23, 2010, at 22:42, Jon Callas wrote:

> If you need to rekey, tear down the SSL connection and make a new one. There should be a higher level construct in the application that abstracts the two connections into one session.

... which will have its own subtleties and hence probability of failure.

Stephan
Re: Possibly questionable security decisions in DNS root management
On Oct 22, 2009, at 16:12, Perry E. Metzger wrote:

> I don't think anyone is smart enough to understand all the implications of this across all the systems that depend on the DNS, especially as we start to trust the DNS because of the authentication.

We trust the DNS already. As far as I can follow the discussion, that's part of the problem.

Fun,
Stephan

PS: If your point is that DNSSEC will not solve the problem, I agree.
Re: [Barker, Elaine B.] NIST Publication Announcements
On Oct 1, 2009, at 16:46, Perry E. Metzger wrote:

> It is also completely impossible to prove you've deleted a record. Someone who can read the record can always make a copy of it. Cryptography can't fix the DRM problem.

Sorry, I should have clarified that. We don't want to verify that Bob has in fact deleted the patient record, we just want to verify whether Bob *claims* to have deleted the patient record *within the time span given*. If Alice later finds out that Bob has lied, she will have this signed claim, with which she can take him to court.

Best,
Stephan
Re: [Barker, Elaine B.] NIST Publication Announcements
On Sep 30, 2009, at 06:25, Peter Gutmann wrote:

> Stephan Neuhaus <neuh...@st.cs.uni-sb.de> writes:
>> Is there something that could be done that would *not* require a TTA? (I have almost given up on this, but it doesn't hurt to ask.)
>
> I think you've abstracted away too much information to provide a definite answer, but if all you want is a proof of something being done at time X that'll stand up in court then what's wrong with going to a notary? This has worked just fine for... centuries? without requiring the pile of Rube-Goldberg cryptoplumbing that people seem to want to attach to it.

In this case, it's because Alice and Bob are not people, but services in an SOA, dynamically negotiating a variation of an SLA. If that SLA specifies, for example, that patient records must be deleted within three days of checking the patient out of the hospital, then it will be somewhat impractical to go to a notary public every time they delete a patient's record. I completely agree with your sentiment that cryptoplumbing should not be used when there are other working solutions, but in this case, I think it will be unavoidable.

Fun,
Stephan
Re: [Barker, Elaine B.] NIST Publication Announcements
On Sep 26, 2009, at 18:31, Perry E. Metzger wrote:

> SP 800-102 is intended to address the timeliness of the digital signatures generated using the techniques specified in Federal Information Processing Standard (FIPS) 186-3. [...] SP 800-102 provides methods of obtaining assurance of the time of digital signature generation using a trusted timestamp authority that is trusted by both the signatory and the verifier.

In the project in which I am involved we have just this problem, but we also have the problem that we can't require the participating parties to use a TTA. I have been attacking this problem from several angles but have not come to a solution.

The setup is this: Alice advertises that she wants a job done. One of the constraints is that she wants it done by tomorrow, 10am. A number of Bobs apply for the job. Alice trusts none of the Bobs and the Bobs do not trust Alice. Alice doesn't even know the Bobs beforehand. Based on some criterion, Alice chooses a particular Bob. For business reasons, Alice can't force Bob to use a particular TTA, and it's also impossible to stipulate a particular TTA as part of the job description (the reason is that Alice and the Bobs (great band name, BTW) won't agree to trust any particular TTA and also don't want to operate their own).

Is there something that could be done that would *not* require a TTA? (I have almost given up on this, but it doesn't hurt to ask.)

Fun,
Stephan
Re: Source for Skype Trojan released
On Aug 31, 2009, at 13:20, Jerry Leichter wrote:

> It can "...intercept all audio data coming and going to the Skype process."

Interesting, but is this a novel idea? As far as I can see, the process intercepts the audio before it reaches Skype and after it has left Skype. Isn't that the same as calling a keylogger a "PGP Trojan"?

Stephan
Re: combining entropy
On Oct 24, 2008, at 14:29, John Denker wrote:

> On 09/29/2008 05:13 AM, IanG wrote: My assumptions are:
>
> * I trust no single source of Random Numbers.
> * I trust at least one source of all the sources.
> * no particular difficulty with lossy combination.
>
> If I have N pools of entropy (all same size X) and I pool them together with XOR, is that as good as it gets?
>
> Yes. The second assumption suffices to prove the result, since (random bit) XOR (anything) is random.

Ah, but for this to hold, you will also have to assume that the N pools are all independent. If they are not, you cannot even guarantee one single bit of entropy (whatever that is). For example, if N = 2, your trusted source is pool 1, and I can read pool 1 and control pool 2, I set pool 2 = pool 1, and all you get is zeros. And that surely does not contain X bits of entropy for any reasonable definition of entropy.

Fun,
Stephan
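The pool-mirroring attack above takes three lines to demonstrate (a toy sketch of mine; the pool size and variable names are made up):

```python
# XOR-combining N entropy pools is only sound if the trusted pool is
# random AND independent of the others.  A pool that mirrors the trusted
# pool cancels it exactly, leaving zero bits of entropy in the output.
import secrets

X = 16  # pool size in bytes

def combine(*pools: bytes) -> bytes:
    """XOR all pools together byte-wise."""
    out = bytearray(X)
    for pool in pools:
        for i, b in enumerate(pool):
            out[i] ^= b
    return bytes(out)

pool1 = secrets.token_bytes(X)  # the one source we trust
pool2 = pool1                   # attacker reads pool 1, sets pool 2 = pool 1
assert combine(pool1, pool2) == bytes(X)  # all zeros: no entropy left
```

With an independent second pool the combination is still fully random, which is exactly the second assumption doing its work.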
Re: Randomness testing Was: On the randomness of DNS
On Aug 3, 2008, at 13:54, Alexander Klimov wrote:

> If your p-value is smaller than the significance level (say, 1%) you should repeat the test with different data and see if the test persistently fails or it was just a fluke.

Or better still, make many tests and see if your p-values are uniformly distributed in (0,1). [Hint: decide on a p-value for that last equidistribution test *before* you compute that p-value.]

Best,
Stephan
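One way to sketch that meta-test (my own illustration, not from the post): collect many p-values and compare their empirical distribution to Uniform(0,1) with a one-sample Kolmogorov-Smirnov statistic, fixing the significance level before looking at the data.

```python
import math
import random

def ks_uniform_statistic(pvalues):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    p-values and the Uniform(0,1) CDF."""
    xs = sorted(pvalues)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # Empirical CDF jumps from i/n to (i+1)/n at x; track the max gap.
        d = max(d, x - i / n, (i + 1) / n - x)
    return d

random.seed(1)                                  # for reproducibility
pvals = [random.random() for _ in range(1000)]  # stand-in for real p-values
d = ks_uniform_statistic(pvals)
# Asymptotic 5% critical value for the KS test, chosen *before* looking
# at the data, as the hint in the post demands.
critical = 1.36 / math.sqrt(len(pvals))
print("uniform-looking" if d < critical else "suspicious")
```

A pile of p-values clumped near 0 (a persistently failing generator) drives the statistic toward 1 and fails this check immediately.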
German banks liable for phishing (really: keylogging) attacks
This article: http://www.spiegel.de/wirtschaft/0,1518,563606,00.html (sorry, German only) describes a judgment made by a German district court which says that banks are liable for damages due to phishing attacks. In the case in question, a customer was the victim of a keylogger even though he had the latest anti-virus software installed, and lost 4000 Euro. The court ruled that the bank was liable because the remittance in question had demonstrably not been made by the customer and therefore the bank had to take the risk.

Even though phishing and keylogging are not really related, this ruling is remarkable because courts had almost always ruled in favor of the banks in the past. So it could set an important precedent.

Fun,
Stephan
Re: The wisdom of the ill informed
On Jul 1, 2008, at 17:39, Perry E. Metzger wrote:

> Ed, there is a reason no one in the US, not even Wells Fargo which you falsely cited, does what you suggest. None of them use 4 digit PINs, none of them use customer account numbers as account names. (It is possible SOMEONE out there does this, but I'm not aware of it.)

Many German savings banks use account numbers as account names (see, e.g., https://bankingportal.stadtsparkasse-kaiserslautern.de/banking/ or http://www.stadtsparkasse-kaiserslautern.de), as does, for example, the Saarländische Landesbank (https://banking.saarlb.de/cgi/anfang.cgi). Most will not use 4-digit PINs, though.

> I understand some European banks even do stuff like mailing people cards with one time passwords.

Do you mean TANs (TransAction Numbers)? TANs are used to authorize transactions that could affect your account balance. So stealing the PIN will let you look at the balance, but will not let you steal money (through this channel). (Or maybe you knew all this already and I just missed the irony.)

Fun,
Stephan
Re: defending against evil in all layers of hardware and software
On Apr 28, 2008, at 23:56, Perry E. Metzger wrote:

> If you have a rotten apple engineer, he will be able to hide what he's trying to do and make it look completely legit. If he's really good, it may not be possible to catch what he's done EVEN IN PRINCIPLE.

Fred Cohen proved in 1984 in his "Computer Viruses, Theory and Experiments" [1] that "Program P is a virus" is undecidable. I assume that this result can be applied to hardware in the form that "Chip C contains malicious gates" is also undecidable. (Caveat: Cohen seems to make the fundamental assumption that there is no fundamental distinction between code and data, something that need not necessarily hold everywhere inside a computer chip.)

Fun,
Stephan

[1] See for example http://vx.netlux.org/lib/afc01.html
Re: crypto class design
On Dec 17, 2007, at 17:38, [EMAIL PROTECTED] wrote:

> So... supposing I was going to design a crypto library for use within a financial organization, which mostly deals with credit card numbers and bank accounts, and wanted to create an API for use by developers, does anyone have any advice on it?

The one thing that I think is most important is not to use the "bunch of functions" approach, but rather an integrated approach that directly supports the use cases and protects against misuse. I'd suggest skimming the OpenSSL design and Gutmann's "Design of a Cryptographic Security Architecture" for ideas. There you have examples of both approaches.

Fun,
Stephan
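To make the contrast concrete, here is a hypothetical sketch (every name is mine, and a real library would use a vetted AEAD such as AES-GCM rather than this hand-rolled encrypt-then-MAC): instead of exposing ciphers, modes, and IVs as a "bunch of functions", the API exposes one entry point per business use case and chooses the primitives internally.

```python
# "Bunch of functions" style forces every caller to pick algorithms,
# modes, and IVs -- each call site is a chance to get it wrong.
# Use-case style: one misuse-resistant operation per business need.
import hashlib
import hmac
import secrets

class CardVault:
    """Hypothetical use-case-oriented facade for protecting card numbers."""

    def __init__(self, key: bytes):
        self._key = key

    def protect_pan(self, pan: str) -> bytes:
        # Sketch only: fresh nonce, hash-derived keystream, then a MAC
        # over nonce + ciphertext (encrypt-then-MAC).
        nonce = secrets.token_bytes(16)
        ks = hashlib.sha256(self._key + nonce).digest()
        ct = bytes(a ^ b for a, b in zip(pan.encode(), ks))
        tag = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def recover_pan(self, blob: bytes) -> str:
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        expected = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failed")
        ks = hashlib.sha256(self._key + nonce).digest()
        return bytes(a ^ b for a, b in zip(ct, ks)).decode()
```

The developer calls `protect_pan`/`recover_pan` and never sees a mode or an IV, which is the kind of misuse-protection meant above.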
Re: The bank fraud blame game
Peter Gutmann wrote:

> Given that all you need for this is a glorified pocket calculator, you could (in large enough quantities) probably get it made for $10, provided you shot anyone who tried to introduce product-deployment DoS mechanisms like smart cards and EMV into the picture.

That seems exactly to be the problem. Germany's e-health card would be a prime candidate for technology that could boost the use of such pinpads, but unfortunately the card will contain a smart card. (The device has a host of other problems too, don't get me started.)

Fun,
Stephan
Re: Free Rootkit with Every New Intel Machine
Peter Gutmann wrote:

> -- Snip --

This is very scary. I bet that our Minister of the Interior would love it, though, since he has been pushing a scheme for stealth examination of suspects' computers (called "Federal Trojan"). Technology like this would be a large first step towards making this possible.

> [...]
> - Built in web interface on every machine (port 16994)

Apart from all the other things that are wrong with this scheme,

* you can't trust the output of netstat anymore;
* in other words, what you see with netstat may not be the same as what someone else sees with nmap; and
* if the web interface has a vulnerability, you have an unshutdownable vulnerable service running on your machine.

Fun,
Stephan
German CA TrustCenter insolvent
Original article at http://www.heise.de/security/news/meldung/64224

It seems that the German TC TrustCenter GmbH (formerly TC TrustCenter AG) is now insolvent. TrustCenter was accredited to issue qualified signatures, which is what you need in Germany if you want your digital signature to be as binding as your handwritten one. It is as yet unclear why TrustCenter ran out of money, but the fact that German banks sold their TrustCenter stocks to BeTrusted (now part of Cybertrust) in 2004 shows that the banks had lost their confidence in PKI.

An interesting question is of course what happens with TrustCenter's private keys. Are they being auctioned off to the highest bidder?

Fun,
Stephan
Re: Another entry in the internet security hall of shame....
Peter Gutmann wrote:

> Alaric Dailey <[EMAIL PROTECTED]> writes: In my opinion, PSK has the same problems as all symmetric encryption, its great if you can share the secret securely, but distribution to the masses makes it infeasible.
>
> Exactly, PSK's are infeasible, and all those thousands of web sites that have successfully employed them for a decade or more are all just figments of our imagination.

By extension, ATMs are also infeasible. I don't know about New Zealand, but in Germany, ATM PINs (and homebanking TAN lists) are sent in special envelopes that you can't see through, even when holding them against a light. That's exactly the sort of distribution method that would be needed for PSKs to have desirable security properties and to make them feasible, and that's exactly the distribution method that Joe's Used Condoms can't use because it's too expensive. Also, it would preclude doing business with someone you don't already know.

Also, phishing isn't done on all those thousands of web sites that have "successfully employed [passwords] for a decade or more"; it's just done on those where there's money to be had. Where it's done, it very often works. How is that a successfully employed security model?

> Sarcasm aside for a minute, several people have responded to the PSK thread with the standard "passwords don't work, whine moan complain" response that security people are expected to give whenever passwords are mentioned. It's all the user's fault, they should learn how to use PKI.

I think you're talking about me here, so I think I should clear some things up. First of all, I don't think that users should learn how to use PKI. I don't use PKI (much) because I think it's too bloody complicated, and I am certainly an educated user. I wouldn't dare foist PKI on uneducated users. (There is a great parody by Stenkelfeld, a German radio comedy show, about the difficult HBCI procedure then in use at Haspa, the largest German savings bank.
It's in German, but I can get you an MP3 if you want. And there isn't even that much "I" in HBCI's PKI.) But I'm no expert on PKI, so I asked a question instead, namely whether PKI wasn't going to make it for the web.

Second, I also didn't say that passwords didn't *work*, I said that they had *storage and management issues* that certificates did not have and that their deployment would be problematic because of that, and I stand by that. The reason for my opinion has nothing to do with any knee-jerk standard reaction in relation to passwords, except perhaps for the problem of transferring them securely; see above. (I think the problem is real under many threat models; you may disagree.) Rather, it is my impression that a switch to TLS-PSK would not just be a client-side thing, but that server code would have to be changed also, and that it is this issue which will prevent widespread deployment of TLS-PSK. This has nothing to do with what users want or can do, and it has nothing to do with the technical feasibility of passwords.

> The failing is in the security community.

We completely agree. We have failed to produce practical and secure solutions. To repeat, I especially agree that PKI is a solution in search of a problem, and that it's not practical for web commerce. I also agree that password authentication is not inherently poor, and if we could turn the clock back ten years, that's what we should do. I also agree that password-based authentication was trivial to implement---ten years ago! Today it's not going to be anywhere near trivial.

> Here's my proposal for an unmistakable TLS-PSK based authentication mechanism for a browser: [...]

If I were a phisher, I'd set up a web site having normal text boxes for username and password. On it, I'd put a link "Why isn't the URL bar blue?" and use some technical mumbo-jumbo about how for technical reasons, the feature needed to be disabled in the browser, but that the passwords were of course secure (there was a posting on this list to the effect that a bank actually did this or something very similar). Or maybe that this particular browser isn't supported with TLS-PSK (DiBa doesn't support anything but IE, for example, and logins will mysteriously fail if attempted with any other browser). I bet that'd work, no matter how unspoofable the TLS-PSK password entry were.

> It doesn't solve *all* phishing problems, but it's a darn sight better than the mess we're in now.

OK, I'm willing to concede that I probably don't understand many of the issues, technical or otherwise, and that I don't have a solution to offer myself, so I'll shut my trap (except if directly challenged, or in private email) until someone has made a decent try to get browser makers to support both TLS-PSK and to include unspoofable password entry methods. Then we'll see how merchants react to this and what the ultimate consequences are.

Fun,
Stephan
Re: Another entry in the internet security hall of shame....
James A. Donald wrote:

> But does not, in fact, prevent.

Let me rephrase that. Are we now at a point where we must admit that PKI isn't going to happen for the Web and that we therefore must face the rewriting of an unknown (but presumably large) number of lines of code to accommodate PSKs? If that's so, I believe that PSKs will have deployment problems as large as PKI's that will prevent their widespread acceptance.

That's because PSKs (as I have understood them) have storage and management issues that CA certificates don't have, four of which are:

* there will be a lot more PSKs than CA certificates;
* you can't preinstall them in browsers;
* the issue of how to exchange PSKs securely in the first place is left as an exercise for the reader (good luck!); and
* there is a revocation problem.

To resolve any of those issues, code will need to be written, both on the client side and on the server side (except for the secure exchange of PSKs, which is IMHO unresolvable without changes to the business workflow). The client-side code is manageable, because the code will be used by many people so that it may be worthwhile to spend the effort. But the server side? There are many more server applications than there are different Web browsers, and each one would have to be changed. At the very least, they'd need an administrative interface to enter and delete PSKs. That means that supporting PSKs is going to cost businesses money (both to change their code and to change their workflow), money that they'd rather not spend on something that they probably perceive as the customer's (i.e., not their) problem, namely phishing.

Some German banks put warnings on their web pages that they'll never ask you for private information such as passwords. SaarLB (http://www.saarlb.de) even urges you to check the certificate fingerprint and provides well-written instructions on how to do that. In return, they'll assume no responsibility if someone phishes your PIN and TANs.
They might, out of goodwill, reimburse you. Then again, they might not. I believe that SaarLB could win in court. So where is the incentive for SaarLB to spend the money for PSK support?

Fun,
Stephan
Re: Another entry in the internet security hall of shame....
Peter Gutmann wrote:

> And that's its killer feature: Although you can still be duped into handing out your password to a fake site, you simply cannot connect securely without prior mutual authentication of client and server if TLS-PSK is used.

If I have understood the draft correctly, using PSKs means that the server and the client have a shared secret that they must communicate securely beforehand, and that they use some form of ZKP to assure the other party that they know that secret without revealing it. If that's indeed so, wouldn't this have key management and storage issues that PK was designed to prevent in the first place?

Also, the prior secure exchange of secrets would seem to preclude communication between entities that don't know each other. That, however, is how many businesses (including eBay, in whose name much phishing spam is generated) operate.

Additionally, I don't think that this is just a UI issue; after all, both the client and the server must somehow manage the PSKs. There are probably expiration and revocation problems: what if my computer gets stolen and I can't get at my PSK? Does this mean that I can't do business with my bank anymore? What if I suspect that someone has stolen my PSK (for example with the same javascript attack that phished my password)? And so on and so on.

I'm not saying that the idea is bad, far from it; I'm just saying that there are probably many practical problems to be solved before this can be widely deployed. Or perhaps I haven't understood the draft correctly.

> What'd be necessary in conjunction with this is two small changes to the browser UI:

... and the PSK management code in the server and in the client.
Fun,
Stephan
Re: AES cache timing attack
Peter Gutmann wrote:

> Stephan Neuhaus <[EMAIL PROTECTED]> writes:
>> Concerning the practical use of AES, you may be right (even though it would be nice to have some advice on what one *should* do instead).
>
> Definitely. Maybe time for a BCP, not just for AES but for general block ciphers?

I think so.

>> I find it pretty alarming that in spite of all the review that AES got, [resistance to timing attacks] was not met, and in an exploitable fashion to boot.
>
> Well, it depends on what your design assumptions were. [...] In fact I'd say it's actually not possible to certify resistance to timing attacks across all possible CPUs, because it'll always be possible to find some oddball CPU for which an AES-critical instruction somewhere has some weird characteristic that helps in an attack.

True, but what we have here is not some oddball CPU, but the fact that a natural AES implementation on one of the most popular CPUs in existence today has this problem. It's a problem because the algorithm (and by extension, any natural implementation of it) isn't supposed to be vulnerable to a timing attack.

> Let's say you want constant timing for at least the most common CPU family, x86. [...] So in the end you've got an algorithm design that happens to be resistant to timing attacks on the D0 stepping of a Northwood-core Intel P4. Anything else and all bets are off. This doesn't seem very useful to me.

I don't know. That cache accesses are faster than memory accesses is not exactly new. I agree totally that we shouldn't insist on constant-time implementations across all possible architectures. This way madness lies. But the fact that it is apparently difficult to produce a fast constant-time implementation on the P4 is definitely a warning sign, especially when resistance to timing attacks was an explicit design criterion.

How can we get fast constant-time implementations? (Or even just an implementation that is resistant to timing attacks, which isn't necessarily the same thing?) I don't know.
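One classic answer, sketched here as my own illustration rather than anything from the thread: make the memory-access pattern independent of the secret index by scanning the whole table and selecting the wanted entry with arithmetic masks. Python can only illustrate the idea; real constant-time code has to be written in C or assembly and audited on the target CPU.

```python
def ct_lookup(table, index):
    """Select table[index] of byte values without a secret-dependent
    branch or a secret-dependent memory access: every entry is touched."""
    result = 0
    for i, v in enumerate(table):
        # diff - 1 is negative only when i == index, so the arithmetic
        # right shift yields an all-ones mask exactly for that entry.
        diff = i ^ index
        mask = ((diff - 1) >> 31) & 0xFF
        result |= v & mask
    return result

sbox = list(range(256))  # stand-in for a real 256-entry S-box
assert all(ct_lookup(sbox, i) == sbox[i] for i in range(256))
```

The price is exactly the speed problem discussed above: 256 reads per S-box lookup instead of one, which is why fast and constant-time are hard to get at the same time.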
>> But what you can't do is solicit a cipher that is supposed to be free of timing attacks and then, when one is found, say, "well, don't do that then" :-) I think this says more about the standardization and review process than about AES.
>
> I think the standardisation process went about as well as can be expected, given Newtonian physics-level assumptions about how CPUs work.

Again, I don't know. That cache accesses are faster than memory accesses is very much inside the limits of Newtonian physics-level assumptions. If the standardizers had had a testable, implementable phrasing of their design requirements, this embarrassing mistake could have been avoided. Granted, I don't see at the moment how you could phrase this so that the word "cache" does not already appear somewhere, but I feel that this should have been possible. It's just good engineering practice.

IIRC, the timing resistance was accepted on a theoretical argument (that table accesses take constant time); nobody actually tried it out before accepting it. If they had, they would have seen that the implementation was not constant-time. I think this is bad and I still think that the fault lies with the standardization process.

Fun,
Stephan