Re: [Cryptography] funding Tor development
On 14/10/2013 14:36, Eugen Leitl wrote: Guys, in order to minimize the Tor Project's dependence on federal funding and/or increase what they can do, it would be great to have some additional funding, ~10 kUSD/month. I would say what is needed is not one source at $10K/month but 10K sources at $1/month. A single source of funding is *always* a single source of control. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)
On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote: The actual technical question is whether an across-the-board 128-bit security level is sufficient for a hash function with a 256-bit output. This weakens the proposed SHA3-256 relative to SHA256 in preimage resistance, where SHA256 is expected to provide 256 bits of preimage resistance. If you think that 256-bit hash functions (which are normally used to achieve a 128-bit security level) should guarantee 256 bits of preimage resistance, then you should oppose the plan to reduce the capacity to 256 bits. I think hash functions clearly should try to offer full (256-bit) preimage security, not dumb it down to match 128-bit birthday collision resistance. All other common hash functions have tried to provide full preimage security, so varying an otherwise standard assumption will lead to design confusion. It will probably have bad interactions with many existing KDF, MAC, Merkle-tree, combined cipher+integrity, and hashcash designs (partial preimage as used in bitcoin as a proof of work) that use a hash generically as a building block and assume the hash has full-length preimage protection. Maybe some of those generic designs survive because they compose multiple iterations, e.g. HMAC, but why create the work and risk of having to analyse them all, remove them from implementations, or mark them as safe for all hashes except SHA3 as an exception? If MD5 had 64-bit preimage resistance, we'd be looking at preimages right now being expensive but computable. Bitcoin is pushing a 60-bit hashcash-sha256 partial preimage every 10 minutes (1.7 petahash/sec network hashrate). Now obviously 128 bits is another scale, but MD5 is old and broken, and there may be partial weakenings along the way, e.g. say a design aim of 128 slips towards 80 (in another couple of decades of computing progress).
Why design in a problem for the future when we KNOW, and just spent a huge thread on this list discussing, that it's very hard to remove or upgrade algorithms once deployed? Even MD5 is still in the field. Is there a clear work-around proposed for when you do need 256? (Some composition mode or parameter tweak as part of the spec?) And generally, where does one go to add one's vote to the protest against weakening the 2nd-preimage property? Adam
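Adam's hashcash/bitcoin reference is easy to make concrete: a proof of work is a partial preimage search, i.e. finding an input whose hash begins with n zero bits. A minimal sketch of the idea (hypothetical helper names, SHA-256 via Python's hashlib):

```python
import hashlib

def leading_zero_bits(data: bytes) -> int:
    """Number of leading zero bits in SHA-256(data)."""
    bits = 0
    for byte in hashlib.sha256(data).digest():
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # leading zeros of the first nonzero byte
            break
    return bits

def mine(challenge: bytes, difficulty: int) -> int:
    """Search nonces until SHA-256(challenge || nonce) has `difficulty`
    leading zero bits; expected work is roughly 2**difficulty hashes."""
    nonce = 0
    while leading_zero_bits(challenge + str(nonce).encode()) < difficulty:
        nonce += 1
    return nonce
```

At the ~60-bit difficulty cited in the post, this loop represents about 2^60 hash evaluations per solution; a full 256-bit preimage would be 2^256, which is the margin the generic designs Adam mentions implicitly rely on.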
Re: [Cryptography] prism-proof email in the degenerate case
* John Denker j...@av8n.com [2013-10-10 17:13 -0700]: *) Each server should publish a public key for /dev/null so that users can send cover traffic upstream to the server, without worrying that it might waste downstream bandwidth. This is crucial for deniability: If the rubber-hose guy accuses me of replying to ABC during the XYZ crisis, I can just shrug and say it was cover traffic. If the server deletes cover traffic, the NSA just needs to subscribe to the list. Then any messages which you sent but which were not delivered via the list are, by elimination, cover traffic. Nicolas -- http://www.rachinsky.de/nicolas
[Cryptography] funding Tor development
Guys, in order to minimize the Tor Project's dependence on federal funding and/or increase what they can do, it would be great to have some additional funding, ~10 kUSD/month. If anyone is aware of anyone who can provide funding at that level or higher, please contact exec...@torproject.org
Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)
Adam, I guess I should preface this by saying I am speaking only for myself. That's always true here--it's why I'm using my personal email address. But in particular, right now, I'm not *allowed* to work. But just speaking my own personal take on things: We got pretty *overwhelming* feedback in this direction in the last three weeks. (For the previous several months, we got almost no feedback about it at all, despite giving presentations and posting stuff on the hash forum about our plans.) But since we're shut down right now, we can't actually make any decisions or changes. This is really frustrating on all kinds of levels. Personally, I have looked at the technical arguments against the change and I don't really find any of them very convincing, for reasons I described at some length on the hash forum list, and that the Keccak designers also laid out in their post. The core of that is that an attacker who can't do 2^{128} work can't do anything at all to SHA3 with a 256-bit capacity that he couldn't also do to SHA3 with a 512-bit capacity, including finding preimages. But there's pretty much zero chance that we're going to put out a standard that most of the crypto community is uncomfortable with. The normal process for a FIPS is that we would put out a draft and get 60 or 90 days of public comments. As long as this issue is on the table, it's pretty obvious what the public comments would all be about. The place to go for current comments, if you think more are necessary, is the hash forum list. The mailing list is still working, but I think both the archives and the process of being added to the list are frozen thanks to the shutdown. I haven't looked at the hash forum since we shut down, so when we get back there will be a flood of comments there. The last I saw, the Keccak designers had their own proposal for changing what we put into the FIPS, but I don't know what people think about their proposal.
--John, definitely speaking only for myself
Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.
On Oct 13, 2013, at 1:04 PM, Ray Dillinger wrote: This is despite meeting (for some inscrutable definition of meeting) FIPS 140-2 Level 2 and Common Criteria standards. These standards require steps that were clearly not done here. Yet, validation certificates were issued. This is a misunderstanding of the CC certification and FIPS validation processes: the certificates were issued *under the condition* that the software/system built on it uses/implements the RNG tests mandated. The software didn't, invalidating the results of the certifications. Either way, it boils down to this: tests were supposed to be done or conditions were supposed to be met, and producing the darn cards with those certifications asserted amounts to stating outright that they were, and yet they were not. All you're saying here is that the certifying agencies are not the ones stating outright that the tests were done. How could they? The certification has to stop at some point; it can't trace the systems all the way to end users. What was certified was a box that would work a certain way given certain conditions. The box was used in a different way. Why is it surprising that the certification was useless? Let's consider a simple encryption box: Key goes in top, cleartext goes in left, ciphertext comes out right. There's an implicit assumption that you don't simply discard the ciphertext and send the plaintext on to the next subsystem in line. No certification can possibly check that; or that, say, you don't post all your keys on your website immediately after generating them. I can accept that, but it does not change the situation or result, except perhaps in terms of the placement of blame. I *still* hope they bill the people responsible for doing the tests on the first generation of cards for the cost of their replacement. That depends on what they were supposed to test, and whether they did test that correctly.
A FIPS/Common Criteria certification is handed a box implementing the protocol and a whole bunch of paperwork describing how it's designed, how it works internally, and how it's intended to be used. If it passes, what passes is the exact design certified, used as described. There are way too many possible systems built out of certified modules for it to be reasonable to expect the certification to encompass them all. I will remark that, having been involved in one certification effort, I think they offer little, especially for software - they do get at some reasonable issues for hardware designs. Still, we don't currently have much of anything better. Hundreds of eyeballs may have been on the Linux code, but we still ended up fielding a system with a completely crippled RNG and not noticing for months. Still, if you expect the impossible from a process, you make any improvement impossible. Formal verification, where possible, can be very powerful - but it too has to focus on some well-defined subsystem, and all the effort will be wasted if the subsystem is used in a way that doesn't meet the necessary constraints. -- Jerry
Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)
On 14/10/13 17:51, Adam Back wrote: On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote: The actual technical question is whether an across-the-board 128-bit security level is sufficient for a hash function with a 256-bit output. This weakens the proposed SHA3-256 relative to SHA256 in preimage resistance, where SHA256 is expected to provide 256 bits of preimage resistance. If you think that 256-bit hash functions (which are normally used to achieve a 128-bit security level) should guarantee 256 bits of preimage resistance, then you should oppose the plan to reduce the capacity to 256 bits. I think hash functions clearly should try to offer full (256-bit) preimage security, not dumb it down to match 128-bit birthday collision resistance. All other common hash functions have tried to provide full preimage security, so varying an otherwise standard assumption will lead to design confusion. It will probably have bad interactions with many existing KDF, MAC, Merkle-tree, combined cipher+integrity, and hashcash designs (partial preimage as used in bitcoin as a proof of work) that use a hash generically as a building block and assume the hash has full-length preimage protection. Maybe some of those generic designs survive because they compose multiple iterations, e.g. HMAC, but why create the work and risk of having to analyse them all, remove them from implementations, or mark them as safe for all hashes except SHA3 as an exception? I tend to look at it differently. There are ephemeral uses and there are long-term uses. For ephemeral uses (like HMACs) 128-bit protection is fine. For long-term uses, one should not sign (hash) what the other side presents (put in a nonce), and one should always keep what is signed around (or otherwise neuter a hash failure). Etc. Either way, one wants a bit longer protection for the long-term hash. That 'time' axis is how I look at it. Simplistic or simple?
Alternatively, there is the hash cryptographer's outlook, which tends to differentiate collisions, preimages, 2nd preimages and lookbacks. From my perspective the simpler statement of SHA3-256 having 128-bit protection across the board is interesting; perhaps it is OK? If MD5 had 64-bit preimage resistance, we'd be looking at preimages right now being expensive but computable. Bitcoin is pushing a 60-bit hashcash-sha256 partial preimage every 10 minutes (1.7 petahash/sec network hashrate). I might be able to differentiate the preimage / collision / 2nd-preimage stuff here if I thought about it for a long time ... but even if I could, I would have no confidence that I'd got it right. Or, more importantly, that my design gets it right in the future. And as we're dealing with money, I'd *want to get it right*. I'd actually be somewhat happier if the hash had a clear number of 128. Now obviously 128 bits is another scale, but MD5 is old and broken, and there may be partial weakenings along the way, e.g. say a design aim of 128 slips towards 80 (in another couple of decades of computing progress). Why design in a problem for the future when we KNOW, and just spent a huge thread on this list discussing, that it's very hard to remove or upgrade algorithms once deployed? Even MD5 is still in the field. Um. Seems like this argument only works if people drop in SHA3 without being aware of the subtle switch in preimage protection, *and* they designed for it earlier on. For my money, let 'em hang. Is there a clear work-around proposed for when you do need 256? (Some composition mode or parameter tweak as part of the spec?) Use SHA3-512 or SHA3-384? What is the preimage protection of SHA3-512 when truncated to 256 bits? It seems that SHA3-384 still gets 256. And generally, where does one go to add one's vote to the protest against weakening the 2nd-preimage property? For now, refer to the Congress of the USA; it's in Washington DC. Hopefully, it'll be closed soon too...
iang
[Cryptography] /dev/random is not robust
http://eprint.iacr.org/2013/338.pdf
Re: [Cryptography] /dev/random is not robust
On Tue, Oct 15, 2013 at 12:35:13AM -, d...@deadhat.com wrote: http://eprint.iacr.org/2013/338.pdf *LINUX* /dev/random is not robust, so claims the paper. I wonder how the various *BSDs or the Solarish family (Illumos, Oracle Solaris) hold up under similar scrutiny? Linux is big, but it is not everything. Dan
Re: [Cryptography] /dev/random is not robust
http://eprint.iacr.org/2013/338.pdf I'll be the first to admit that I don't understand this paper. I'm just an engineer, not a mathematician. But it looks to me like the authors are academics who create an imaginary construction method for a random number generator, then prove that /dev/random is not the same as their method, then suggest that /dev/random be revised to use their method, and then show how much faster their method is. All in all it seems to be a pitch for their method, not a serious critique of /dev/random. They labeled one of their construction methods "robustness", but it doesn't mean what you think the word means. It's defined by a mess of Greek letters like this: Theorem 2. Let n > m, ℓ, γ* be integers. Assume that G : {0,1}^m → {0,1}^{n+ℓ} is a deterministic (t, ε_prg)-pseudorandom generator. Let G = (setup, refresh, next) be defined as above. Then G is a ((t', q_D, q_R, q_S), γ*, ε)-robust PRNG with input where t' ≈ t, ε = q_R (2 ε_prg + q_D ε_ext + 2^{-n+1}), as long as γ* ≥ m + 2 log(1/ε_ext) + 1 and n ≥ m + 2 log(1/ε_ext) + log(q_D) + 1. Yeah, what he said! Nowhere do they seem to show that /dev/random is actually insecure. What they seem to show is that it does not meet the robustness criterion that they arbitrarily picked for their own construction. Their key test is on pages 23-24, and begins with "After a state compromise, A (the adversary) knows all parameters." The comparison STARTS with the idea that the enemy has figured out all of the hidden internal state of /dev/random. Then the weakness they point out seems to be that in some cases of new, incoming randomness with mis-estimated entropy, /dev/random doesn't necessarily recover over time from having had its entire internal state somehow compromised. This is not very close to what "/dev/random is not robust" means in English. Nor is it close to what others might assume the paper claims, e.g. "/dev/random is not safe to use".
John PS: After attending a few crypto conferences, I realized that academic pressures tend to encourage people to write incomprehensible papers, apparently because if nobody reading their paper can understand it, then they look like geniuses. But when presenting at a conference, if nobody in the crowd can understand their slides, then they look like idiots. So the key to understanding somebody's incomprehensible paper is to read their slides and watch their talk, 80% of which is often explanations of the background needed to understand the gibberish notations they invented in the paper. I haven't seen either the slides or the talk relating to this paper.
Re: [Cryptography] /dev/random is not robust
On 2013-10-15 10:35, d...@deadhat.com wrote: http://eprint.iacr.org/2013/338.pdf No kidding.
Re: [Cryptography] Key stretching
On 10/11/2013 11:22 AM, Jerry Leichter wrote: 1. Brute force. No public key-stretching algorithm can help, since the attacker will brute-force the k's, computing the corresponding K's as he goes. There is a completely impractical solution for this which is applicable in a very few ridiculously constrained situations. Brute force can be countered, in very limited circumstances, by brute bandwidth. You have to use random salt sufficient to ensure that all possible decryptions of messages transmitted using the insufficient key or insecure cipher are equally valid. Unfortunately, this requirement is cumulative for *ALL* messages that you encrypt using the key, and becomes flatly impossible if the total amount of ciphertext you're trying to protect with that key is greater than a very few bits. So, if you have a codebook that allows you to transmit one of 128 pre-selected messages (7 bits each) you could use a very short key or an insecure cipher about five times, attaching (2^35)/5 bits of salt to each message, to achieve security against brute-force attacks. At that point your opponent sees all possible decryptions as equally likely, with at least one possible key giving each of the possible total combinations of decryptions (approximately; about 1/(2^k) of the total number of possible decryptions will be left out, where k is the size of your actual too-short key). The bandwidth required is utterly ridiculous, but you can get security on a few very short messages, assuming there's no identifiable pattern in your salt. Unfortunately, you cannot use this to leverage secure transmission of keys: whatever longer key you transmit using this scheme, once your opponent has ciphertext transmitted under that longer key, the brute-force attack on the possibilities for your initial short key becomes applicable to that ciphertext.
Bear
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this: a. Generate random a and send aG on curve P256 b. Generate random b and send bG on curve P256 c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. ... Thoughts? We should get Stev Knowles to explain the skeeter and bubba TCP options. From private conversations I understand that the options were doing pretty much what you describe: use Diffie-Hellman in the TCP exchange to negotiate an encryption key for the TCP session. That would actually be a very neat thing. I don't believe using TCP options would be practical today; too many firewalls would filter them. But the same results could be achieved with a zero-knowledge version of TLS. That would make sessions encrypted by default. Of course, any zero-knowledge protocol can be vulnerable to man-in-the-middle attacks. But applications can protect against that with an end-to-end exchange. For example, if there is a shared secret, even a lowly password, the application protocol can embed verification of the zero-knowledge session key in the password verification, by combining the session key with either the challenge or the response in a basic challenge-response protocol. That would be pretty neat: zero-knowledge TLS, then use the password exchange to mutually authenticate server and client while protecting against MITM. Pretty much any site could deploy that. -- Christian Huitema
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 2013-10-11 15:48, ianG wrote: Right now we've got a TCP startup, and a TLS startup. It's pretty messy. Adding another startup inside isn't likely to gain popularity. The problem is that layering creates round trips, and as CPUs get ever faster, and pipes ever fatter, round trips become a bigger and bigger problem. Legend has it that each additional round trip decreases usage of your web site by twenty percent, though I am unaware of any evidence on this. (Which was one thing that suggested a redesign of TLS -- to integrate back into the IP layer and replace/augment TCP directly. Back in those days we -- they -- didn't know enough to do an integrated security protocol. But these days we do, I'd suggest, or we know enough to give it a try.) TCP provides eight bits of protocol negotiation, which results in multiple layers of protocol negotiation on top. Ideally, we should extend the protocol negotiation and do crypto negotiation at the same time. But I would like to see some research on how evil round trips really are. I notice that bank web pages take an unholy long time to come up, probably because one secure web page loads another, and that then loads a script, etc.
Re: [Cryptography] SSH small RSA public exponent
Tim Hudson t...@cryptsoft.com writes: Does anyone recollect the history behind and the implications of the (open)SSH choice of 35 as a hard-wired public exponent? /* OpenSSH versions up to 5.4 (released in 2010) hardcoded e = 35, which is both a suboptimal exponent (it's less efficient than a safer value like 257 or F4) and non-prime. The reason for this was that the original SSH used an e relatively prime to (p-1)(q-1), choosing odd (in both senses of the word) numbers > 31. 33 or 35 probably ended up being chosen frequently, so it was hardcoded into OpenSSH for cargo-cult reasons, finally being fixed after more than a decade to use F4. In order to use pre-5.4 OpenSSH keys that use this odd value we make a special-case exception for SSH use. */ Peter.
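The constraint the comment refers to, that e must be relatively prime to (p-1)(q-1) for a private exponent to exist, is easy to check. A toy illustration (small primes for demonstration only, not a key-generation routine): since 35 = 5 x 7, it fails whenever p-1 or q-1 is divisible by 5 or 7, while the prime F4 = 65537 only fails in the rare case p or q ≡ 1 (mod 65537).

```python
import math

def usable_exponent(e: int, p: int, q: int) -> bool:
    # d = e^-1 mod (p-1)(q-1) exists iff gcd(e, (p-1)(q-1)) == 1
    return math.gcd(e, (p - 1) * (q - 1)) == 1

# p = 11 gives p - 1 = 10, divisible by 5, so e = 35 = 5*7 is unusable:
print(usable_exponent(35, 11, 23))     # False
print(usable_exponent(65537, 11, 23))  # True
```

This is why hardcoding a composite like 35 is fragile: a key generator either has to reject otherwise fine primes or silently pick a different e.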
Re: [Cryptography] Key stretching
On 10/11/13 7:34 PM, Peter Gutmann wrote: Phillip Hallam-Baker hal...@gmail.com writes: Quick question, anyone got a good scheme for key stretching? http://lmgtfy.com/?q=hkdf&l=1 Yeah, that's a weaker simplification of the method I've always advocated: stopping the hash function before the final MD-strengthening and repeating the input, only doing the MD-strengthening for the last step for each key. I used this in many of my specifications. In essence, the MD-strengthening counter is the same as the 0xnn counter they used, although longer and stronger. This ensures there are no related-key attacks, as the internal chaining variables aren't exposed.
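For reference, the HKDF being pointed at is small enough to sketch in full. This follows RFC 5869's extract-and-expand construction over HMAC-SHA-256 (any mainstream crypto library ships a vetted version; this sketch just makes visible the one-byte block counter that the post compares to an MD-strengthening counter):

```python
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Concentrate the input keying material into a fixed-length pseudorandom key.
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Stretch the PRK: each block is HMAC of the previous block, the context
    # info, and a 1-byte counter starting at 1.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

Because each output block chains through HMAC with the counter inside, the internal chaining variables are never exposed directly, which is the related-key-attack point made above.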
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10 October 2013 17:06, John Kelsey crypto@gmail.com wrote: Just thinking out loud: The administrative complexity of a cryptosystem is overwhelmingly in key management and identity management and all the rest of that stuff. So imagine that we have a widely-used inner-level protocol that can use strong crypto, but also requires no external key management. The purpose of the inner protocol is to provide a fallback layer of security, so that even an attack on the outer protocol (which is allowed to use more complicated key management) is unlikely to be able to cause an actual security problem. On the other hand, in case of a problem with the inner protocol, the outer protocol should also provide protection against everything. Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this: a. Generate random a and send aG on curve P256 b. Generate random b and send bG on curve P256 c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. The point is, this is a protocol that happens *inside* the main security protocol. This happens inside TLS or whatever. An attack on TLS then leads to an attack on the whole application only if the TLS attack also lets you do man-in-the-middle attacks on the inner protocol, or if it exploits something about certificate/identity management done in the higher-level protocol. (Ideally, within the inner protocol, you do some checking of the identity using a password or shared secret or something, but that's application-level stuff the inner and outer protocols don't know about.) Thoughts?
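Steps a-d can be sketched with the pyca/cryptography package. This is a hedged illustration, not the proposal itself: SHAKE256 from hashlib stands in for the SHAKE512 named above, the 16-byte-per-direction key split and 13-byte CCM nonce layout are my own assumptions, and `InnerChannel` is a hypothetical name.

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

class InnerChannel:
    def __init__(self):
        # a./b. ephemeral scalar; the public value sent is aG on P-256
        self._priv = ec.generate_private_key(ec.SECP256R1())
        self._send_seq = 0
        self._recv_seq = 0

    def public_point(self) -> bytes:
        return self._priv.public_key().public_bytes(
            serialization.Encoding.X962,
            serialization.PublicFormat.UncompressedPoint)

    def derive(self, peer_point: bytes, initiator: bool) -> None:
        # c. shared secret abG, expanded into one AES-128 key per direction
        peer = ec.EllipticCurvePublicKey.from_encoded_point(
            ec.SECP256R1(), peer_point)
        okm = hashlib.shake_256(self._priv.exchange(ec.ECDH(), peer)).digest(32)
        k_i, k_r = okm[:16], okm[16:]
        self._send = AESCCM(k_i if initiator else k_r)
        self._recv = AESCCM(k_r if initiator else k_i)

    def seal(self, msg: bytes) -> bytes:
        # d. the per-direction sequence number doubles as the CCM nonce
        nonce = self._send_seq.to_bytes(13, "big")
        self._send_seq += 1
        return self._send.encrypt(nonce, msg, None)

    def open(self, ct: bytes) -> bytes:
        nonce = self._recv_seq.to_bytes(13, "big")
        self._recv_seq += 1
        return self._recv.decrypt(nonce, ct, None)
```

Usage: each side constructs an `InnerChannel`, exchanges `public_point()` values, calls `derive()` with its role, and then `open()` of one side inverts `seal()` of the other; any tampering raises an authentication failure, matching the MAC-failure-only error behaviour discussed later in the thread.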
AIUI, you're trying to make it so that only active attacks work on the combined protocol, whereas passive attacks might work on the outer protocol. In order to achieve this, you assume that your proposed inner protocol is not vulnerable to passive attacks (I assume the outer protocol also thinks this is true). Why should we believe the inner protocol is any better than the outer one in this respect? Particularly since you're using tainted algorithms ;-).
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 11, 2013, at 11:09 PM, James A. Donald wrote: Right now we've got a TCP startup, and a TLS startup. It's pretty messy. Adding another startup inside isn't likely to gain popularity. The problem is that layering creates round trips, and as CPUs get ever faster, and pipes ever fatter, round trips become a bigger and bigger problem. Legend has it that each additional round trip decreases usage of your web site by twenty percent, though I am unaware of any evidence on this. The research is on time delays, which you could easily enough convert to round trips. The numbers are nowhere near 20%, but are significant if you have many users: http://googleresearch.blogspot.com/2009/06/speed-matters.html -- Jerry
Re: [Cryptography] PGP Key Signing parties
If someone wants to try organise a pgp key signing party at the Vancouver IETF next month let me know and I can organise a room/time. That's tended not to happen since Ted and Jeff don't come along but we could re-start 'em if there's interest. S.
[Cryptography] Plug for crypto.stackexchange.com
I've noticed quite a few questions on this list recently of the form "How do I do X?" or "What is the right cryptographic primitive for goal X?". I'd like to plug the following site: http://crypto.stackexchange.com/ Cryptography Stack Exchange It is an excellent place to post questions like that and get helpful answers. I encourage folks to give it a try, if they have questions like the ones I listed above. By posting there, you will not only get good answers, but those answers will also be documented in a form that's well-suited for others with the same problem to find and benefit from. I'm not trying to drive people away from this mailing list, just pointing out an additional resource that may be helpful. Or, if you're feeling helpful and community-minded, you can subscribe and help answer other people's questions there. (That site is like Stack Overflow, for those familiar with Stack Overflow, except that it is focused on cryptography. There is also a site on information security: http://security.stackexchange.com/ )
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 12, 2013, at 6:51 AM, Ben Laurie b...@links.org wrote: ... AIUI, you're trying to make it so that only active attacks work on the combined protocol, whereas passive attacks might work on the outer protocol. In order to achieve this, you assume that your proposed inner protocol is not vulnerable to passive attacks (I assume the outer protocol also thinks this is true). Why should we believe the inner protocol is any better than the outer one in this respect? The point is, we don't know how to make protocols that really are reliably secure against future attacks. If we did, we'd just do that. My hope is that if we layer two of our best attempts at secure protocols on top of one another, then we will get security because the attacks will be hard to get through the composed protocols. So maybe my protocol (or whatever inner protocol ends up being selected) isn't secure against everything, but as long as its weaknesses are covered up by the outer protocol, we still get a secure final result. One requirement for this is that the inner protocol must not introduce new weaknesses. I think that means it must not: a. Leak information about its plaintexts in its timing, error messages, or ciphertext sizes. b. Introduce ambiguities about how the plaintext is to be decrypted that could mess up the outer protocol's authentication. I think we can accomplish (a) by not compressing the plaintext before processing it, by using crypto primitives that don't leak plaintext data in their timing, and by having the only error messages that can ever be generated from the inner protocol be essentially a MAC failure or an out-of-sequence error. I think (b) is pretty easy to accomplish with standard crypto, but maybe I'm missing something. ... Particularly since you're using tainted algorithms ;-). If using AES or P256 is the weak point in the protocol, that is a big win. Right now, we aren't getting anywhere close to that.
And there's no reason either AES or P256 has to be used--I'm just looking for a simple, lightweight way to get as much security as possible inside some other protocol. --John
Re: [Cryptography] PGP Key Signing parties
I am one of the organizers of Security BSides Delaware, otherwise known as BSidesDE. We have already discussed having a key signing party, but if there is any interest, I'd love for any of you to be there, and potentially run it. Check out bsidesdelaware.com for dates, locations, and such. It's an academic environment, and we will have several hundred people there, from college students, to business, to infosec professionals. And we're only a couple of hours from the NSA!! ;) Nov 8 and 9th, Wilmington, DE. Any interest? Joshua Marpet On Sat, Oct 12, 2013 at 8:00 AM, Stephen Farrell stephen.farr...@cs.tcd.ie wrote: If someone wants to try organise a pgp key signing party at the Vancouver IETF next month let me know and I can organise a room/time. That's tended not to happen since Ted and Jeff don't come along but we could re-start 'em if there's interest. S. -- Joshua A. Marpet, Managing Principal, GuardedRisk, 1-855-23G-RISK (855-234-7475), Cell: (908) 916-7764, joshua.mar...@guardedrisk.com http://www.GuardedRisk.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] ADMIN: Re: Iran and murder
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I think this thread has run its course and is sufficiently off topic for this list, so I am declaring it closed. Thank you Tamzen -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSWDC65/HCKu9Iqw4RAk3YAKCxoX20Ofj4FFGUDxD8x3GVgpSd2gCg38TQ iCjYvp3O1v7rnjUFil6bDrM= =WWIe -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 11, 2013, at 1:48 AM, ianG i...@iang.org wrote: ... What's your goal? I would say you could do this if the goal was ultimate security. But for most purposes this is overkill (and I'd include online banking, etc, in that). We were talking about how hard it is to solve crypto protocol problems by getting the protocol right the first time, so we don't end up with fielded stuff that's weak but can't practically be fixed. One approach I can see to this is to have multiple layers of crypto protocols that are as independent as possible in security terms. The hope is that flaws in one protocol will usually not get through the other layer, and so they won't lead to practical security flaws. Actually getting the outer protocol right the first time would be better, but we haven't had great success with that so far. Right now we've got a TCP startup, and a TLS startup. It's pretty messy. Adding another startup inside isn't likely to gain popularity. Maybe not, though I think a very lightweight version of the inner protocol adds only a few bits to the traffic used and a few AES encryptions to the workload. I suspect most applications would never notice the difference. (Even the version with the ECDH key agreement step would probably not add noticeable overhead for most applications.) On the other hand, I have no idea if anyone would use this. I'm still at the level of thinking "what could be done to address this problem", not "how would you sell this?" iang --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 10/10/2013 6:40 PM, grarpamp wrote: On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl. Don't include any information about To receive prism-proof email, subscribe to the irrefrangible mailing list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/. Use a This is the same as NNTP, but worse in that it's not distributed. Is this not essentially alt.anonymous.messages, etc? http://ritter.vg/blog-deanonymizing_amm.html http://ritter.vg/blog-deanonymizing_amm_followup1.html ? - -- -BEGIN PGP SIGNATURE- Version: GnuPG v2.0.20 (MingW32) iQEcBAEBAgAGBQJSV6VAAAoJEDMbeBxcUNAekEcIAIYsHOI384C4RJfNdBcpD6NR a40C4LTQOwPJV335zUWWHjc6+6ZlUwwHimk2IQebNcEflNJn55O7k3N4CS7i4qtp A9dxDxilCrSpwwwPnsso5bfrA2/PEVfux1yzCZ4lmf39xwl/y/0PyBO7DB8CMQcA YatmYtzFAWktLYZSDuMIJPnzSKuaOnEQSiOXwCCTwgSIo3QRoNP+01JprroT168e mylxsVP2R46YIIWx6uWl+oU2oflaa3/r/nLdS2OCV99uZXmu8UlJAVNq222YwELn yhvkasfkRHtE6AhK1t5y9c4dB9cz5v2hTKNFlaRVf0PyA59ZRu8EAoZnWcJCDrM= =gsqL -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
On Thu, Oct 10, 2013 at 03:54:26PM -0400, John Kelsey wrote: Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. This is what Bitmessage attempts to achieve, but it has issues. Assuming these can be solved (a rather large if), and glue like https://bitmessage.ch/ is available to be run by end users it could be quite useful. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PGP Key Signing parties
On Thu, Oct 10, 2013 at 04:24:19PM -0700, Glenn Willen wrote: I am going to be interested to hear what the rest of the list says about this, because this definitely contradicts what has been presented to me as 'standard practice' for PGP use -- verifying identity using government issued ID, and completely ignoring personal knowledge. This obviously ignores the threat model of official fake IDs. This is not just academic for some users. Plus, consider e.g. linking up with known friends in RetroShare, which implements identities via PGP keys and degrees of trust (none/marginal/full) via signatures, and allows you to tune your co-operative variables (anonymous routing/discovery/forums/channels/use a direct source, if available) depending on the degree of trust. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/10/13 19:06 PM, John Kelsey wrote: Just thinking out loud The administrative complexity of a cryptosystem is overwhelmingly in key management and identity management and all the rest of that stuff. So imagine that we have a widely-used inner-level protocol that can use strong crypto, but also requires no external key management. The purpose of the inner protocol is to provide a fallback layer of security, so that even an attack on the outer protocol (which is allowed to use more complicated key management) is unlikely to be able to cause an actual security problem. On the other hand, in case of a problem with the inner protocol, the outer protocol should also provide protection against everything. Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this: a. Generate random a and send aG on curve P256 b. Generate random b and send bG on curve P256 c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. The point is, this is a protocol that happens *inside* the main security protocol. This happens inside TLS or whatever. An attack on TLS then leads to an attack on the whole application only if the TLS attack also lets you do man-in-the-middle attacks on the inner protocol, or if it exploits something about certificate/identity management done in the higher-level protocol. (Ideally, within the inner protocol, you do some checking of the identity using a password or shared secret or something, but that's application-level stuff the inner and outer protocols don't know about.) Thoughts? What's your goal? 
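[A minimal sketch of the inner protocol steps a-d above, assuming the pyca/cryptography package. Note "SHAKE512" is not a standard name; SHAKE256 stands in for it here, and the 12-byte nonce encoding and 128-bit per-direction keys are illustrative choices, not part of the proposal.]

```python
# Sketch of the proposed inner protocol (ephemeral ECDH on P-256, key
# derivation via SHAKE, AES-CCM with sequence-number nonces).
import hashlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# a./b. Each side generates an ephemeral key pair; the public point is sent.
a = ec.generate_private_key(ec.SECP256R1())
b = ec.generate_private_key(ec.SECP256R1())

# c. Both sides derive the shared point abG and expand it into one
#    AES key per direction (SHAKE256 used here in place of "SHAKE512").
shared = a.exchange(ec.ECDH(), b.public_key())
assert shared == b.exchange(ec.ECDH(), a.public_key())
okm = hashlib.shake_256(shared).digest(32)
key_ab, key_ba = okm[:16], okm[16:]

# d. Sequence numbers double as CCM nonces; each receiver tracks the last
#    sequence number seen, so replays and reordering are detectable.
def seal(key: bytes, seq: int, msg: bytes) -> bytes:
    return AESCCM(key).encrypt(seq.to_bytes(12, "big"), msg, None)

def unseal(key: bytes, seq: int, ct: bytes) -> bytes:
    return AESCCM(key).decrypt(seq.to_bytes(12, "big"), ct, None)

ct = seal(key_ab, 1, b"hello")
assert unseal(key_ab, 1, ct) == b"hello"
```

As the thread notes, this gives confidentiality and integrity with zero key management, but no authentication: a man-in-the-middle of the inner exchange defeats it, which is why it is pitched as a fallback layer inside an authenticated outer protocol.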
I would say you could do this if the goal was ultimate security. But for most purposes this is overkill (and I'd include online banking, etc, in that). Right now we've got a TCP startup, and a TLS startup. It's pretty messy. Adding another startup inside isn't likely to gain popularity. (Which was one thing that suggests a redesign of TLS -- to integrate back into IP layer and replace/augment TCP directly. Back in those days we -- they -- didn't know enough to do an integrated security protocol. But these days we do, I'd suggest, or we know enough to give it a try.) iang ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PGP Key Signing parties
Reply to various, Yes, the value in a given key signing is weak, in fact every link in the web of trust is terribly weak. However, if you notarize and publish the links in CT fashion then I can show that they actually become very strong. I might not have good evidence of John Gilmore's key at RSA 2001, but I could get very strong evidence that someone signed a JG key at RSA 2001. Which is actually quite a high bar since the attacker would have to buy a badge, which is $2,000. Even if they were going to go anyway and it is a sunk cost, they are rate limited. The other attacks John raised are valid but I think they can be dealt with by adequate design of the ceremony to ensure that it is transparent. Now stack that information alongside other endorsements and we can arrive at a pretty strong authentication mechanism. The various mechanisms used to evaluate the trust can also be expressed in the endorsement links. What I am trying to solve here is the distance problem in Web o' trust. At the moment it is pretty well impossible for me to have confidence in keys for people who are ten degrees out. Yet I am pretty confident of the accuracy of histories of what happened 300 years ago (within certain limits). It is pretty easy to fake a web of trust, I can do it on one computer, no trouble. But if the web is grounded at just a few points to actual events then it becomes very difficult to spoof. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Key stretching
All, Quick question, anyone got a good scheme for key stretching? I have this scheme for managing private keys that involves storing them as encrypted PKCS#8 blobs in the cloud. AES128 seems a little on the weak side for this but there are (rare) circumstances where a user is going to need to type in the key for recovery purposes so I don't want more than 128 bits of key to type in (I am betting that 128 bits is going to be sufficient to the end of Moore's law). So the answer is to use AES 256 and stretch the key, but how? I could just repeat the key: K = k + k Related key attacks make me a little nervous though. Maybe: K = (k + 01234567) XOR SHA512 (k) -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PGP Key Signing parties
On 2013-10-10 (283), at 19:24:19, Glenn Willen gwil...@nerdnet.org wrote: John, On Oct 10, 2013, at 2:31 PM, John Gilmore wrote: An important user experience point is that we should be teaching GPG users to only sign the keys of people who they personally know. [] would be false and would undermine the strength of the web of trust. I am going to be interested to hear what the rest of the list says about this, because this definitely contradicts what has been presented to me as 'standard practice' for PGP use -- verifying identity using government issued ID, and completely ignoring personal knowledge. Do you have any insight into what proportion of PGP/GPG users mean their signatures as personal knowledge (my preference and evidently yours), versus government ID (my perception of the community standard best practice), versus no verification in particular (my perception of the actual common practice in many cases)? (In my ideal world, we'd have a machine readable way of indicating what sort of verification was performed. Signing policies, not being machine readable or widely used, don't cover this well. There is space for key-value annotations in signature packets, which could help with this if we standardized on some.) Glenn Willen Surely to make it two-factor it needs to be someone you know _and_ something they have? :-) ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
grarpamp wrote: On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl. Don't include any information about To receive prism-proof email, subscribe to the irrefrangible mailing list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/. Use a This is the same as NNTP, but worse in that it's not distributed. This scheme already exists on Usenet/NNTP as alt.anonymous.messages. See the Google groups view here: https://groups.google.com/forum/#!forum/alt.anonymous.messages Erik -- -- Erik de Castro Lopo http://www.mega-nerd.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/10/13 08:41 AM, Bill Frantz wrote: We should try to characterize what a very long time is in years. :-) Look at the produce life cycle for known crypto products. We have some experience of this now. Skype, SSL v2/3 - TLS 0/1/2, SSH 1 - 2, PGP 2 - 5+. As a starting point, I would suggest 10 years. iang ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/10/13 17:58 PM, Salz, Rich wrote: TLS was designed to support multiple ciphersuites. Unfortunately this opened the door to downgrade attacks, and transitioning to protocol versions that wouldn't do this was nontrivial. The ciphersuites included all shared certain misfeatures, leading to the current situation. On the other hand, negotiation let us deploy it in places where full-strength cryptography is/was regulated. That same regulator that asked for that capability is somewhat prominent in the current debacle. Feature or bug? Sometimes half a loaf is better than nothing. A shortage of bread has been the inspiration for a few revolutions :) iang ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
On Thu, Oct 10, 2013 at 04:22:50PM -0400, Jerry Leichter wrote: On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote: Very silly but trivial to implement so I went ahead and did so: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl Nice! I like it. Me too. I've been telling people that all PRISM will accomplish regarding the bad guys is to get them to use dead drops, such as comment posting on any of millions of blogs -- low bandwidth, undetectable. The technique in this thread makes the use of a dead drop obvious, and adds significantly to the recipient's work load, but in exchange brings the bandwidth up to more usable levels. Either way the communicating peers must pre-agree a number of things -- a traffic analysis Achilles' heel, but it's a one-time vulnerability, and chances are people who would communicate this way already have such meetings. A couple of comments: 1. Obviously, this has scaling problems. The interesting question is how to extend it while retaining the good properties. If participants are willing to be identified to within 1/k of all the users of the system (a set which will itself remain hidden by the system), choosing one of k servers based on a hash of the recipient would work. (A concerned recipient could, of course, check servers that he knows can't possibly have his mail.) Can one do better? Each server/list is a channel. Pre-agree on channels or use hashes. If the latter then the hashes have to be of {sender, recipient}, else one party has a lot of work to do, but then again, using just the sender or just the recipient helps protect the other party against traffic analysis. Assuming there are millions of channels then maybe something like H({sender, truncate(H(recipient), log2(number-of-channels-to-check))}) will do just fine. And truncate(H(recipient, log2(num-channels))) can be used for introduction purposes. 
The number of servers/lists divides the total work to do to receive a message. 2. The system provides complete security for recipients (all you can tell about a recipient is that he can potentially receive messages - though the design has to be careful so that a recipient doesn't, for example, release timing information depending on whether his decryption succeeded or not). However, the protection is more limited for senders. A sender can hide its activity by simply sending random messages, which of course no one will ever be able to decrypt. Of course, that adds yet more load to the entire system. But then the sender can't quite prove that they didn't send anything. In a rubber hose attack this could be a problem. This also applies to recipients: they can be observed fetching messages, and they can be observed expending power trying to find ones addressed to them. Also, there's no DoS protection: flooding the lists with bogus messages is a DoS on recipients. Nico -- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
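[The channel-selection hashing Nico sketches can be written out concretely. This is a minimal stdlib sketch; SHA-256, the channel count, and the function names are illustrative assumptions, not a specification.]

```python
# Mapping {sender, recipient} to one of ~2^20 channels (servers/lists),
# per the H({sender, truncate(H(recipient), log2(num-channels))}) idea.
import hashlib

CHANNEL_BITS = 20  # assumed: log2 of the number of channels (~1M)

def _trunc(digest: bytes, bits: int) -> int:
    # keep the top `bits` bits of a hash digest as an integer
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits)

def channel(sender: bytes, recipient: bytes) -> int:
    # H({sender, truncate(H(recipient), log2(num-channels))})
    tag = _trunc(hashlib.sha256(recipient).digest(), CHANNEL_BITS)
    h = hashlib.sha256(sender + tag.to_bytes(4, "big")).digest()
    return _trunc(h, CHANNEL_BITS)

def intro_channel(recipient: bytes) -> int:
    # truncate(H(recipient), log2(num-channels)), for first contact
    return _trunc(hashlib.sha256(recipient).digest(), CHANNEL_BITS)

assert 0 <= channel(b"alice", b"bob") < 2 ** CHANNEL_BITS
```

A recipient only has to trial-decrypt traffic on the channels derived from their known correspondents (plus their introduction channel), which is the work-division property the post describes.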
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
I like the ideas, John. The idea, and the protocol you sketched out, are a little reminiscent of ZRTP ¹ and of tcpcrypt ². I think you can go one step further, however, and make it *really* strong, which is to offer the higher or outer layer a way to hook into the crypto from your inner layer. This could be by the inner layer exporting a crypto value which the outer layer enforces an authorization or authenticity requirement on, as is done in ZRTP if the a=zrtp-hash is delivered through an integrity-protected outer layer, or in tcpcrypt if the Session ID is verified by the outer layer. I think this is a case where a separation of concerns between layers with a simple interface between them can have great payoff. The lower/inner layer enforces confidentiality (encryption), integrity, hopefully forward-secrecy, etc., and the outer layer decides on policy: authorization, naming (which is often but not necessarily used for authorization), etc. The interface between them can be a simple cryptographic interface, for example the way it is done in the two examples above. I think the way that SSL combined transport layer security, authorization, and identification was a terrible idea. I (and others) have been saying all along that it was a bad idea, and I hope that the related security disasters during the last two years have started persuading more people to rethink it, too. I guess the designers of SSL were simply following the lead of the original inventors of public key cryptography, who delegated certain critical unsolved problems to an underspecified Trusted Third Party. What a colossal, historic mistake. The foolscap project ³ by Brian Warner demonstrates that it is possible to retrofit a nice abstraction layer onto SSL. 
The way that it does this is that each server automatically creates a self-signed certificate, the secure hash of that certificate is embedded into the identifier pointing at that server, and the client requires the server's public key match the certificate matching that hash. The fact that this is a useful thing to do, and an inconvenient and rare thing to do with SSL, should give security architects food for thought. So I have a few suggestions for you: 1. Go, go, go! The path your thoughts are taking seems fruitful. Just design a really good inner layer of crypto, without worrying (for now) about the vexing and subtle problems of authorization, authentication, naming, Man-In-The-Middle-Attack and so on. For now. 2. Okay, but leave yourself an out, by defining a nice simple cryptographic hook by which someone else who *has* solved those vexing problems could extend the protection that they've gained to users of your protocol. 3. Maybe study ZRTP and tcpcrypt for comparison. Don't try to study foolscap, even though it is a very interesting practical approach, because there doesn't exist documentation of the protocol at the right level for you to learn from. Regards, Zooko https://LeastAuthority.com ← verifiably end-to-end-encrypted storage P.S. Another example that you and I should probably study is cjdns ⁴. Despite its name, it is *not* a DNS-like thing. It is a transport-layer thing. I know less about cjdns so I didn't cite it as a good example above. ¹ https://en.wikipedia.org/wiki/ZRTP ² http://tcpcrypt.org/ ³ http://foolscap.lothar.com/docs/using-foolscap.html ⁴ http://cjdns.info/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
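[The foolscap-style "hash in the identifier" check Zooko describes is simple to sketch. This is illustrative only: the hash function, identifier length, and function names here are assumptions, not foolscap's actual encoding.]

```python
# Pinning a server key by embedding a certificate hash in its identifier:
# the URL itself commits to the key, so no third-party CA is consulted.
import hashlib

def server_id(cert_der: bytes) -> str:
    # derive the identifier from the server's self-signed certificate
    return hashlib.sha256(cert_der).hexdigest()[:32]

def check_server(expected_id: str, presented_cert_der: bytes) -> bool:
    # the client recomputes the hash of whatever certificate the server
    # presents and compares it to the identifier obtained out of band
    return server_id(presented_cert_der) == expected_id

cert = b"fake DER bytes for illustration"
ident = server_id(cert)
assert check_server(ident, cert)
assert not check_server(ident, b"attacker's certificate")
```

The security of the scheme reduces to the integrity of the channel that delivered the identifier, which is exactly the "simple cryptographic hook" separation of concerns Zooko is advocating.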
Re: [Cryptography] PGP Key Signing parties
On 10 October 2013 22:31, John Gilmore g...@toad.com wrote: Does PGP have any particular support for key signing parties built in or is this just something that has grown up as a practice of use? It's just a practice. I agree that building a small amount of automation for key signing parties would improve the web of trust. Do key signing parties even happen much anymore? The last time I saw one advertised was around PGP 2.6! I am specifically thinking of ways that key signing parties might be made scalable so that it was possible for hundreds of thousands of people... An important user experience point is that we should be teaching GPG users to only sign the keys of people who they personally know. Having a signature that says, This person attended the RSA conference in October 2013 is not particularly useful. (Such a signature could be generated by the conference organizers themselves, if they wanted to.) Since the conference organizers -- and most other attendees -- don't know what an attendee's real identity is, their signature on that identity is worthless anyway. I can sign the public keys of people I personally know without a key signing party. :-) For many purposes I don't care about a person's official, legal identity, but I do want to communicate with a particular persona. For instance at DefCon or CCC I neither know nor care whether someone identifies themselves to me by their legal name or hacker handle, but it is very useful to know and authenticate that they are in control of a private PGP/GPG key in that name on a particular date. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.
Saw this on Arstechnica today and thought I'd pass along the link. http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/ More detailed version of the story available at: https://factorable.net/paper.html Short version: Taiwanese Government issued smartcards to citizens. Each has a 1024 bit RSA key. The keys were created using a borked RNG. It turns out many of the keys are broken, easily factored, or have factors in common, and up to 0.4% of these cards in fact provide no encryption whatsoever (RSA keys are flat out invalid, and there is a fallback to unencrypted operation). This is despite meeting (for some inscrutable definition of meeting) FIPS 140-2 Level 2 and Common Criteria standards. These standards require steps that were clearly not done here. Yet, validation certificates were issued. Taiwan is now in the process of issuing a new generation of smartcards; I hope they send the clowns who were supposed to test the first generation a bill for that. Bear ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
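[The "factors in common" failure mode described above is cheap to exploit: a pairwise GCD of the moduli recovers the shared prime, factoring both keys at once, which is exactly what the factorable.net work did at scale. A toy stdlib illustration (tiny primes standing in for real 512-bit ones):]

```python
# GCD attack on RSA moduli generated by a broken RNG that re-used a prime.
from math import gcd

p, q1, q2 = 10007, 10009, 10037  # small primes standing in for real ones
n1, n2 = p * q1, p * q2          # two victims' moduli share the prime p

shared = gcd(n1, n2)             # Euclid's algorithm: fast even at 1024 bits
assert shared == p               # the shared prime falls out immediately...
assert (n1 // shared, n2 // shared) == (q1, q2)  # ...factoring both keys
```

No factoring breakthrough is needed; the only hard part at scale is computing GCDs over millions of key pairs efficiently, which the paper does with a product-tree method.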
Re: [Cryptography] Key stretching
This is a job for a key derivation function or a cryptographic PRNG. I would use CTR-DRBG from SP 800-90 with AES256. Or the extract-then-expand KDF based on HMAC-SHA512. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
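[The extract-then-expand construction John mentions is standardized as HKDF (RFC 5869) and fits Phillip's use case directly: stretch the 128-bit typed-in key to a 256-bit AES key. A stdlib sketch, with the `info` label as an illustrative assumption:]

```python
# HKDF (RFC 5869) extract-then-expand with HMAC-SHA512, in pure stdlib.
import hashlib
import hmac

def hkdf_sha512(ikm: bytes, length: int, salt: bytes = b"", info: bytes = b"") -> bytes:
    if not salt:
        salt = b"\x00" * 64                      # default salt: hash-length zeros
    prk = hmac.new(salt, ikm, hashlib.sha512).digest()   # extract
    okm, t, counter = b"", b"", 1
    while len(okm) < length:                     # expand: T(i) = HMAC(PRK, T(i-1)|info|i)
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha512).digest()
        okm += t
        counter += 1
    return okm[:length]

k128 = bytes(16)                                 # the 128-bit key the user types in
aes256_key = hkdf_sha512(k128, 32, info=b"pkcs8-wrap")   # stretched to 256 bits
assert len(aes256_key) == 32
```

Unlike the ad-hoc K = k + k or XOR-with-SHA512 constructions in the original question, this gives output indistinguishable from random under standard HMAC assumptions, and the `info` parameter cleanly separates keys derived for different purposes from the same master key.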
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/11/13 at 10:32 AM, zoo...@gmail.com (Zooko O'Whielacronx) wrote: Don't try to study foolscap, even though it is a very interesting practical approach, because there doesn't exist documentation of the protocol at the right level for you to learn from. Look at the E language sturdy refs, which are a lot like the Foolscap references. They are documented at www.erights.org. Cheers - Bill --- Bill Frantz| Truth and love must prevail | Periwinkle (408)356-8506 | over lies and hate. | 16345 Englewood Ave www.pwpconsult.com | - Vaclav Havel | Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] PGP Key Signing parties
On 2013-10-11, at 07:03, Tony Naggs tonyna...@gmail.com wrote: On 10 October 2013 22:31, John Gilmore g...@toad.com wrote: Does PGP have any particular support for key signing parties built in or is this just something that has grown up as a practice of use? It's just a practice. I agree that building a small amount of automation for key signing parties would improve the web of trust. Do key signing parties even happen much anymore? The last time I saw one advertised was around PGP 2.6! The most recent key signing party I attended was five days ago (DNS-OARC meeting in Phoenix, AZ). I commonly have half a dozen opportunities to participate in key signing parties during a typical year's travel schedule to workshops, conferences and other meetings. This is not uncommon in the circles I work in (netops, dnsops). My habit before signing anything is generally at least to have had a conversation with someone, and observed their interactions with people I do know (I generally have worked with other people at the party). I'll check government-issued IDs, but I'm aware that I am not an expert in counterfeit passports and I never feel that I am able to do a good job at it. (I showed up to a key signing party at the IETF once with a New Zealand passport, a Canadian passport, a British passport, an expired Canadian permanent-resident card, three driving licences and a Canadian health card, and offered the bundle to anybody who cared to review them to make this easier for others. But that was mainly showing off.) I have used key ceremonies to poison edges and nodes in the graph of trust following observations that particular individuals don't do a good enough job of this, or that (in some cases) they appear to have made signatures at an event where I was present and I know they were not. That's a useful adjunct to a key ceremony (I think) that many people ignore. The web of trust can also be a useful web of distrust. 
Joe ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Key stretching
On Oct 11, 2013, at 11:26 AM, Phillip Hallam-Baker hal...@gmail.com wrote: Quick question, anyone got a good scheme for key stretching? I have this scheme for managing private keys that involves storing them as encrypted PKCS#8 blobs in the cloud. AES128 seems a little on the weak side for this but there are (rare) circumstances where a user is going to need to type in the key for recovery purposes so I don't want more than 128 bits of key to type in (I am betting that 128 bits is going to be sufficient to the end of Moore's law). So the answer is to use AES 256 and stretch the key, but how? I could just repeat the key: K = k + k Related key attacks make me a little nervous though. Maybe: The related key attacks out there require keys that differ in a couple of bits. If k and k' aren't related, k+k and k'+k' won't be either. K = (k + 01234567) XOR SHA512 (k) Let's step back a moment and think about attacks: 1. Brute force. No public key-stretching algorithm can help, since the attacker will brute-force the k's, computing the corresponding K's as he goes. 2. Analytic attack against AES128 that doesn't extend, in general, to AES256. Without knowing the nature of the attack, it's impossible to estimate whether knowing that the key has some particular form would allow the attack to extend. If so ... what forms? 3. Analytic attack against AES256. A recognizable form for keys - e.g., k+k - might conceivably help, but it seems like a minor thing. Realistically, k+k, or k padded with 0's, or SHA256(k), are probably equally strong except under any attacks specifically concocted to target them (e.g., suppose it turns out that there just happens to be an analytic attack against AES256 for keys with more than 3/4's of the bits equal to 0). Since you're describing a situation in which performance is not an issue, you might as well use SHA256(k) - whitening the key can't hurt. 
-- Jerry ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.
Dear Ray, On 2013-10-11, at 19:38 , Ray Dillinger b...@sonic.net wrote: This is despite meeting (for some inscrutable definition of meeting) FIPS 140-2 Level 2 and Common Criteria standards. These standards require steps that were clearly not done here. Yet, validation certificates were issued. This is a misunderstanding of the CC certification and FIPS validation processes: the certificates were issued *under the condition* that the software/system built on it uses/implements the RNG tests mandated. The software didn't, invalidating the results of the certifications. At best the mandatory guidance is there because it was too difficult to prove that the smart card meets the criteria without it (typical example in the OS world: the administrator is assumed to be trusted; the typical example in smart card hardware: do the RNG tests!). At worst the mandatory guidance is there because without it, the smart card would not have met the criteria (i.e. without following the guidance there is a vulnerability). This is an example of the latter case. Most likely the software also hasn't implemented the other requirements, leaving it somewhere between somewhat and very vulnerable to standard smart card attacks such as side channel analysis and perturbation. If the total (the smart card + software) had been CC certified, this would have been checked as part of the composite certification. (I've been in the smart card CC world for more than a decade. This kind of misunderstanding/misapplication is rare for the financial world thanks to EMVco, i.e. the credit card companies. It is also rare for European government organisations, as they know to contact the Dutch/French/German/UK agencies involved in these things. European ePassports for example are generally certified for the whole thing, and a mistake of this order in those would be ... surprising, and cause for some intense discussion in the smart card certification community. 
Newer parties into the smart card world tend to have to relearn the lessons again and again it seems.) With kind regards, Wouter Slegers ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
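The RNG tests the mandatory guidance calls for are simple to state; the FIPS 140-2 continuous random number generator test, for example, just compares each fresh output block against the previous one and fails hard on a repeat (a stuck generator is exactly the failure mode behind this smartcard break). A minimal sketch -- the block size and failure behavior here are illustrative, not the exact requirement levied on the card:

```python
import os

class ContinuousRngTest:
    """FIPS 140-2-style continuous RNG test: every newly generated
    block must differ from the immediately preceding block."""

    def __init__(self, source=os.urandom, block_size=16):
        self.source = source          # callable returning n random bytes
        self.block_size = block_size  # illustrative choice
        # the very first block is only compared against, never output
        self.previous = self.source(self.block_size)

    def random_block(self) -> bytes:
        block = self.source(self.block_size)
        if block == self.previous:    # a stuck generator repeats itself
            raise RuntimeError("continuous RNG test failed: repeated block")
        self.previous = block
        return block

rng = ContinuousRngTest()
fresh = rng.random_block()   # raises instead of returning if the source is stuck
```

A generator wired to a dead source (all zeros, say) trips the test on the first call, which is precisely the check the certified software here apparently never ran.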
Re: [Cryptography] PGP Key Signing parties
On 2013-10-11 12:03:44 +0100 (+0100), Tony Naggs wrote: Do key signing parties even happen much anymore? The last time I saw one advertised was around PGP 2.6! [...] Within more active pockets of the global free software community (where OpenPGP signatures are used to authenticate release artifacts, security advisories, election ballots, access controls and so on) key signing parties are an extremely common occurrence... I'd say much more so now than a decade ago, as the community has grown continually and developed an increasing need to be able to recognize one another's output in a verifiable manner, asynchronously, distributed over great distances and across loosely-related subcommunities/projects. -- { PGP( 48F9961143495829 ); FINGER( fu...@cthulhu.yuggoth.org ); WWW( http://fungi.yuggoth.org/ ); IRC( fu...@irc.yuggoth.org#ccl ); WHOIS( STANL3-ARIN ); MUD( kin...@katarsis.mudpy.org:6669 ); } ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Fri, Oct 11, 2013 at 10:32 AM, Zooko O'Whielacronx zoo...@gmail.com wrote: I like the ideas, John. The idea, and the protocol you sketched out, are a little reminiscent of ZRTP ¹ and of tcpcrypt ². I think you can go one step further, however, and make it *really* strong, which is to offer the higher or outer layer a way to hook into the crypto from your inner layer. This could be by the inner layer exporting a crypto value which the outer layer enforces an authorization or authenticity requirement on, as is done in ZRTP if the a=zrtp-hash is delivered through an integrity-protected outer layer, or in tcpcrypt if the Session ID is verified by the outer layer. Hi Zooko, Are you and John talking about the same thing? John's talking about tunnelling a redundant inner record layer of encryption inside an outer record layer (using TLS terminology). I think you're talking about a couple different-but-related things: * channel binding, where an unauthenticated-but-encrypted channel can be authenticated by performing an inside-the-channel authentication which commits to values uniquely identifying the outer channel (note that the inner vs outer distinction has flipped around here!) * out-of-band verification, where a channel is authenticated by communicating values identifying the channel (fingerprint, SAS, sessionIDs) over some other, authenticated channel (e.g. ZRTP's use of the signalling channel to protect the media channel). So I think you're focusing on *modularity* between authentication methods and the record layer, whereas I think John's getting at *redundancy*. I think the way that SSL combined transport layer security, authorization, and identification was a terrible idea. I (and others) have been saying all along that it was a bad idea, and I hope that the related security disasters during the last two years have started persuading more people to rethink it, too. This seems like a different thing again. 
I agree that TLS could have been more modular wrt key agreement and public-key authentication. It would be nice if the keys necessary to compute a TLS handshake were part of TLS, instead of requiring X.509 certs. This would avoid self-signed certs, and would allow the client to request various proofs for the server's public key, which could be X.509, other cert formats, or other info (CT, TACK, DNSSEC, revocation data, etc.). But this seems like a minor layering flaw; I'm not sure it should be blamed for any TLS security problems. The problems with chaining CBC IVs, plaintext compression, authenticate-then-encrypt, renegotiation, and a non-working upgrade path aren't solved by better modularity, nor are they solved by redundancy. They're solved by making better choices. I guess the designers of SSL were simply following the lead of the original inventors of public key cryptography, who delegated certain critical unsolved problems to an underspecified Trusted Third Party. What a colossal, historic mistake. If you're talking about the New Directions paper, Diffie and Hellman talk about a public file. Certificates were a later idea, due to Kohnfelder... I'd argue that's where things went wrong... 1. Go, go, go! The path your thoughts are taking seems fruitful. Just design a really good inner layer of crypto, without worrying (for now) about the vexing and subtle problems of authorization, authentication, naming, Man-In-The-Middle-Attack and so on. For now. That's easy though, right? Use a proper KDF from a shared secret, do authenticated encryption, don't f*ck up the IVs. The worthwhile problems are the hard ones, no? :-) Trevor ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Key stretching
Phillip Hallam-Baker hal...@gmail.com writes: Quick question, anyone got a good scheme for key stretching? http://lmgtfy.com/?q=hkdfl=1 Peter :-). ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
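For anyone who doesn't want to follow the lmgtfy link: HKDF (RFC 5869) is just HMAC applied twice, extract then expand, and fits in a few lines of stdlib Python. A sketch for illustration, not a vetted implementation:

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes = b"", info: bytes = b"", length: int = 32) -> bytes:
    """HKDF per RFC 5869 with SHA-256: extract a pseudorandom key from the
    input keying material, then expand it into `length` bytes bound to the
    application context `info`."""
    hash_len = hashlib.sha256().digest_size
    # Extract: PRK = HMAC(salt, IKM); an absent salt defaults to HashLen zeros
    prk = hmac.new(salt or b"\x00" * hash_len, ikm, hashlib.sha256).digest()
    # Expand: T(i) = HMAC(PRK, T(i-1) || info || i), concatenated and truncated
    okm, t = b"", b""
    for i in range((length + hash_len - 1) // hash_len):
        t = hmac.new(prk, t + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

key = hkdf(b"shared secret", salt=b"app salt", info=b"encryption key", length=32)
```

Note that HKDF is key *derivation*, not key *stretching* in the password sense; for low-entropy inputs you still want PBKDF2/scrypt in front of it.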
[Cryptography] was this FIPS 186-1 (first DSA) an attempted NSA backdoor?
Some may remember Bleichenbacher found a random number generator bias in the original DSA spec that could leak the key after some number of signatures, depending on the circumstances. It's described in this summary of DSA issues by Vaudenay, Evaluation Report on DSA http://www.ipa.go.jp/security/enc/CRYPTREC/fy15/doc/1002_reportDSA.pdf Bleichenbacher's attack is described in section 5. The conclusion is: Bleichenbacher estimates that the attack would be practical for a non-negligible fraction of qs with a time complexity of 2^63, a space complexity of 2^40, and a collection of 2^22 signatures. We believe the attack can still be made more efficient. NIST reacted by issuing special publication SP 800-xx to address it, and I presume that was folded into FIPS 186-3. Of course NIST is down due to the USG political-level stupidity (why take the extra work to switch off the web server on the way out, I don't know). That means 186-1 and 186-2 were vulnerable. An even older NSA sabotage spotted by Bleichenbacher? Anyway it highlights the significant design fragility in DSA/ECDSA: not just in the entropy of the secret key, but in the generation of each and every k value. That leads to the better (but non-NIST-recommended) idea, adopted by various libraries and applied crypto people, to use k=H(m,d) so that the signature is in fact deterministic, and the same k value will only be used with the same message (which is harmless, as that's just reissuing the bitwise-same signature). What happens if a VM is rolled back including the RNG and it outputs the same k value for a different network-dependent m value? etc. It's just unnecessarily fragile in its NIST/NSA-mandated form. Adam ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
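The k=H(m,d) construction Adam describes was later standardized, in refined form, as RFC 6979 (deterministic DSA/ECDSA). A toy sketch of the naive version -- the names d (private key) and q (group order) are the usual DSA ones, and this is the bare idea, NOT the RFC 6979 HMAC-DRBG procedure:

```python
import hashlib

# Order of the P-256 group, used here as the q against which k is reduced
Q = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def deterministic_k(message: bytes, d: int, q: int = Q) -> int:
    """Naive deterministic nonce: k = H(m, d), forced into [1, q-1].
    The same (message, key) pair always reuses the same k, so a rolled-back
    VM re-signing the same message merely reissues the same signature;
    distinct messages get (with overwhelming probability) distinct k."""
    d_bytes = d.to_bytes((q.bit_length() + 7) // 8, "big")
    h = hashlib.sha256(message + d_bytes).digest()
    return int.from_bytes(h, "big") % (q - 1) + 1

k = deterministic_k(b"message to sign", d=0x1234567890ABCDEF)
```

RFC 6979 does the same thing more carefully (HMAC-DRBG, retry loop to avoid modulo bias), but the fragility argument is the same: with k derived from (m, d), the per-signature RNG stops being a single point of catastrophic failure.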
Re: [Cryptography] Iran and murder
The problem with offensive cyberwarfare is that, given the imbalance between attackers and defenders and the expanding use of computer controls in all sorts of systems, a cyber war between two advanced countries will not decide anything militarily, but will leave both combatants much poorer than they were previously, cause some death and a lot of hardship and bitterness, and leave the actual hot war to be fought. Imagine a conflict that starts with both countries wrecking a lot of each others' infrastructure--causing refineries to burn, factories to wreck expensive equipment, nuclear plants to melt down, etc. A week later, that phase of the war is over. Both countries are, at that point, probably 10-20% poorer than they were a week earlier. Both countries have lots of really bitter people out for blood, because someone they care about was killed or their job's gone and their house burned down or whatever. But probably there's been little actual degradation of their standard war-fighting ability. Their civilian aviation system may be shut down, some planes may even have been crashed, but their bombers and fighters and missiles are mostly still working. Fuel and spare parts may be hard to come by, but the military will certainly get first pick. My guess is that what comes next is that the two countries have a standard hot war, but with the pleasant addition of a great depression sized economic collapse for both right in the middle of it. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
Just thinking out loud... The administrative complexity of a cryptosystem is overwhelmingly in key management and identity management and all the rest of that stuff. So imagine that we have a widely-used inner-level protocol that can use strong crypto, but also requires no external key management. The purpose of the inner protocol is to provide a fallback layer of security, so that even an attack on the outer protocol (which is allowed to use more complicated key management) is unlikely to be able to cause an actual security problem. On the other hand, in case of a problem with the inner protocol, the outer protocol should also provide protection against everything. Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this: a. Generate random a and send aG on curve P256 b. Generate random b and send bG on curve P256 c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. The point is, this is a protocol that happens *inside* the main security protocol. This happens inside TLS or whatever. An attack on TLS then leads to an attack on the whole application only if the TLS attack also lets you do man-in-the-middle attacks on the inner protocol, or if it exploits something about certificate/identity management done in the higher-level protocol. (Ideally, within the inner protocol, you do some checking of the identity using a password or shared secret or something, but that's application-level stuff the inner and outer protocols don't know about.) Thoughts? 
--John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
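The shape of steps a-d is easy to mock up. A toy sketch in stdlib Python, with two loudly-flagged substitutions since the stdlib has neither P-256 point arithmetic nor AES-CCM: a small (completely insecure) finite-field Diffie-Hellman group stands in for the P256 exchange, and the per-direction keys are only derived, not actually fed into an AEAD:

```python
import hashlib
import secrets

# Toy group parameters -- stand-ins for curve P256, NOT secure at this size.
P = 2**127 - 1   # a Mersenne prime; a real build would use P-256 or X25519
G = 3

def dh_keypair():
    a = secrets.randbelow(P - 2) + 1      # steps a/b: pick a random exponent...
    return a, pow(G, a, P)                # ...and send g^a (standing in for aG)

def derive_key(shared: int, direction: str) -> bytes:
    # step c: SHAKE over the shared secret, with a direction label mixed in
    # so each side gets its own sending key ("messages in each direction")
    return hashlib.shake_256(shared.to_bytes(16, "big") + direction.encode()).digest(32)

a, A = dh_keypair()                       # initiator
b, B = dh_keypair()                       # responder
shared = pow(B, a, P)                     # both sides now hold g^(ab)
assert shared == pow(A, b, P)

send_key = derive_key(shared, "a->b")
recv_key = derive_key(shared, "b->a")

# step d: a per-direction sequence number doubles as the AEAD nonce input
seq = 0
```

Unauthenticated as specified, so a MITM on the outer protocol who can also MITM this exchange wins -- which is exactly the residual risk John identifies.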
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/9/13 at 7:18 PM, crypto@gmail.com (John Kelsey) wrote: We know how to address one part of this problem--choose only algorithms whose design strength is large enough that there's not some relatively close by time when the algorithms will need to be swapped out. That's not all that big a problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in the far future, your users need only fear cryptanalysis, not Moore's Law. Really, even with 128-bit security level primitives, it will be a very long time until the brute-force attacks are a concern. We should try to characterize what a very long time is in years. :-) This is actually one thing we're kind-of on the road to doing right in standards now--we're moving away from barely-strong-enough crypto and toward crypto that's going to be strong for a long time to come. We had barely-strong-enough crypto because we couldn't afford the computation time for longer key sizes. I hope things are better now, although there may still be a problem for certain devices. Let's hope they are only needed in low security/low value applications. Protocol attacks are harder, because while we can choose a key length, modulus size, or sponge capacity to support a known security level, it's not so easy to make sure that a protocol doesn't have some kind of attack in it. I think we've learned a lot about what can go wrong with protocols, and we can design them to be more ironclad than in the past, but we still can't guarantee we won't need to upgrade. But I think this is an area that would be interesting to explore--what would need to happen in order to get more ironclad protocols? A couple random thoughts: I fully agree that this is a valuable area to research. a. Layering secure protocols on top of one another might provide some redundancy, so that a flaw in one didn't undermine the security of the whole system. Defense in depth has been useful from longer ago than the Trojans and Greeks. b. 
There are some principles we can apply that will make protocols harder to attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is allowed to change its execution path or timing based on the key or plaintext, every message includes a sequence number and the hash of the previous message, etc. This won't eliminate protocol attacks, but will make them less common. I think that the attacks on MAC-then-encrypt and timing attacks were first described within the last 15 years. I think it is only normal paranoia to think there may be some more equally interesting discoveries in the future. c. We could try to treat at least some kinds of protocols more like crypto algorithms, and expect to have them widely vetted before use. Most definitely! Lots of eyes. Formal proofs, because they are a completely different way of looking at things. Simplicity. All will help. What else? ... Perhaps the shortest limit on the lifetime of an embedded system is the security protocol, and not the hardware. If so, how do we as a society deal with this limit? What we really need is some way to enforce protocol upgrades over time. Ideally, there would be some notion that if you support version X of the protocol, this meant that you would not support any version lower than, say, X-2. But I'm not sure how practical that is. This is the direction I'm pushing today. If you look at auto racing you will notice that the safety equipment commonly used before WW2 is no longer permitted. It is patently unsafe. We need to make the same judgements in high security/high risk applications. Cheers - Bill --- Bill Frantz|The nice thing about standards| Periwinkle (408)356-8506 |is there are so many to choose| 16345 Englewood Ave www.pwpconsult.com |from. - Andrew Tanenbaum| Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
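The encrypt-then-MAC rule in (b) is mechanical to apply once stated. A sketch with stdlib primitives, using an HMAC-based toy stream cipher in place of AES (the stdlib has none); the construction, not the cipher, is the point -- note the MAC is checked before any decryption happens, so there is no reaction-attack oracle:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # toy counter-mode stream cipher built from HMAC; stand-in for a real cipher
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    nonce = seq.to_bytes(8, "big")        # the sequence number is the nonce
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # encrypt-THEN-MAC: the tag covers the sequence number and the ciphertext
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):   # reject BEFORE touching the
        raise ValueError("bad MAC")            # ciphertext: one error path only
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
blob = seal(ek, mk, seq=1, plaintext=b"hello")
assert open_(ek, mk, blob) == b"hello"
```

With MAC-then-encrypt the receiver must decrypt first and can leak *why* it failed (padding vs. MAC), which is the whole family of reaction attacks; here every forgery dies on the same line.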
Re: [Cryptography] Iran and murder
2013/10/9 Phillip Hallam-Baker hal...@gmail.com I see cyber-sabotage as being similar to use of chemical or biological weapons: It is going to be banned because the military consequences fall far short of being decisive, are unpredictable and the barriers to entry are low. I doubt that's anywhere near how they'll be treated. Bio and chem are banned for their extreme relative effectiveness and far greater cruelty than most weapons have. Bleeding out is apparently considered quite humane, compared to choking on foamed-up parts of your own lungs. Cyberwarfare will likely be effectively counteracted by better security. The more I think, the less I understand "fall far short of being decisive". If cyber is out you switch to old-school tactics. If chemical or biological happens it's either death for hundreds or thousands or nothing happens. Of course the bigger armies will want to keep it away from the terrorists; it'd level the playing field quite a bit. A 200 losses, 2000 kills battle could turn into 1200 losses, 1700 kills quite fast. But that's not what I'd call a ban. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Elliptic curve question
2013/10/10 Phillip Hallam-Baker hal...@gmail.com The original author was proposing to use the same key for encryption and signature which is a rather bad idea. Explain why, please. It might expand the attack surface, that's true. You could always add a signed message that says I used a key named 'Z' for encryption here. Would that solve the problem? ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/9/13 at 7:12 PM, watsonbl...@gmail.com (Watson Ladd) wrote: On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote: ... As professionals, we have an obligation to share our knowledge of the limits of our technology with the people who are depending on it. We know that all crypto standards which are 15 years old or older are obsolete, not recommended for current use, or outright dangerous. We don't know of any way to avoid this problem in the future. 15 years ago is 1997. Diffie-Hellman is much, much older and still works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988, Schnorr signature 1989. When I developed the VatTP crypto protocol for the E language www.erights.org about 15 years ago, key sizes of 1024 bits were high security. Now they are seriously questioned. 3DES was state of the art. No widely distributed protocols used Feige-Fiat-Shamir or Schnorr signatures. Do any now? I stand by my statement. I think the burden of proof is on the people who suggest that we only have to do it right the next time and things will be perfect. These proofs should address: New applications of old attacks. The fact that new attacks continue to be discovered. The existence of powerful actors subverting standards. The lack of a did right example to point to. ... long post of problems with TLS, most of which are valid criticisms deleted as not addressing the above questions. Protocols involving crypto need to be so damn simple that if it connects correctly, the chance of a bug is vanishingly small. If we make a simple protocol, with automated analysis of its security, the only danger is a primitive failing, in which case we are in trouble anyway. I agree with this general direction, but I still don't have the warm fuzzies that good answers to the above questions might give. I have seen too many projects to do it right that didn't pull it off. See also my response to John Kelsey. 
Cheers - Bill --- Bill Frantz| Privacy is dead, get over| Periwinkle (408)356-8506 | it. | 16345 Englewood Ave www.pwpconsult.com | - Scott McNealy | Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
Watson Ladd watsonbl...@gmail.com writes: The obvious solution: Do it right the first time. And how do you know that you're doing it right? PGP in 1992 adopted a bleeding-edge cipher (IDEA) and was incredibly lucky that it's stayed secure since then. What new cipher introduced up until 1992 has had that distinction? Doing it right the first time is a bit like the concept of stopping rules in heuristic decision-making, if they were that easy then people wouldn't be reading this list but would be in Las Vegas applying the stopping rule stop playing just before you start losing. This is particularly hard in standards-based work because any decision about security design tends to rapidly degenerate into an argument about whose fashion statement takes priority. To get back to an earlier example that I gave on the list, the trivial and obvious fix to TLS of switching from MAC- then-encrypt to encrypt-then-MAC is still being blocked by the WG chairs after nearly a year, despite the fact that a straw poll on the list indicated general support for it (rough consensus) and implementations supporting it are already deployed (running code). So do it right the first time is a lot easier said than done. Peter. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
TLS was designed to support multiple ciphersuites. Unfortunately this opened the door to downgrade attacks, and transitioning to protocol versions that wouldn't do this was nontrivial. The ciphersuites included all shared certain misfeatures, leading to the current situation. On the other hand, negotiation let us deploy it in places where full-strength cryptography is/was regulated. Sometimes half a loaf is better than nothing. /r$ -- Principal Security Engineer Akamai Technology Cambridge, MA ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] prism-proof email in the degenerate case
Very silly but trivial to implement so I went ahead and did so: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl. Don't include any information about the recipient, just send the ciphertext (in some form of ascii armor). Be sure to include something in the message itself to indicate who it's from because no sender information will be retained. To receive prism-proof email, subscribe to the irrefrangible mailing list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/. Use a separate email address for which you can pipe all incoming messages through a script. Upon receipt of a message, have your script attempt to decrypt it. If decryption succeeds (almost never), put it in your inbox. If decryption fails (almost always), put it in the bit bucket. (If you prefer not to subscribe you can instead download messages from the public list archive, but at some point I may discard archived messages and/or stop archiving.) The simple(-minded) idea is that everybody receives everybody's email, but can only read their own. Since everybody gets everything, the metadata is uninteresting and traffic analysis is largely fruitless. Spam isn't an issue because it will be discarded along with all the other mail that fails to decrypt for the recipient. Each group of correspondents can choose its own methods of encryption and key exchange. Scripts interfacing to, e.g., gpg on either end should be straightforward. Enjoy! /tongue-in-cheek ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
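The receiving script is just a trial-decryption filter: run your key over every incoming message, keep the rare success, bit-bucket the rest. A sketch with an HMAC tag standing in for the real gpg decryption (so "decryption succeeds" here means "the tag verifies under my key" -- an illustrative substitute, not the actual mechanism):

```python
import hashlib
import hmac

def try_decrypt(my_key: bytes, blob: bytes):
    """Stand-in for `gpg --decrypt`: treat the last 32 bytes as an HMAC tag;
    a verifying tag means 'this one was for me'. Returns plaintext or None."""
    if len(blob) < 32:
        return None
    body, tag = blob[:-32], blob[-32:]
    if hmac.compare_digest(hmac.new(my_key, body, hashlib.sha256).digest(), tag):
        return body
    return None

def filter_mail(my_key: bytes, broadcast: list) -> list:
    inbox = []
    for blob in broadcast:              # everybody receives everybody's mail...
        msg = try_decrypt(my_key, blob)
        if msg is not None:             # ...decryption almost always fails;
            inbox.append(msg)           # the rare success goes to the inbox
        # everything else silently goes to the bit bucket
    return inbox
```

One caveat Jerry raises downthread applies here too: the filter's observable behavior (timing, bounces) must not depend on whether decryption succeeded, or the recipient anonymity leaks.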
Re: [Cryptography] Iran and murder
2013/10/10 John Kelsey crypto@gmail.com The problem with offensive cyberwarfare is that, given the imbalance between attackers and defenders and the expanding use of computer controls in all sorts of systems, a cyber war between two advanced countries will not decide anything militarily, but will leave both combattants much poorer than they were previously, cause some death and a lot of hardship and bitterness, and leave the actual hot war to be fought. I think you'd only employ most of the offensive means in harmony with the start of the hot war. That makes a lot more sense than annoying your opponent. Imagine a conflict that starts with both countries wrecking a lot of each others' infrastructure--causing refineries to burn, factories to wreck expensive equipment, nuclear plants to melt down, etc. A week later, that phase of the war is over. Both countries are, at that point, probalby 10-20% poorer than they were a week earlier. I think this would cause more than 20% damage (esp. the nuclear reactor!). But I can imagine a slow buildup of disabled things happening. Both countries have lots of really bitter people out for blood, because someone they care about was killed or their job's gone and their house burned down or whatever. But probably there's been little actual degradation of their standard war-fighting ability. Their civilian aviation system may be shut down, some planes may even have been crashed, but their bombers and fighters and missiles are mostly still working. Fuel and spare parts may be hard to come by, but the military will certainly get first pick. My guess is that what comes next is that the two countries have a standard hot war, but with the pleasant addition of a great depression sized economic collapse for both right in the middle of it. This would be a major plus in the eyes of the countries' leaders. Motivating people for war is the hardest thing about it. I do think the military relies heavily on electronic tools for coordination. 
And I think they have plenty of parts stockpiled for a proper blitzkrieg. Most of the things you mentioned can be achieved with infiltration and covert operations, which are far more traditional. And far harder to do at great scale. But they are not done until there is already a significant blood thirst. I'm not sure what'd happen, simply put. But I think it'll become just another aspect of warfare. It is already another aspect of covert operations, and we haven't lived through a high-tech vs high-tech war. And if it does happen, the chance we live to talk about it is less than I'd like. You pose an interesting notion about the excessiveness of causing a great depression before the first bullets fly. I counter that with the effects of conventional warfare being more excessively destructive. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] Other Backdoors?
I sarcastically proposed the use of GOST as an alternative to NIST crypto. Someone shot back a note saying the elliptic curves might be 'bent'. Might be interesting for the EC folk to take another look at GOST, since it might be the case that the GRU and the NSA both found a similar backdoor but one was better at hiding it than the other. On the NIST side, can anyone explain the reason for this mechanism for truncating SHA-512? Denote H(0)′ to be the initial hash value of SHA-512 as specified in Section 5.3.5 above. Denote H(0)′′ to be the initial hash value computed below. H(0) is the IV for SHA-512/t. For i = 0 to 7 { H(0)′′_i = H(0)′_i ⊕ a5a5a5a5a5a5a5a5 (in hex) } H(0) = SHA-512("SHA-512/t") using H(0)′′ as the IV, where t is the specific truncation value. (end.) [Can't link to FIPS 180-4 right now as it's down] I really don't like the futzing with the IV like that, not least because a lot of implementations don't give access to the IV. Certainly the object oriented ones I tend to use don't. But does it make the scheme weaker? Is there anything wrong with just truncating the output? The only advantage I can see to the idea is to stop the truncated digest being used as leverage to reveal the full digest in a scheme where one was public and the other was not. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
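On "just truncating the output": a plain truncation is a verbatim prefix of the full digest, so publishing SHA-512/t(m) would hand out the first t bits of SHA-512(m) -- exactly the leverage scenario in the last sentence above. Deriving a distinct IV makes SHA-512/t and SHA-512 behave as unrelated functions. The prefix property of naive truncation is easy to see:

```python
import hashlib

def naive_sha512_trunc(message: bytes, t_bits: int) -> bytes:
    """Truncate SHA-512 output directly: simple, but the result is a literal
    prefix of the full digest, with no domain separation between the two."""
    return hashlib.sha512(message).digest()[: t_bits // 8]

full = hashlib.sha512(b"some message").digest()
trunc = naive_sha512_trunc(b"some message", 256)
assert full.startswith(trunc)   # the truncated hash reveals 256 bits of the full one
```

The real SHA-512/256 cannot be reproduced this way in most hash APIs, which is Phillip's implementation complaint: the XOR-with-a5a5... IV step needs access to the chaining-value initialization that object-oriented hash interfaces typically don't expose.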
Re: [Cryptography] prism-proof email in the degenerate case
Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote: Just thinking out loud The administrative complexity of a cryptosystem is overwhelmingly in key management and identity management and all the rest of that stuff. So imagine that we have a widely-used inner-level protocol that can use strong crypto, but also requires no external key management. The purpose of the inner protocol is to provide a fallback layer of security, so that even an attack on the outer protocol (which is allowed to use more complicated key management) is unlikely to be able to cause an actual security problem. On the other hand, in case of a problem with the inner protocol, the outer protocol should also provide protection against everything. Without doing any key management or requiring some kind of reliable identity or memory of previous sessions, the best we can do in the inner protocol is an ephemeral Diffie-Hellman, so suppose we do this: a. Generate random a and send aG on curve P256 b. Generate random b and send bG on curve P256 c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. The point is, this is a protocol that happens *inside* the main security protocol. This happens inside TLS or whatever. An attack on TLS then leads to an attack on the whole application only if the TLS attack also lets you do man-in-the-middle attacks on the inner protocol, or if it exploits something about certificate/identity management done in the higher-level protocol. (Ideally, within the inner protcol, you do some checking of the identity using a password or shared secret or something, but that's application-level stuff the inner and outer protocols don't know about. Thoughts? 
Suggest it on the tls wg list as a feature of 1.3? S --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
The simple(-minded) idea is that everybody receives everybody's email, but can only read their own. Since everybody gets everything, the metadata is uninteresting and traffic analysis is largely fruitless. Some traffic analysis is still possible based on just message originator. If I see a message from A, and then soon see messages from B and C, then I can perhaps assume they are collaborating. If A's message is significantly larger than the other two, then perhaps they're taking some kind of vote. So while it's a neat hack, I think the claims are overstated. /r$ -- Principal Security Engineer Akamai Technology Cambridge, MA ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] prism-proof email in the degenerate case
On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote: Very silly but trivial to implement so I went ahead and did so: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl Nice! I like it. A couple of comments: 1. Obviously, this has scaling problems. The interesting question is how to extend it while retaining the good properties. If participants are willing to be identified to within 1/k of all the users of the system (a set which will itself remain hidden by the system), choosing one of k servers based on a hash of the recipient would work. (A concerned recipient could, of course, check servers that he knows can't possibly have his mail.) Can one do better? 2. The system provides complete security for recipients (all you can tell about a recipient is that he can potentially receive messages - though the design has to be careful so that a recipient doesn't, for example, release timing information depending on whether his decryption succeeded or not). However, the protection is more limited for senders. A sender can hide its activity by simply sending random messages, which of course no one will ever be able to decrypt. Of course, that adds yet more load to the entire system. 3. Since there's no acknowledgement when a message is picked up, the number of messages in the system grows without bound. As you suggest, the service will have to throw out messages after some time - but that's a blind process which may discard a message a slow receiver hasn't had a chance to pick up while keeping one that was picked up a long time ago. One way around this, for cooperative senders: When creating a message, the sender selects a random R and appends tag Hash(R). Anyone may later send a you may delete message R message. A sender computes Hash(R), finds any message with that tag, and discards it. 
(It will still want to delete messages that are old, but it may be able to define "old" as a larger value if enough of the senders are cooperative.) Since an observer can already tell who created the message with tag H(R), it would normally be the original sender who deletes his messages. Perhaps he knows they are no longer important; or perhaps he received an application-level acknowledgement message from the recipient. -- Jerry
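Jerry's deletion-tag scheme is simple to sketch. The following is a minimal illustration (the class, function names, and storage layout are my own, not from the thread): the sender publishes only the tag Hash(R) with the message, and the server learns R itself only when the creator decides the message may be deleted.

```python
import hashlib
import secrets

def tag(r: bytes) -> str:
    # The public deletion tag is Hash(R); R itself stays with the sender.
    return hashlib.sha256(r).hexdigest()

class BulletinBoard:
    def __init__(self):
        self.messages = {}  # tag -> encrypted message blob

    def post(self, encrypted_blob: bytes, msg_tag: str):
        self.messages[msg_tag] = encrypted_blob

    def delete(self, r: bytes):
        # Presenting R proves knowledge of the preimage of the tag;
        # the server recomputes Hash(R) and discards the matching message.
        self.messages.pop(tag(r), None)

# Sender side: pick a random R, post the message under the tag Hash(R).
board = BulletinBoard()
r = secrets.token_bytes(16)
board.post(b"...ciphertext...", tag(r))
assert len(board.messages) == 1

# Later, a "you may delete message R" message releases R itself.
board.delete(r)
assert len(board.messages) == 0
```

As the post notes, an observer who saw the original message already knows who created the tag, so releasing R leaks nothing new about the sender.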
Re: [Cryptography] prism-proof email in the degenerate case
Cool. Drop me a note if you want hosting (gratis) for this. On 10/10/13 10:22 PM, Jerry Leichter wrote: [...]
Re: [Cryptography] prism-proof email in the degenerate case
Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. This idea has been around for a while but not built AFAIK. http://petworkshop.org/2003/slides/talks/stef/pet2003/Lucky_Green_Anonmail_PET_2003.ppt
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
More random thoughts: The minimal inner protocol would be something like this: Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit counter, and an AES-CMAC-based KDF, we do the following:

Sender:
a. Generate random 128 bit value R
b. Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c. Sender's 32-bit unsigned counter C[S] starts at 0.
d. Compute IV[S,0] = 96 bits of binary 0s||C[S]
e. Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

Receiver:
a. Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b. Set Receiver's counter C[R] = 0.
c. Compute IV[R,0] = 96 bits of binary 0s||C[R]
d. Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on. Note that in this protocol, we never send a key or IV or nonce. The total communications overhead of the inner protocol is an extra 160 bits in the first message and an extra 32 bits thereafter. We're assuming the outer protocol is taking care of message ordering and guaranteed delivery--otherwise, we need to do something more complicated involving replay windows and such, and probably have to send along the message counters. This doesn't provide a huge amount of extra protection--if the attacker can recover more than a very small number of bits from the first message (attacking through the outer protocol), then the security of this protocol falls apart. But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or SSH or whatever other thing we're doing. Both this and the previous protocol I sketched have the property that they expect to be able to generate random numbers. There's a problem there, though--if the system RNG is weak or trapdoored, it could compromise both the inner and outer protocol at the same time. One way around this is to have each endpoint that uses the inner protocol generate its own internal secret AES key, Q[i]. Then, when it's time to generate a random value, the endpoint asks the system RNG for a random number X, and computes E_Q(X).
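The key-derivation step of the sketch above can be written out concretely. This is a stdlib-only illustration, with HMAC-SHA256 in counter mode standing in for the AES-CMAC-based KDF John mentions (the real thing would follow NIST SP 800-108 with CMAC); all function names and the split layout are my own:

```python
import hmac
import hashlib

def kdf(r: bytes, nbits: int) -> bytes:
    # Counter-mode KDF (SP 800-108 style), with HMAC-SHA256 standing in
    # for AES-CMAC so the sketch needs only the standard library.
    out = b""
    counter = 0
    while len(out) * 8 < nbits:
        counter += 1
        out += hmac.new(r, counter.to_bytes(4, "big"), hashlib.sha256).digest()
    return out[: nbits // 8]

def derive_session_keys(r: bytes):
    # K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
    material = kdf(r, 128 + 96 + 128 + 96)
    k_s, n_s = material[0:16], material[16:28]
    k_r, n_r = material[28:44], material[44:56]
    return k_s, n_s, k_r, n_r

def iv(counter: int) -> bytes:
    # IV = 96 bits of binary 0s || 32-bit unsigned message counter
    return b"\x00" * 12 + counter.to_bytes(4, "big")

k_s, n_s, k_r, n_r = derive_session_keys(b"\x01" * 16)
assert len(k_s) == 16 and len(n_s) == 12 and len(k_r) == 16 and len(n_r) == 12
assert iv(0) == b"\x00" * 16
```

Both endpoints call derive_session_keys on the same R and get the same four values, which is why nothing but R ever needs to cross the wire.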
If the attacker knows Q but the system RNG is secure, we're fine. Similarly, if the attacker can predict X but doesn't know Q, we're fine. Even when the attacker can choose the value of X, he can really only force the random value in the beginning of the protocol to repeat. In this protocol, that doesn't do much harm. The same idea works for the ECDH protocol I sketched earlier. I request two 128 bit random values from the system RNG, X, X'. I then use E_Q(X)||E_Q(X') as my ephemeral DH private key. If an attacker knows Q but the system RNG is secure, then we get an unpredictable value for the ECDH key agreement. If an attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral private key is. If he forces it to a repeated value, he still doesn't weaken anything except this run of the protocol--no long-term secret is leaked if AES isn't broken. This is subject to endless tweaking and improvement. But the basic idea seems really valuable:

a. Design an inner protocol, whose job is to provide redundancy in security against attacks on the outer protocol.

b. The inner protocol should be:
(i) As cheap as possible in bandwidth and computational terms.
(ii) Flexible enough to be used extremely widely, implemented in most places, etc.
(iii) Administratively free, adding no key management or related burdens.
(iv) Free from revisions or updates, because the whole point of the inner protocol is to provide redundant security. (That's part of administratively free.)
(v) There should be one or at most two versions (maybe something like the two I've sketched, but better thought out and analyzed).

c. As much as possible, we want the security of the inner protocol to be independent of the security of the outer protocol. (And we want this without wanting to know exactly what the outer protocol will look like.) This means:
(i) No shared keys or key material or identity strings or anything.
(ii) The inner protocol can't rely on the RNG being good.
(iii) Ideally, the crypto algorithms would be different, though that may impose too high a cost. At least, we want as many of the likely failure modes as possible to be different. Comments? I'm not all that concerned with the protocol being perfect, but what do you think of the idea of doing this as a way to add redundant security against protocol attacks? --John
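The RNG-hardening trick above (generate a local secret Q, output E_Q(X)) is tiny to implement. Here is a sketch, with HMAC-SHA256 standing in for the AES encryption E_Q since the Python standard library has no AES; the class and parameter names are mine:

```python
import hmac
import hashlib
import secrets

class HardenedRNG:
    """Mix a per-endpoint secret Q into system RNG output, so that a
    weak or trapdoored system RNG alone (or a leaked Q alone) is not fatal."""

    def __init__(self):
        # Q is generated once, locally, and never leaves the endpoint.
        self.q = secrets.token_bytes(16)

    def random_128(self, system_rng=secrets.token_bytes) -> bytes:
        x = system_rng(16)  # possibly-suspect system RNG output
        # E_Q(X): a keyed function of X under Q (HMAC here; AES in the post).
        return hmac.new(self.q, x, hashlib.sha256).digest()[:16]

# Even if an attacker fully controls X, outputs differ across endpoints,
# because each endpoint has its own Q; the worst he can force is a repeat.
evil_rng = lambda n: b"\x00" * n
v1 = HardenedRNG().random_128(system_rng=evil_rng)
v2 = HardenedRNG().random_128(system_rng=evil_rng)
assert v1 != v2
```

Note the failure mode matches the post: with a fixed Q and a forced X the value repeats, which in this protocol only affects that one run.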
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 2013-10-10, at 15:29:33, Stephen Farrell stephen.farr...@cs.tcd.ie wrote: On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote: Just thinking out loud [...] c. Both sides derive the shared key abG, and then use SHAKE512(abG) to generate an AES key for messages in each direction. How does this prevent MITM? Where does G come from? I'm also leery of using literally the same key in both directions. Maybe a simple transform would suffice; maybe not. d. Each side keeps a sequence number to use as a nonce. Both sides use AES-CCM with their sequence number and their sending key, and keep track of the sequence number of the most recent message received from the other side. If the same key is used, there needs to be a simple way of ensuring the sequence numbers can never overlap each other. --outer
Re: [Cryptography] prism-proof email in the degenerate case
On 10/10/2013 12:54 PM, John Kelsey wrote: Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. Wrong on both counts, I think. If you make access private, you generate metadata because nobody can get at mail other than their own. If you make access efficient, you generate metadata because you're avoiding the wasted bandwidth that would otherwise prevent the generation of metadata. Encryption is sufficient privacy, and efficiency actively works against the purpose of privacy. The only bow I'd make to efficiency is to split the message stream into channels when it gets to be more than, say, 2GB per day. At that point you would need to know both what channel your recipient listens to *and* the appropriate encryption key before you could send mail. Bear
Re: [Cryptography] PGP Key Signing parties
Does PGP have any particular support for key signing parties built in or is this just something that has grown up as a practice of use? It's just a practice. I agree that building a small amount of automation for key signing parties would improve the web of trust. I have started on a prototype that would automate small key signing parties (as small as 2 people, as large as a few dozen) where everyone present has a computer or phone that is on the same wired or wireless LAN. I am specifically thinking of ways that key signing parties might be made scalable so that it was possible for hundreds of thousands of people... An important user experience point is that we should be teaching GPG users to only sign the keys of people who they personally know. Having a signature that says, "This person attended the RSA conference in October 2013", is not particularly useful. (Such a signature could be generated by the conference organizers themselves, if they wanted to.) Since the conference organizers -- and most other attendees -- don't know what an attendee's real identity is, their signature on that identity is worthless anyway. So, if I participate in a key signing party with a dozen people, but I only personally know four of them, I will only sign the keys of those four. I may have learned a public key for each of the dozen, but that is separate from me signing those keys. Signing them would assert to any stranger that I know that this key belongs to this identity, which would be false and would undermine the strength of the web of trust. John
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge ou...@sympatico.ca wrote: How does this prevent MITM? Where does G come from? I'm assuming G is a systemwide shared parameter. It doesn't prevent MITM--remember, the idea here is to make a fairly lightweight protocol to run *inside* another crypto protocol like TLS. The inner protocol mustn't add administrative requirements to the application, which means it can't need key management from some administrator or something. The goal is to have an inner protocol which can run inside TLS or some similar thing, and which adds a layer of added security without the application getting more complicated by needing to worry about more keys or certificates or whatever. Suppose we have this inner protocol running inside a TLS version that is subject to one of the CBC padding reaction attacks. The inner protocol completely blocks that. I'm also leery of using literally the same key in both directions. Maybe a simple transform would suffice; maybe not. I probably wasn't clear in my writeup, but my idea was to have different keys in different directions--there is a NIST KDF that uses only AES as its crypto engine, so this is relatively easy to do using standard components. --John
Re: [Cryptography] prism-proof email in the degenerate case
On Oct 10, 2013, at 5:20 PM, Ray Dillinger b...@sonic.net wrote: On 10/10/2013 12:54 PM, John Kelsey wrote: Having a public bulletin board of posted emails, plus a protocol for anonymously finding the ones your key can decrypt, seems like a pretty decent architecture for prism-proof email. The tricky bit of crypto is in making access to the bulletin board both efficient and private. Wrong on both counts, I think. If you make access private, you generate metadata because nobody can get at mail other than their own. If you make access efficient, you generate metadata because you're avoiding the wasted bandwidth that would otherwise prevent the generation of metadata. Encryption is sufficient privacy, and efficiency actively works against the purpose of privacy. So the original idea was to send a copy of all the emails to everyone. What I'm wanting to figure out is if there is a way to do this more efficiently, using a public bulletin board like scheme. The goal here would be: a. Anyone in the system can add an email to the bulletin board, which I am assuming is public and cryptographically protected (using a hash chain to make it impossible for even the owner of the bulletin board to alter things once published). b. Anyone can run a protocol with the bulletin board which results in them getting only the encrypted emails addressed to them, and prevents the bulletin board operator from finding out which emails they got. This sounds like something that some clever crypto protocol could do. (It's related to the idea of searching on encrypted data.) And it would make an email system that was really resistant to tracing users. --John
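Property (a) above, the append-only hash chain, is straightforward to sketch (the structure and names here are mine, not from the thread). Each entry commits to the hash of the previous one, so retroactively altering any published message changes every later hash:

```python
import hashlib

class HashChainBoard:
    """Append-only bulletin board: each entry's hash covers the previous
    entry's hash, so the operator cannot silently rewrite history."""

    def __init__(self):
        self.entries = []          # list of (prev_hash, blob) tuples
        self.head = b"\x00" * 32   # hash of the latest entry

    def append(self, blob: bytes) -> bytes:
        entry_hash = hashlib.sha256(self.head + blob).digest()
        self.entries.append((self.head, blob))
        self.head = entry_hash
        return entry_hash

    def verify(self) -> bool:
        # Anyone holding the published head can recheck the whole chain.
        h = b"\x00" * 32
        for prev, blob in self.entries:
            if prev != h:
                return False
            h = hashlib.sha256(h + blob).digest()
        return h == self.head

board = HashChainBoard()
board.append(b"encrypted email 1")
board.append(b"encrypted email 2")
assert board.verify()

# Tampering with an already-published entry breaks verification.
board.entries[0] = (board.entries[0][0], b"altered")
assert not board.verify()
```

Property (b) is the genuinely hard part; it is essentially private information retrieval, which this sketch does not attempt.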
Re: [Cryptography] PGP Key Signing parties
John, On Oct 10, 2013, at 2:31 PM, John Gilmore wrote: An important user experience point is that we should be teaching GPG users to only sign the keys of people who they personally know. Having a signature that says, This person attended the RSA conference in October 2013 is not particularly useful. (Such a signature could be generated by the conference organizers themselves, if they wanted to.) Since the conference organizers -- and most other attendees -- don't know what an attendee's real identity is, their signature on that identity is worthless anyway. So, if I participate in a key signing party with a dozen people, but I only personally know four of them, I will only sign the keys of those four. I may have learned a public key for each of the dozen, but that is separate from me signing those keys. Signing them would assert to any stranger that I know that this key belongs to this identity, which would be false and would undermine the strength of the web of trust. I am going to be interested to hear what the rest of the list says about this, because this definitely contradicts what has been presented to me as 'standard practice' for PGP use -- verifying identity using government-issued ID, and completely ignoring personal knowledge. Do you have any insight into what proportion of PGP/GPG users mean their signatures as personal knowledge (my preference and evidently yours), versus government ID (my perception of the community standard best practice), versus no verification in particular (my perception of the actual common practice in many cases)? (In my ideal world, we'd have a machine readable way of indicating what sort of verification was performed. Signing policies, not being machine readable or widely used, don't cover this well. There is space for key-value annotations in signature packets, which could help with this if we standardized on some.)
Glenn Willen
Re: [Cryptography] prism-proof email in the degenerate case
On 10/10/2013 02:20 PM, Ray Dillinger wrote: split the message stream into channels when it gets to be more than, say, 2GB per day. That's fine, in the case where the traffic is heavy. We should also discuss the opposite case:
*) If the traffic is light, the servers should generate cover traffic.
*) Each server should publish a public key for /dev/null so that users can send cover traffic upstream to the server, without worrying that it might waste downstream bandwidth. This is crucial for deniability: If the rubber-hose guy accuses me of replying to ABC during the XYZ crisis, I can just shrug and say it was cover traffic.
Also:
*) Messages should be sent in standard-sized packets, so that the message length doesn't give away the game.
*) If large messages are common, it might help to have two streams: -- the pointer stream, and -- the bulk stream. It would be necessary to do a trial-decode on every message in the pointer stream, but when that succeeds, it yields a pilot message containing the fingerprints of the packets that should be pulled out of the bulk stream. The first few bytes of the packet should be a sufficient fingerprint. This reduces the number of trial-decryptions by a factor of roughly sizeof(message) / sizeof(packet).
From the keen-grasp-of-the-obvious department:
*) Forward Secrecy is important here.
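The pointer-stream idea above can be made concrete. In this toy version (all names are mine, and HMAC verification stands in for real authenticated decryption), a recipient trial-decodes only the small pointer messages, then pulls matching packets out of the bulk stream by fingerprint:

```python
import hmac
import hashlib

FP_LEN = 8  # the first few bytes of a bulk packet serve as its fingerprint

def seal(key: bytes, payload: bytes) -> bytes:
    # Toy "encryption": tag || payload, where only the intended recipient's
    # key verifies the tag. A real system would use authenticated encryption.
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def trial_open(key: bytes, msg: bytes):
    tag, payload = msg[:32], msg[32:]
    if hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest()):
        return payload  # trial decode succeeded: this pointer is for us
    return None

def fetch(key: bytes, pointer_stream, bulk_stream):
    out = []
    for msg in pointer_stream:
        pilot = trial_open(key, msg)
        if pilot is None:
            continue
        # The pilot message lists fingerprints of our bulk packets.
        wanted = {pilot[i:i + FP_LEN] for i in range(0, len(pilot), FP_LEN)}
        out += [pkt for pkt in bulk_stream if pkt[:FP_LEN] in wanted]
    return out

# Example: one bulk packet addressed to us, one not.
my_key, other_key = b"k1", b"k2"
bulk = [b"AAAAAAAApacket-for-us", b"BBBBBBBBpacket-for-someone-else"]
pointers = [seal(my_key, b"AAAAAAAA"), seal(other_key, b"BBBBBBBB")]
assert fetch(my_key, pointers, bulk) == [b"AAAAAAAApacket-for-us"]
```

Only the short pointer messages are trial-decoded, which is exactly the sizeof(message)/sizeof(packet) saving described above.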
Re: [Cryptography] prism-proof email in the degenerate case
On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote: To send a prism-proof email, encrypt it for your recipient and send it to irrefrangi...@mail.unipay.nl. Don't include any information about To receive prism-proof email, subscribe to the irrefrangible mailing list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/. Use a This is the same as NNTP, but worse in that it's not distributed.
Re: [Cryptography] prism-proof email in the degenerate case
On Thu, 2013-10-10 at 14:20 -0700, Ray Dillinger wrote: Wrong on both counts, I think. If you make access private, you generate metadata because nobody can get at mail other than their own. If you make access efficient, you generate metadata because you're avoiding the wasted bandwidth that would otherwise prevent the generation of metadata. Encryption is sufficient privacy, and efficiency actively works against the purpose of privacy. The only bow I'd make to efficiency is to split the message stream into channels when it gets to be more than, say, 2GB per day. At that point you would need to know both what channel your recipient listens to *and* the appropriate encryption key before you could send mail. This is starting to sound a lot like Bitmessage, isn't it? A central message stream that is split into a tree of streams when it gets too busy, where everyone tries to decrypt every message in their stream to see if they are the recipient. In the case of BM the stream is distributed in a P2P network, the stream of an address is found by walking the tree, and you need a hash-collision proof-of-work in order for other peers to accept your sent messages. The P2P aspect and the proof-of-work (according to the whitepaper[1] it should represent 4 minutes of work on an average computer) probably make it less attractive for mobile devices though. [1] https://bitmessage.org/bitmessage.pdf --ll
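A minimal hashcash-style proof-of-work of the kind Bitmessage (and bitcoin) relies on: grind a nonce until the hash of nonce||message has a required number of leading zero bits. The parameters and names here are mine as an illustration; Bitmessage's actual target computation differs.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(message: bytes, difficulty: int) -> int:
    # Find a nonce such that SHA-256(nonce || message) starts with
    # `difficulty` zero bits. Expected work: about 2**difficulty hashes.
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + message).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, difficulty: int) -> bool:
    # Verification is a single hash, no matter how expensive mining was.
    digest = hashlib.sha256(nonce.to_bytes(8, "big") + message).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = mine(b"hello list", 12)  # a 12-bit PoW takes ~4096 hashes
assert verify(b"hello list", nonce, 12)
```

The asymmetry (expensive to produce, one hash to check) is what lets peers cheaply reject senders who haven't paid the work, at the battery cost for mobile devices noted above.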
Re: [Cryptography] PGP Key Signing parties
On Oct 10, 2013, at 2:31 PM, John Gilmore g...@toad.com wrote: Does PGP have any particular support for key signing parties built in or is this just something that has grown up as a practice of use? It's just a practice. I agree that building a small amount of automation for key signing parties would improve the web of trust. I have started on a prototype that would automate small key signing parties (as small as 2 people, as large as a few dozen) where everyone present has a computer or phone that is on the same wired or wireless LAN. Phil Zimmermann and Jon Callas had started to work on that around 1998; they might still have some of that design around. --Paul Hoffman
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Thu, Oct 10, 2013 at 3:32 PM, John Kelsey crypto@gmail.com wrote: The goal is to have an inner protocol which can run inside TLS or some similar thing [...] Suppose we have this inner protocol running inside a TLS version that is subject to one of the CBC padding reaction attacks. The inner protocol completely blocks that. If you can design an inner protocol to resist such attacks - which you can, easily - why wouldn't you just design the outer protocol the same way? Trevor
Re: [Cryptography] Other Backdoors?
Thursday, October 10, 2013, Phillip Hallam-Baker wrote: [Can't link to FIPS180-4 right now as it's down] For the lazy among us, including my future self, a shutdown-proof URL to the archive.org copy of the NIST FIPS 180-4 PDF: http://tinyurl.com/FIPS180-4 -David Mercer -- David Mercer - http://dmercer.tumblr.com IM: AIM: MathHippy Yahoo/MSN: n0tmusic Facebook/Twitter/Google+/Linkedin: radix42 FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7 PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt Fingerprint: A24F 5816 2B08 5B37 5096 9F52 B182 3349 0F23 225B
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Thursday, October 10, 2013, Salz, Rich wrote: TLS was designed to support multiple ciphersuites. Unfortunately this opened the door to downgrade attacks, and transitioning to protocol versions that wouldn't do this was nontrivial. The ciphersuites included all shared certain misfeatures, leading to the current situation. On the other hand, negotiation let us deploy it in places where full-strength cryptography is/was regulated. Sometimes half a loaf is better than nothing. The last time various SSL/TLS ciphersuites needed to be removed from webserver configurations when I managed a datacenter some years ago led to the following 'failure modes', either from the user's browser now warning or refusing to connect to a server using an insecure cipher suite, or when the only cipher suites used by a server weren't supported by an old browser (or both at once): 1) for sites that had low barriers to switching, loss of traffic/customers to sites that didn't drop the insecure ciphersuites 2) for sites that are harder to leave (your bank, google/facebook level sticky public ones [less common]), large increases in calls to support, with large costs for the business. Non-PCI-compliant businesses taking CC payments are generally so insecure that customers who fled to them really are upping their chances of suffering fraud. In both cases you have a net decrease of security and an increase of fraud and financial loss. So in some cases anything less than a whole loaf, which you can't guarantee for N years of time, isn't 'good enough.' In other words, we are screwed no matter what.
-David Mercer
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Tue, Oct 8, 2013 at 7:38 AM, Jerry Leichter leich...@lrw.com wrote: On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote: If we can't select ciphersuites that we are sure we will always be comfortable with (for at least some foreseeable lifetime) then we urgently need the ability to *stop* using them at some point. The examples of MD5 and RC4 make that pretty clear. Ceasing to use one particular encryption algorithm in something like SSL/TLS should be the easiest case--we don't have to worry about old signatures/certificates using the outdated algorithm or anything. And yet we can't reliably do even that. We seriously need to consider what the design lifespan of our crypto suites is in real life. That data should be communicated to hardware and software designers so they know what kind of update schedule needs to be supported. Users of the resulting systems need to know that the crypto standards have a limited life so they can include update in their installation planning. This would make a great April Fool's RFC, to go along with the classic evil bit. :-( There are embedded systems that are impractical to update and have expected lifetimes measured in decades. RFID chips include cryptography, are completely un-updatable, and have no real limit on their lifetimes - the percentage of the population represented by any given vintage of chips will drop continuously, but it will never go to zero. We are rapidly entering a world in which devices with similar characteristics will, in sheer numbers, dominate the ecosystem - see the remote-controllable Phillips Hue light bulbs (http://www.amazon.com/dp/B00BSN8DLG/) as an early example. (Oh, and there's been an attack against them: http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.
The response from Phillips to that article says "In developing Hue we have used industry standard encryption and authentication techniques [O]ur main advice to customers is that they take steps to ensure they are secured from malicious attacks at a network level." The obvious solution: Do it right the first time. Many of the TLS issues we are dealing with today were known at the time the standard was being developed. RFID usually isn't that security critical: if a shirt insists it's an ice cream, a human will usually be around to see that it is a shirt. AES will last forever, unless cryptanalytic advances develop. Quantum computers will doom ECC, but in the meantime we are good. Cryptography between the two parties authenticating and communicating is a solved problem. What isn't solved, and what is behind many of these issues, is 1) getting the standards committees up to speed and 2) deployment/PKI issues. I'm afraid the reality is that we have to design for a world in which some devices will be running very old versions of code, speaking only very old versions of protocols, pretty much forever. In such a world, newer devices either need to shield their older brethren from the sad realities or relegate them to low-risk activities by refusing to engage in high-risk transactions with them. It's by no means clear how one would do this, but there really aren't any other realistic alternatives. Great big warning lights saying "Insecure device! Do not trust!" If Wells Fargo customers got a "Warning: This site is using outdated security" when visiting it on all browsers, they would fix that F5 terminator currently stopping the rest of us from deploying various TLS extensions. -- Jerry -- Those who would give up Essential Liberty to purchase a little Temporary Safety deserve neither Liberty nor Safety.
-- Benjamin Franklin
Re: [Cryptography] Iran and murder
On Wed, Oct 9, 2013 at 12:44 AM, Tim Newsham tim.news...@gmail.com wrote: We are more vulnerable to widespread acceptance of these bad principles than almost anyone, ultimately, But doing all these things has won larger budgets and temporary successes for specific people and agencies today, whereas the costs of all this will land on us all in the future. The same could be (and has been) said about offensive cyber warfare. I said the same thing in the launch issue of cyber-defense. Unfortunately the editor took it into his head to conflate inventing the HTTP referer field etc. with rather more and so I can't point people at the article as they refuse to correct it. I see cyber-sabotage as being similar to use of chemical or biological weapons: It is going to be banned because the military consequences fall far short of being decisive, are unpredictable and the barriers to entry are low. STUXNET has been relaunched with different payloads countless times. So we are throwing stones the other side can throw back with greater force. We have a big problem in crypto because we cannot now be sure that the help received from the US government in the past has been well intentioned or not. And so a great deal of time is being wasted right now (though we will waste orders of magnitude more of their time). At the moment we have a bunch of generals and contractors telling us that we must spend billions on the ability to attack China's power system in case they attack ours. If we accept that project then we can't share technology that might help them defend their power system which cripples our ability to defend our own. So a purely hypothetical attack promoted for the personal enrichment of a few makes us less secure, not safer. And the power systems are open to attack by sufficiently motivated individuals. The sophistication of STUXNET lay in its ability to discriminate the intended target from others. The opponents we face simply don't care about collateral damage. 
So I am not impressed by people boasting about the ability of some country (not an ally of my country, BTW) to perform targeted murder; that overlooks the fact that others can and likely will retaliate with indiscriminate murder in return. I bet people will be less fond of drones when they start to realize other countries have them as well. Let's just stick to defense and make the NATO civilian infrastructure secure against cyber attack, regardless of what making that technology public might do for what some people insist we should consider enemies. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] The cost of National Security Letters
One of the biggest problems with the current situation is that US technology companies have no ability to convince others that their equipment has not been compromised by a government mandated backdoor. This is imposing a significant and real cost on providers of outsourced Web Services and is beginning to place costs on manufacturers. International customers are learning to shop elsewhere for their IT needs. While moving from the US to the UK might seem to leave the customer equally vulnerable to warrant-less NSA/GCHQ snooping, there is a very important difference. A US provider can be silenced using a National Security Letter which is an administrative order issued by a government agency without any court sanction. There is no equivalent capability in UK law. A UK court can make an intercept order or authorize a search etc. but that is by definition a Lawful Intercept and that capability exists regardless of jurisdiction. What is unique in the US at the moment is the National Security Letter. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Elliptic curve question
On 2013-10-08 03:14, Phillip Hallam-Baker wrote: Are you planning to publish your signing key or your decryption key? Use of a key for one makes the other incompatible. Incorrect. One's public key is always an elliptic point, one's private key is always a number. Thus there is no reason in principle why one cannot use the same key (a number) for signing the messages you send, and decrypting the messages you receive. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
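The point that one scalar can serve both roles can be sketched concretely. This is a toy illustration over secp256k1 (not a vetted construction; the key-separation objection raised later in this thread still applies): the same private scalar d gives a public point Q = d*G that works for an ECDH-style key agreement and for a Schnorr-style signature. The hash encoding here is ad hoc, chosen only for the demonstration.

```python
import hashlib, secrets

# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0]) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

d = secrets.randbelow(n - 1) + 1   # one private scalar...
Q = ec_mul(d, G)                   # ...one public point, used for both roles

# ECDH-style decryption: sender picks ephemeral r; receiver recovers the
# shared point with d alone.
r = secrets.randbelow(n - 1) + 1
shared_sender = ec_mul(r, Q)
shared_receiver = ec_mul(d, ec_mul(r, G))
assert shared_sender == shared_receiver

# Schnorr-style signature with the very same d.
def h(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % n

k = secrets.randbelow(n - 1) + 1
R = ec_mul(k, G)
e = h(R[0].to_bytes(32, "big"), b"message")
s = (k + e * d) % n
lhs = ec_mul(s, G)                 # verification: s*G == R + e*Q
rhs = ec_add(R, ec_mul(e, Q))
assert lhs == rhs
```

Mathematically nothing stops the reuse; whether it is *wise* is the separate question Hallam-Baker raises in his follow-up.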
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote: On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote: We seriously need to consider what the design lifespan of our crypto suites is in real life. That data should be communicated to hardware and software designers so they know what kind of update schedule needs to be supported. Users of the resulting systems need to know that the crypto standards have a limited life so they can include updates in their installation planning. This would make a great April Fool's RFC, to go along with the classic evil bit. :-( I think the situation is much more serious than this comment makes it appear. As professionals, we have an obligation to share our knowledge of the limits of our technology with the people who are depending on it. We know that all crypto standards which are 15 years old or older are obsolete, not recommended for current use, or outright dangerous. We don't know of any way to avoid this problem in the future. I think the burden of proof is on the people who suggest that we only have to do it right the next time and things will be perfect. These proofs should address: New applications of old attacks. The fact that new attacks continue to be discovered. The existence of powerful actors subverting standards. The lack of a did-it-right example to point to. There are embedded systems that are impractical to update and have expected lifetimes measured in decades... Many perfectly good PC's will stay on XP forever because even if there was the will and staff to upgrade, recent versions of Windows won't run on their hardware. ... I'm afraid the reality is that we have to design for a world in which some devices will be running very old versions of code, speaking only very old versions of protocols, pretty much forever. 
In such a world, newer devices either need to shield their older brethren from the sad realities or relegate them to low-risk activities by refusing to engage in high-risk transactions with them. It's by no means clear how one would do this, but there really aren't any other realistic alternatives. Users of this old equipment will need to make a security/cost tradeoff based on their requirements. The ham radio operator who is still running Windows 98 doesn't really concern me. (While his internet connected system might be a bot, the bot controllers will protect his computer from others, so his radio logs and radio firmware update files are probably safe.) I've already commented on the risks of sending Mailman passwords in the clear. Low value/low risk targets don't need titanium security. The power plant which can be destroyed by a cyber attack, cf. STUXNET, does concern me. Gas distribution systems do concern me. Banking transactions do concern me, particularly business accounts. (The recommendations for online business accounts include using a dedicated computer -- good advice.) Perhaps the shortest limit on the lifetime of an embedded system is the security protocol, and not the hardware. If so, how do we as a society deal with this limit? Cheers -- Bill --- Bill Frantz| gets() remains as a monument | Periwinkle (408)356-8506 | to C's continuing support of | 16345 Englewood Ave www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] PGP Key Signing parties
Does PGP have any particular support for key signing parties built in or is this just something that has grown up as a practice of use? I am looking at different options for building a PKI for securing personal communications and it seems to me that the Key Party model could be improved on if there were some tweaks so that key party signing events were a distinct part of the model. I am specifically thinking of ways that key signing parties might be made scalable so that it was possible for hundreds of thousands of people to participate in an event and there were specific controls to ensure that the use of the key party key was strictly bounded in space and time. So for example, it costs $2K to go to RSA. So if there is a key signing event associated that requires someone to be physically present then that is a $2K cost factor that we can leverage right there. Now we can all imagine ways in which folk on this list could avoid or evade such controls but they all have costs. I think it rather unlikely that any of you would want to be attempting to impersonate me at multiple cons. If there is a CT infrastructure then we can ensure that the use of the key party key is strictly limited to that one event and that even if the key is not somehow destroyed after use that it is not going to be trusted. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
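The "strictly bounded in space and time" idea can be sketched in code. This is a hedged illustration only: the event name, timestamps, and record layout are invented, and HMAC stands in for a real signature scheme that a CT-style log would actually record. The point is just that an attestation carries the event's identity and validity window, so verifiers reject anything outside that window even if the event key itself leaks later.

```python
import hashlib, hmac

# Invented event descriptor, purely for illustration.
EVENT = {"id": "RSA-keyparty", "not_before": 1391212800, "not_after": 1391299200}

def attest(event_key: bytes, participant_fpr: str) -> bytes:
    """Bind the participant's key fingerprint to this event's id and window."""
    msg = "|".join([EVENT["id"], str(EVENT["not_before"]),
                    str(EVENT["not_after"]), participant_fpr]).encode()
    return hmac.new(event_key, msg, hashlib.sha256).digest()

def verify(event_key: bytes, participant_fpr: str, tag: bytes, now: int) -> bool:
    if not (EVENT["not_before"] <= now <= EVENT["not_after"]):
        return False  # the key party key is dead outside its time window
    return hmac.compare_digest(tag, attest(event_key, participant_fpr))
```

Even if the event key is not destroyed after use, nothing it signs is trusted once `now` falls outside the window, which is the property the post asks for.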
Re: [Cryptography] AES-256- More NIST-y? paranoia
On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote: On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote: If we are going to always use a construction like AES(KDF(key)), as Nico suggests, why not go further and use a KDF with variable length output like Keccak to replace the AES key schedule? And instead of making provisions to drop in a different cipher should a weakness be discovered in AES, make the number of AES (and maybe KDF) rounds a negotiated parameter. Given that x86 and ARM now have AES round instructions, other cipher algorithms are unlikely to catch up in performance in the foreseeable future, even with a higher AES round count. Increasing round count is effortless compared to deploying a new cipher algorithm, even if provision is made in the protocol. Dropping such provisions (at least in new designs) simplifies everything and simplicity is good for security. That's a really nice idea. It has a non-obvious advantage: Suppose the AES round instructions (or the round key computation instructions) have been spiked to leak information in some non-obvious way - e.g., they cause a power glitch that someone with the knowledge of what to look for can use to read off some of the key bits. The round key computation instructions obviously have direct access to the actual key, while the round computation instructions have access to the round keys, and with the standard round function, given the round keys it's possible to determine the actual key. If, on the other hand, you use a cryptographically secure transformation from key to round key, and avoid the built-in round key instructions entirely; and you use CTR mode, so that the round computation instructions never see the actual data; then the AES round computation functions have nothing useful to leak (unless they are leaking all their output, which would require a huge data rate and would be easily noticed). 
This also means that even if the round instructions are implemented in software which allows for side-channel attacks (i.e., it uses an optimized table implementation against which cache attacks work), there's no useful data to *be* leaked. At least in the Intel AES instruction set, the encode and decode instructions have access to each round key except the first. So they could leak that data, and it's at least conceivable that one can recover the first round key from later ones (perhaps this has been analyzed?). Knowing all the round keys of course enables one to decode the data. Still, this greatly increases the volume of data that must be leaked, and if any instructions are currently spiked, it is most likely the round key generation assist instruction. One could include an IV in the initial hash, so no information could be gained about the key itself. This would work with AES(KDF(key+IV)) as well, however. So this is a mode for safely using possibly rigged hardware. (Of course there are many other ways the hardware could be rigged to work against you. But with their intended use, hardware encryption instructions have a huge target painted on them.) Of course, Keccak itself, in this mode, would have access to the real key. However, it would at least for now be implemented in software, and it's designed to be implementable without exposing side-channel attacks. There are two questions that need to be looked at: 1. Is AES used with (essentially) random round keys secure? At what level of security? One would think so, but this needs to be looked at carefully. The fact that the round keys are simply xor'd with the AES state at the start of each round suggests this is likely secure. One would have to examine the KDF to make sure that there is nothing comparable to the related key attacks on the AES key setup. 2. Is the performance acceptable? The comparison would be to AES(KDF(key)). And in how many applications is key agility critical? 
BTW, some of the other SHA-3 proposals use the AES round transformation as a primitive, so could also potentially be used in generating a secure round key schedule. That might (or might not) put security-critical information back into the hardware instructions. If Keccak becomes the standard, we can expect to see a hardware Keccak-f implementation (the inner transformation that is the basis of each Keccak round) at some point. Could that be used in a way that doesn't give it the ability to leak critical information? -- Jerry Given multi-billion transistor CPU chips with no means to audit them, it's hard to see how they can be fully trusted. Arnold Reinhold ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
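The variable-output-KDF idea above can be sketched with SHAKE-256, a Keccak-family function with arbitrary-length output that is available in Python's hashlib. This is a hedged sketch of the concept only (the domain-separation label is invented): AES-256 performs 15 AddRoundKey steps (14 rounds plus the initial one), so 240 bytes of sponge output replace the AES key schedule entirely, and no round key reveals the master key or any other round key.

```python
import hashlib

def sponge_round_keys(key: bytes, rounds: int = 14) -> list:
    """Derive one 16-byte round key per AddRoundKey step (rounds + 1 of them)
    from the cipher key via a sponge, bypassing the AES key schedule."""
    material = hashlib.shake_256(b"AES-round-keys" + key).digest(16 * (rounds + 1))
    return [material[i:i + 16] for i in range(0, len(material), 16)]

rks = sponge_round_keys(b"\x00" * 32)  # 15 round keys for AES-256
```

With a rigged round-key-assist instruction out of the loop, the hardware only ever sees these pseudorandom round keys, which is exactly the property Leichter's analysis asks question 1 about.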
Re: [Cryptography] Iran and murder
On 2013-10-08 02:03, John Kelsey wrote: Alongside Phillip's comments, I'll just point out that assassination of key people is a tactic that the US and Israel probably don't have any particular advantages in. It isn't in our interests to encourage a worldwide tacit acceptance of that stuff. Israel is famous for its competence in that area. And if the US is famously incompetent, that is probably lack of will, rather than lack of ability. Drones give the US technological supremacy in the selective removal of key people. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] P=NP on TV
On 10/07/2013 05:28 PM, David Johnston wrote: We are led to believe that if it is shown that P = NP, we suddenly have a break for all sorts of algorithms. So if P really does = NP, we can just assume P = NP and the breaks will make themselves evident. They do not. Hence P != NP. As I see it, it's still possible. Proving that a solution exists does not necessarily show you what the solution is or how to find it. And just because a solution is subexponential is no reason a priori to suspect that it's cheaper than some known exponential solution for any useful range of values. So, to me, this is an example of TV getting it wrong. If someone ever proves P=NP, I expect that there will be thunderous excitement in the math community, leaping hopes in the hearts of investors and technologists, and then very careful explanations by the few people who really understand the proof that it doesn't mean we can actually do anything we couldn't do before. Bear ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Iran and murder
We are more vulnerable to widespread acceptance of these bad principles than almost anyone, ultimately, But doing all these things has won larger budgets and temporary successes for specific people and agencies today, whereas the costs of all this will land on us all in the future. The same could be (and has been) said about offensive cyber warfare. --John -- Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
[Cryptography] ADMIN: Reminders and No General Political Discussion please
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 FYI I'm helping Perry out with Moderator duties. I've noticed an upswing on political discussion that are starting to range into security issues and less on Cryptography. Consider this a gentle reminder that that's not really the charter of this group. I understand how it is impossible to separate the two these days, but let's try to lean more toward the technical rather than political side of things. Here's a basic reminder from Perry of the other rules of the mailing list for all the new members. == We've got a very large number of participants on this list, and volume has gone way up at the moment thanks to current events. To make the experience pleasant for everyone please: 1) Cut down the original you're quoting to only the relevant portions to minimize the amount of reading the 1600 people who will be seeing your post will have to do. 2) Do not top post. I've explained why repeatedly. 3) Try to make sure what you are saying is interesting enough and on topic. Minor asides etc. are not. The list is moderated for a reason, and if you top post a one liner followed by a 75 line intact original, be prepared to see a rejection message. Tamzen -BEGIN PGP SIGNATURE- Version: PGP Universal 3.2.0 (Build 1672) Charset: us-ascii wj8DBQFSVb2P5/HCKu9Iqw4RAljLAJ4oh46krUDlyEgV6nTSdvCbc2pL8QCdFiTk jLViuUIhJse2Si23aDHuK2I= =EAqu -END PGP SIGNATURE- ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Elliptic curve question
On Tue, Oct 8, 2013 at 4:14 PM, James A. Donald jam...@echeque.com wrote: On 2013-10-08 03:14, Phillip Hallam-Baker wrote: Are you planning to publish your signing key or your decryption key? Use of a key for one makes the other incompatible. Incorrect. One's public key is always an elliptic point, one's private key is always a number. Thus there is no reason in principle why one cannot use the same key (a number) for signing the messages you send, and decrypting the messages you receive. The original author was proposing to use the same key for encryption and signature which is a rather bad idea. -- Website: http://hallambaker.com/ ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote: On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote: On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote: We seriously need to consider what the design lifespan of our crypto suites is in real life. That data should be communicated to hardware and software designers so they know what kind of update schedule needs to be supported. Users of the resulting systems need to know that the crypto standards have a limited life so they can include updates in their installation planning. This would make a great April Fool's RFC, to go along with the classic evil bit. :-( I think the situation is much more serious than this comment makes it appear. As professionals, we have an obligation to share our knowledge of the limits of our technology with the people who are depending on it. We know that all crypto standards which are 15 years old or older are obsolete, not recommended for current use, or outright dangerous. We don't know of any way to avoid this problem in the future. 15 years ago is 1998. Diffie-Hellman is much, much older and still works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988, Schnorr signatures from 1989. I think the burden of proof is on the people who suggest that we only have to do it right the next time and things will be perfect. These proofs should address: New applications of old attacks. The fact that new attacks continue to be discovered. The existence of powerful actors subverting standards. The lack of a did-it-right example to point to. As one of the Do it right the first time people I'm going to argue that the experience with TLS shows that extensibility doesn't work. TLS was designed to support multiple ciphersuites. Unfortunately this opened the door to downgrade attacks, and transitioning to protocol versions that wouldn't do this was nontrivial. The ciphersuites included all shared certain misfeatures, leading to the current situation. 
TLS is difficult to model: the use of key confirmation makes standard security notions not applicable. The fact that every cipher suite is indicated separately, rather than using generic composition, makes configuration painful. In addition, bugs in widely deployed TLS accelerators mean that the claimed upgradability doesn't actually exist. Implementations can work without supporting very necessary features. Had the designers of TLS used a three-pass Diffie-Hellman protocol with encrypt-then-MAC, rather than the morass they came up with, we wouldn't be in this situation today. TLS was not exploring new ground: it was well-hoed turf intellectually, and they still screwed it up. Any standard is only an approximation to what is actually implemented. Features that aren't used are likely to be skipped or implemented incorrectly. Protocols involving crypto need to be so damn simple that if it connects correctly, the chance of a bug is vanishingly small. If we make a simple protocol, with automated analysis of its security, the only danger is a primitive failing, in which case we are in trouble anyway. There are embedded systems that are impractical to update and have expected lifetimes measured in decades... Many perfectly good PC's will stay on XP forever because even if there was the will and staff to upgrade, recent versions of Windows won't run on their hardware. ... I'm afraid the reality is that we have to design for a world in which some devices will be running very old versions of code, speaking only very old versions of protocols, pretty much forever. In such a world, newer devices either need to shield their older brethren from the sad realities or relegate them to low-risk activities by refusing to engage in high-risk transactions with them. It's by no means clear how one would do this, but there really aren't any other realistic alternatives. -- Those who would give up Essential Liberty to purchase a little Temporary Safety deserve neither Liberty nor Safety. 
-- Benjamin Franklin ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
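The downgrade problem described above can be modeled in a few lines. This is a toy sketch, not TLS: the suite names and "Finished"-style check are invented, and the handshake key is assumed to be agreed already. If neither side authenticates the negotiation transcript, a man-in-the-middle can silently strip the strong ciphersuites from the offer; a MAC over each side's view of the transcript catches the tampering.

```python
import hashlib, hmac

def negotiate(client_offer, server_pref):
    """Server picks its most-preferred suite that the client offered."""
    for suite in server_pref:
        if suite in client_offer:
            return suite
    raise ValueError("no common suite")

offer = ["AES256-GCM", "EXPORT-RC4-40"]
prefs = ["AES256-GCM", "EXPORT-RC4-40"]
assert negotiate(offer, prefs) == "AES256-GCM"

# A MITM strips the strong suite from the offer in transit; negotiation
# still "succeeds", just with the weak suite.
stripped = ["EXPORT-RC4-40"]
assert negotiate(stripped, prefs) == "EXPORT-RC4-40"

# Finished-style transcript check: each end MACs what it *thinks* was sent.
key = b"handshake-derived-key"          # assumed already agreed out of band
client_fin = hmac.new(key, repr(offer).encode(), hashlib.sha256).digest()
server_fin = hmac.new(key, repr(stripped).encode(), hashlib.sha256).digest()
assert client_fin != server_fin         # mismatch exposes the downgrade
```

The mismatch is exactly why transcript authentication matters: the attack succeeds against the unauthenticated negotiation and fails the moment both views of the transcript must agree.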
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 8, 2013, at 4:46 PM, Bill Frantz fra...@pwpconsult.com wrote: I think the situation is much more serious than this comment makes it appear. As professionals, we have an obligation to share our knowledge of the limits of our technology with the people who are depending on it. We know that all crypto standards which are 15 years old or older are obsolete, not recommended for current use, or outright dangerous. We don't know of any way to avoid this problem in the future. We know how to address one part of this problem--choose only algorithms whose design strength is large enough that there's not some relatively close by time when the algorithms will need to be swapped out. That's not all that big a problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in the far future, your users need only fear cryptanalysis, not Moore's Law. Really, even with 128-bit security level primitives, it will be a very long time until the brute-force attacks are a concern. This is actually one thing we're kind-of on the road to doing right in standards now--we're moving away from barely-strong-enough crypto and toward crypto that's going to be strong for a long time to come. Protocol attacks are harder, because while we can choose a key length, modulus size, or sponge capacity to support a known security level, it's not so easy to make sure that a protocol doesn't have some kind of attack in it. I think we've learned a lot about what can go wrong with protocols, and we can design them to be more ironclad than in the past, but we still can't guarantee we won't need to upgrade. But I think this is an area that would be interesting to explore--what would need to happen in order to get more ironclad protocols? A couple random thoughts: a. Layering secure protocols on top of one another might provide some redundancy, so that a flaw in one didn't undermine the security of the whole system. b. 
There are some principles we can apply that will make protocols harder to attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is allowed to change its execution path or timing based on the key or plaintext, every message includes a sequence number and the hash of the previous message, etc. This won't eliminate protocol attacks, but will make them less common. c. We could try to treat at least some kinds of protocols more like crypto algorithms, and expect to have them widely vetted before use. What else? ... Perhaps the shortest limit on the lifetime of an embedded system is the security protocol, and not the hardware. If so, how do we as a society deal with this limit? What we really need is some way to enforce protocol upgrades over time. Ideally, there would be some notion that if you support version X of the protocol, this meant that you would not support any version lower than, say, X-2. But I'm not sure how practical that is. Cheers -- Bill --John ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
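Principle (b) above can be sketched in one small record format. This is a hedged toy, not a real protocol: the "cipher" is a SHAKE-based keystream standing in for a proper AEAD-grade cipher, and the field sizes are invented. It shows encrypt-then-MAC (the tag covers the ciphertext, and is checked before any decryption), an explicit sequence number, and the hash of the previous message chained into the authenticated header.

```python
import hashlib, hmac

def keystream(key: bytes, seq: int, nbytes: int) -> bytes:
    # Toy keystream; a real design would use a vetted cipher here.
    return hashlib.shake_256(key + seq.to_bytes(8, "big")).digest(nbytes)

def seal(enc_key, mac_key, seq, prev_hash, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, seq, len(plaintext))))
    header = seq.to_bytes(8, "big") + prev_hash      # 8-byte seq + 32-byte chain
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return header + ct + tag

def open_(enc_key, mac_key, expected_seq, expected_prev, record):
    header, ct, tag = record[:40], record[40:-32], record[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, header + ct, hashlib.sha256).digest()):
        raise ValueError("bad MAC")                  # reject before touching ciphertext
    if header != expected_seq.to_bytes(8, "big") + expected_prev:
        raise ValueError("replayed or reordered record")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, expected_seq, len(ct))))
```

Because the MAC is verified first, a tampered record produces one uniform failure and no decryption ever runs, which is what kills reaction (padding-oracle-style) attacks; the sequence number and chained hash make replay and reordering detectable.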
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On 10/6/13 at 8:26 AM, crypto@gmail.com (John Kelsey) wrote: If we can't select ciphersuites that we are sure we will always be comfortable with (for at least some foreseeable lifetime) then we urgently need the ability to *stop* using them at some point. The examples of MD5 and RC4 make that pretty clear. Ceasing to use one particular encryption algorithm in something like SSL/TLS should be the easiest case--we don't have to worry about old signatures/certificates using the outdated algorithm or anything. And yet we can't reliably do even that. We seriously need to consider what the design lifespan of our crypto suites is in real life. That data should be communicated to hardware and software designers so they know what kind of update schedule needs to be supported. Users of the resulting systems need to know that the crypto standards have a limited life so they can include updates in their installation planning. Cheers - Bill --- Bill Frantz| If the site is supported by | Periwinkle (408)356-8506 | ads, you are the product.| 16345 Englewood Ave www.pwpconsult.com | | Los Gatos, CA 95032 ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] AES-256- More NIST-y? paranoia
On Oct 4, 2013, at 12:20 PM, Ray Dillinger wrote: So, it seems that instead of AES256(key) the cipher in practice should be AES256(SHA256(key)). Is it not the case that (assuming SHA256 is not broken) this defines a cipher effectively immune to the related-key attack? So you're essentially saying that AES would be stronger if it had a different key schedule? At 08:59 AM 10/5/2013, Jerry Leichter wrote: - If this is the primitive black box that does a single block encryption, you've about doubled the cost and you've got this messy combined thing you probably won't want to call a primitive. You've doubled the cost of key scheduling, but usually that's more like one-time than per-packet. If the hash is complex, you might have also doubled the cost of silicon for embedded apps, which is more of a problem. - If you say well, I'll take the overall key and replace it by its hash, you're defining a (probably good) protocol. But once you're defining a protocol, you might as well just specify random keys and forget about the hash. I'd expect that the point of related-key attacks is to find weaknesses in key scheduling that are exposed by deliberately NOT using random keys when the protocol's authors wanted you to use them. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
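The AES256(SHA256(key)) re-keying step under discussion is tiny in code. A minimal sketch (any real AES implementation would then consume the derived 32-byte key; none is shown here, to keep the example self-contained):

```python
import hashlib

def derive_cipher_key(user_key: bytes) -> bytes:
    """Hash the supplied key first, so structured or related user keys reach
    the AES key schedule looking like independent random 256-bit keys."""
    return hashlib.sha256(user_key).digest()

# Two closely related user keys yield unrelated AES keys, which is the
# point of the construction against related-key attacks.
k1 = derive_cipher_key(b"secret-key")
k2 = derive_cipher_key(b"secret-kez")   # one byte different
```

As Leichter notes, this amounts to defining a (probably good) protocol around AES rather than strengthening AES itself: the related-key structure never reaches the key schedule, but the cost is an extra hash per re-key.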
Re: [Cryptography] AES-256- More NIST-y? paranoia
On 7 Oct 2013, at 17:45, Arnold Reinhold a...@me.com wrote: other cipher algorithms are unlikely to catch up in performance in the foreseeable future You should take a look at this algorithm: http://eprint.iacr.org/2013/551.pdf - The block size is variable and unknown to an attacker. - The size of the key has no limit and is unknown to an attacker. - The key size does not affect the algorithm's speed (using a 256 bit key is the same as using a 1024 bit key). - The algorithm is much faster than the average cryptographic function. Experimental tests showed 600 MB/s - 4 cycles/byte on an Intel Core 2 Duo P8600 2.40GHz and 1.2 GB/s - 2 cycles/byte on an Intel i5-3210M 2.50GHz. Both CPUs had only 2 cores. ___ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography
Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?
On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote: If we can't select ciphersuites that we are sure we will always be comfortable with (for at least some foreseeable lifetime), then we urgently need the ability to *stop* using them at some point. The examples of MD5 and RC4 make that pretty clear. Ceasing to use one particular encryption algorithm in something like SSL/TLS should be the easiest case - we don't have to worry about old signatures/certificates using the outdated algorithm or anything. And yet we can't reliably do even that. We seriously need to consider what the design lifespan of our crypto suites is in real life. That data should be communicated to hardware and software designers so they know what kind of update schedule needs to be supported. Users of the resulting systems need to know that the crypto standards have a limited life, so they can include updates in their installation planning.

This would make a great April Fool's RFC, to go along with the classic evil bit. :-(

There are embedded systems that are impractical to update and have expected lifetimes measured in decades. RFID chips include cryptography, are completely un-updatable, and have no real limit on their lifetimes - the percentage of the population represented by any given vintage of chips will drop continuously, but it will never go to zero. We are rapidly entering a world in which devices with similar characteristics will, in sheer numbers, dominate the ecosystem - see the remote-controllable Philips Hue light bulbs (http://www.amazon.com/dp/B00BSN8DLG/) as an early example. (Oh, and there's been an attack against them: http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/. The response from Philips to that article says: In developing Hue we have used industry standard encryption and authentication techniques ... [O]ur main advice to customers is that they take steps to ensure they are secured from malicious attacks at a network level.)

Even in the PC world, where updates are a part of life, makers eventually stop producing them for older products. Windows XP, as of about 10 months ago, was running on a quarter of all PCs - many hundreds of millions of machines. About 9 months from now, Microsoft will ship its final security update for XP. Many perfectly good PCs will stay on XP forever, because even if there were the will and staff to upgrade, recent versions of Windows won't run on their hardware. In the Mac world, hardware in general tends to live longer, and there's plenty of hardware still running that can't run recent OS's. Apple pretty much only does patches for at most 3 versions of the OS (with a new version roughly every year). The Linux world isn't really much different, except that it's less likely to drop support for old hardware; and because it tends to be used by a more techie audience who are more likely to upgrade, the percentages probably look better, at least for PCs. (But there are antique versions of Linux hidden away in all kinds of appliances that no one ever upgrades.)

I'm afraid the reality is that we have to design for a world in which some devices will be running very old versions of code, speaking only very old versions of protocols, pretty much forever. In such a world, newer devices either need to shield their older brethren from the sad realities or relegate them to low-risk activities by refusing to engage in high-risk transactions with them. It's by no means clear how one would do this, but there really aren't any other realistic alternatives. -- Jerry
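Refusing to negotiate with peers that speak only old protocol versions, as suggested above, is expressible in modern TLS stacks. A minimal sketch using Python's standard `ssl` module; the choice of TLS 1.2 as the floor is an illustrative policy, not one from the thread:

```python
import ssl

# Build a client context that refuses legacy protocol versions outright,
# rather than negotiating down to whatever an old peer still speaks.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 / TLS 1.0 / TLS 1.1 peers

print(context.minimum_version.name)  # TLSv1_2
```

A handshake with a device stuck on an older protocol version then fails immediately instead of silently degrading the connection's security.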
Re: [Cryptography] Elliptic curve question
On Mon, 7 Oct 2013 10:54:50 +0200, Lay András and...@lay.hu wrote: I made a simple elliptic curve utility in command line PHP: https://github.com/LaySoft/ecc_phgp I know that in RSA, signing is the inverse operation of encryption, so two different keypairs are needed for encryption and signing. In elliptic curve cryptography, signing is not the inverse operation of encryption, so my application uses the same keypair for encryption and signing. Is this correct?

The very general answer: if it's not a big problem, it's always better to separate encryption and signing keys, because you never know whether there are as-yet-unknown interactions when you use the same key material in different use cases. You can state this even more generally: it's always better to use one key for one use case. It doesn't hurt, and it may prevent security issues. -- Hanno Böck http://hboeck.de/ mail/jabber: ha...@hboeck.de GPG: BBB51E42
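The one-key-per-use-case advice above can be followed even when only a single master secret exists, by deriving independent subkeys under distinct labels. A minimal sketch using HMAC-SHA256 as the derivation step; the labels and function name are illustrative assumptions, not from the thread:

```python
import hashlib
import hmac

def derive_subkey(master_secret: bytes, usage_label: bytes) -> bytes:
    # Domain-separated derivation: distinct labels yield independent
    # subkeys, so signing and encryption never share key material.
    return hmac.new(master_secret, usage_label, hashlib.sha256).digest()

master = b"example master secret (use 32+ random bytes in practice)"
enc_key = derive_subkey(master, b"encryption")
sig_key = derive_subkey(master, b"signing")

print(enc_key != sig_key)  # True: one key per use case
```

Compromise or cryptanalysis of one subkey then tells an attacker nothing about the other, which is exactly the "unknown interactions" risk the answer warns about.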
[Cryptography] Always a weakest link
The article is about security in the large, not cryptography specifically, but http://www.eweek.com/security/enterprises-apply-wrong-policies-when-blocking-cloud-sites-says-study.html points out that many companies think they are increasing their security by blocking access to sites they consider risky - only to have their users migrate to less well-known sites doing the same thing - and those less well-known sites are often considerably riskier.

My favorite quote: One customer found a user who sent out a million tweets in a day, but in reality, its compromised systems were exporting data 140 characters at a time via the tweets. -- Jerry
[Cryptography] RSA-210 factored
Hi guys, thought this might (still) be of some interest: http://www.mersenneforum.org/showpost.php?p=354259 rtf