On 5/14/06, Victor Duchovni <[EMAIL PROTECTED]> wrote:
> Security is fragile. Deviating from well-understood primitives may be good research, but is not good engineering. Especially fragile are:
Point taken. This is not for a production system; it's a research thing.
> TLS (available via OpenSSL) provides integrity and authentication; any reason to re-invent the wheel? It took multiple iterations of design improvements to get TLS right, even though it was designed by experts.
IIUC, protocol design _should_ be easy: you perform some finite-state analysis and verify that, assuming your primitives are ideal, no sequence of protocol-level operations breaks it. There's a paper from the 7th USENIX Security Symposium in which the authors built SSL 3.0 back up piece by piece to find out what attack each datum was meant to prevent. They used Murphi, a model checker that has also been used for VLSI verification (i.e., it copes with very large state spaces). AT&T also published a tool for this, called SPIN.

The approach is effective if the set of attacks you're protecting against is finite and enumerable, and for protocol design I think it is: reflection, replay, reorder, suppress, inject, etc. I wouldn't consider fielding a protocol design without sanity-checking it with such a tool; a toy sketch of the idea follows after my sig.

Was there an attack against TLS that got past FSA, or did the experts not know about FSA?

--
"Curiosity killed the cat, but for a while I was a suspect" -- Steven Wright
Security Guru for Hire
http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484
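To make that concrete, here is a minimal Python sketch of the kind of exhaustive state search these checkers perform, run against a made-up two-message nonce handshake (A sends a nonce, B returns a keyed MAC of it, both modeled as opaque terms) with an intruder who records traffic and can replay any of it. The protocol, the state encoding, and all the names in it (explore, nonce2, and so on) are mine for illustration; this is not code from the SSL 3.0 paper, nor from Murphi or SPIN.

    from collections import deque

    def explore(nonce2):
        """BFS over all interleavings of session 2; returns an attack
        trace if A can be made to accept without B ever answering."""
        n1 = 0  # session 1's challenge; session 1 already ran honestly
        # Everything the intruder recorded from session 1:
        knowledge = frozenset([("nonce", n1), ("mac", n1)])
        # State: (challenge delivered to B?, B answered?, A accepted?, trace)
        start = (False, False, False, ())
        queue, seen = deque([start]), {start[:3]}
        while queue:
            delivered, answered, accepted, trace = queue.popleft()
            if accepted and not answered:
                return trace  # A accepted a reply B never sent: replay
            moves = []
            if not delivered:  # intruder forwards A's challenge to B
                moves.append((True, answered, accepted,
                              trace + ("deliver nonce2 to B",)))
            if delivered and not answered:  # B answers honestly
                moves.append((True, True, accepted,
                              trace + ("B sends mac(nonce2)",)))
            # The intruder may inject any term it knows; B's honest
            # reply joins the pool once it has actually been sent.
            pool = knowledge | ({("mac", nonce2)} if answered else set())
            for term in pool:
                if term == ("mac", nonce2) and not accepted:  # A's check
                    moves.append((delivered, answered, True,
                                  trace + ("inject %r, A accepts" % (term,),)))
            for m in moves:
                if m[:3] not in seen:  # dedupe states; guarantees termination
                    seen.add(m[:3])
                    queue.append(m)
        return None  # no reachable state violates the property

    print("fresh nonce  (nonce2=1):", explore(1))  # None: no attack
    print("reused nonce (nonce2=0):", explore(0))  # replay trace found

With a fresh nonce the search exhausts all reachable states and finds nothing; with a reused nonce it returns the replay trace. Real checkers like Murphi and SPIN layer a proper modeling language, symmetry reduction, and state hashing over the same basic reachability search.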