On 07/25/2013 01:37 AM, carlo von lynX wrote:
>> encryption via automatic key discovery/validation, (2) enforced StartTLS
>
> if the key isn't the address, there is no safe way to perform key validation.
> x.509 is a failure, you can't trust it.

Again, a straw man argument. Yes, x.509 is a failure, but there are other
ways to perform key validation. I gave you two: DANE and Nicknym
(https://leap.se/nicknym).
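As a rough illustration (a sketch, not a spec), DNS-based key discovery
might look something like this in python. The OPENPGPKEY-style record
naming, the dnspython calls, and the function names are assumptions for
the example only, and a real client must also require DNSSEC validation
of the answer before trusting it:

    # Sketch of DANE-style key discovery for a mail address
    # (assumes dnspython >= 2.0 is available).
    import hashlib
    import dns.resolver

    def openpgpkey_owner(address):
        """DNS owner name for an address's OPENPGPKEY-style record."""
        localpart, domain = address.split("@", 1)
        # 28-octet truncated SHA-256 of the local part, as 56 hex chars.
        digest = hashlib.sha256(localpart.encode("utf-8")).hexdigest()[:56]
        return "%s._openpgpkey.%s" % (digest, domain)

    def discover_keys(address):
        """Return the published key(s) for an address (base64 text), or []."""
        try:
            answer = dns.resolver.resolve(openpgpkey_owner(address), "OPENPGPKEY")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [record.to_text() for record in answer]

    def validate_key(address, candidate_key):
        """Accept candidate_key (base64 text) only if DNS publishes the same one."""
        return candidate_key in discover_keys(address)

The point is simply that the lookup is automatic and keyed off the
address itself, so no x.509 CA is involved.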
> even if you starttls, you are still making direct links from sending to
> receiving server. there are two bugs here: (1) the path and meta data is
> exposed, (2) servers get to have an important role which is bad because
> servers are prone to prism contracts.

I am beginning to suspect you are just trolling me now. Obviously,
StartTLS alone does not solve these problems; that is why I said the
solution requires opportunistic encryption of content and meta-data
resistant routing.

>> (3) meta-data resistant routing. There are a couple good proposals on
>> the table for #1, postfix already supports #2 via DANE, and there are
>> four good ideas for #3 (auto-alias-pairs, onion-routing-headers,
>> third-party-dropbox, mixmaster-with-signatures [1]).
>
> as long as it is backwards compatible to plain old unencrypted email
> we are unnecessarily risking downgrade attacks. also we are exposing
> our new safe mail system to st00pid spam problems of the past.

No and no. There are lots of ways to prevent downgrade attacks and lots
of ways to prevent spam.

> well, email is getting replaced today - and i
> don't want to be on the side of the ones getting replaced.

Email usage is a lower percentage of messages, but absolute email
traffic is still growing. Email is not going away for a very long time.

>> [1] details on ideas for meta-data resistant routing in a federated
>> client/server architecture
>
> fine, but the federated client/server architecture is unnecessary and
> servers are always prone to getting tapped. if you make servers
> sufficiently dumb then they're essentially just some more nodes in
> the network and there is no technical reason to distinguish clients
> and servers much.

Another way of saying this is that successful peer-to-peer networks
follow a power-law distribution and effectively look much like a
federated architecture, except with no one responsible for keeping the
lights on and with really poor support for mobile devices and devices
with intermittent network access. So, yes, my goal is federated
client/server where the servers are dumb. By doing this we gain a lot,
including organizations responsible for maintaining the health of the
network, data availability and backup for users, high functionality on
devices with bad networks or limited battery, and (most importantly)
the potential for more human-friendly secure identity.

I suspect you will continue to claim, as you have many times in the
past, that federated models are inherently insecure. There is simply no
basis for this claim, and the more you make it the less credible you
seem. So, please stop making this claim.

We both share the same long-term goal, and we both think that
eventually peer-to-peer architectures will get us there. We disagree on
the schedule, in that I think federated approaches are better for the
immediate future and you think peer-to-peer approaches are the only
way. Fine, reasonable people can disagree, but there are real
trade-offs to each approach [1], and by refusing to acknowledge the
trade-offs you are making, you do a disservice to the cause we share.

-elijah

[1] https://leap.se/en/infosec
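P.S. Since the quoted text mentions that postfix supports enforced
StartTLS via DANE, here is roughly what that looks like in main.cf.
This is a minimal sketch, assuming postfix 2.11 (or a recent snapshot)
and a local DNSSEC-validating resolver:

    # Outbound SMTP: do DNSSEC lookups and honor TLSA records, so TLS
    # becomes mandatory and authenticated for any destination domain
    # whose MX publishes them.
    smtp_dns_support_level = dnssec
    smtp_tls_security_level = dane

With these settings, delivery to a domain that publishes signed TLSA
records for its MX will defer rather than fall back to cleartext, which
is what makes the StartTLS "enforced" rather than merely opportunistic.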
