I'm a huge fan of Rust, and plan on using it some around GNUnet, but... It's important to remember that Rust remains immature because they're attempting to do hard stuff well. In particular, they have not yet settled on the "Rust way" to handle key material: https://github.com/rust-lang/rfcs/issues/766
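For illustration only (the SecretKey type below is hypothetical, not a settled API): the obvious idiom today is a wrapper that wipes itself on drop, but moves can leave stale copies behind and nothing in the language guarantees the wipe survives optimization.

    use std::ptr;

    /// Hypothetical wrapper that wipes a fixed-size key when dropped.
    /// Volatile writes discourage the compiler from eliding the wipe,
    /// but moves of SecretKey may still leave copies on the stack.
    struct SecretKey([u8; 32]);

    impl Drop for SecretKey {
        fn drop(&mut self) {
            for byte in self.0.iter_mut() {
                unsafe { ptr::write_volatile(byte, 0) };
            }
        }
    }

    fn main() {
        let key = SecretKey([0x42; 32]);
        // pass &key.0 to a crypto routine here; the buffer is wiped
        // when `key` falls out of scope
        assert_eq!(key.0.len(), 32);
    }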
Rust's libsodium bindings automatically call sodium_memzero https://github.com/dnaq/sodiumoxide but do not use libsodium's allocators. Also, Rust has not yet stabilized allocators https://github.com/rust-lang/rfcs/issues/538 so projects trying to do that remain messy. Example: https://github.com/seb-m/tars

It's tricky to audit Rust code that employs cryptography until this gets sorted out. At the same time, one should not shy away from writing Rust code that employs cryptography, but you should expect to interact with the Rust language community rather closely, and the Rust code is going to require maintenance. It's more work, not less.

On Thu, 2015-07-09 at 22:49 +0800, Andrew Cann wrote:
> * side channel attacks
> Some things, like the number of CPU cycles it takes to execute this
> decrypt() function, could in principle be modeled inside a programming
> language. I don't know if any of the dependently typed assembly
> languages let you do this.

We're not implementing new crypto primitives in GNUnet, but I'll respond anyway:

In principle maybe, but in practice the languages I know about use LLVM, including Rust, and LLVM has no plans to support this:
https://moderncrypto.org/mail-archive/curves/2015/000466.html
https://moderncrypto.org/mail-archive/curves/2015/000470.html
Actually that whole thread is interesting.

On Rust specifically, see slides 116-117 of this talk:
http://files.meetup.com/10495542/2014-12-18%20-%20Rust%20Cryptography.pdf

Also, there is a project to produce constant-time code using Rust by avoiding LLVM, but it's quite immature. At present, crypto primitives are commonly written in assembler for these reasons! (A small sketch of the kind of code at stake appears at the end of this mail.)

> * scalability/performance
> What if you could guarantee that your service will process any message
> of n bytes in O(n log(n)) time and memory. Or that a network of n
> available peers connected in such-and-such a topology can route any
> message in less than m hops. There are programming languages that could
> let you express these kinds of constraints and check them at compile
> time.
>
> * disclosure via protocols, metadata leakage
> I'm not sure exactly what you have in mind, but if you want to prevent
> leakage there are type theories that let you enforce things like "the
> value in this variable at time t cannot affect the output of this
> function at any future time".

This is like when people talk about doing the proof of the Four-Color Theorem or the Classification of Finite Simple Groups using computer-assisted theorem provers. Any real analysis of scalability or metadata leakage is far beyond where foreseeable computer-assisted provers help much.
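As promised above, a minimal sketch of the sort of "constant-time" comparison at stake; ct_eq is my own illustration, not code from any project linked here, and the point is precisely that neither Rust nor LLVM promises to preserve its timing behaviour:

    /// Compare two byte slices in time that depends only on their
    /// length, not on where they first differ.  This property is a
    /// convention, not a guarantee: the optimizer remains free to
    /// introduce early exits or branches.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff = 0u8;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }

    fn main() {
        assert!(ct_eq(b"secret tag", b"secret tag"));
        assert!(!ct_eq(b"secret tag", b"secret tab"));
    }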
Jeff
