On Sat, 2008-01-12 at 23:00 +0800, Drexx Laggui [personal] wrote:
> I agree with your opinion. Most of us do not have threats from
> organizations with massive computing resources, so /dev/urandom is
> good enough. /dev/random is stronger, but it just takes forever!
> /dev/urandom becomes practical when you realize that you have
> project deadlines to meet, and a business to support.
Well, as you noticed in your test, /dev/urandom is still expensive. badblocks probably just uses a cheaper source of random numbers.

> I had taken for granted the thought that everybody encrypts their
> HDDs,

I just recently switched to using an encrypted /home (and external/backup drives). Not yet encrypting / or swap, though. I'll look into those, maybe when I upgrade to Hardy Heron.

> hence /dev/urandom and not /dev/zero.
> Anyway, if the HDD was over-written only with zeros, then you
> encrypt the HDD, one could see with a hex editor where the
> filesystem started, and then theoretically try to reverse-engineer
> the encryption algorithm. Remember, the bigger the fs, it's
> theoretically easier to decrypt it.

OK, I get it; I was confused. Yes, a known-plaintext attack with a large amount of known plaintext does theoretically make things easier (well, I don't know the math, I've only read Schneier and come away with some basic conclusions :-).

In C, rand() can generate 512*256001 bytes in 0.02s (on my laptop). Adding some basic memory handling (you need to copy the generated integers into a memory buffer to write to the file, but without doing any I/O yet) brings the time to either 2 seconds (using the whole int returned by rand()) or 8 seconds (using just the lowest byte, so that the logic is simpler). At a guess, that's what badblocks is doing (using rand()). Or if it isn't, then you can do better than badblocks.

tiger

_________________________________________________
Philippine Linux Users' Group (PLUG) Mailing List
[email protected] (#PLUG @ irc.free.net.ph)
Read the Guidelines: http://linux.org.ph/lists
Searchable Archives: http://archives.free.net.ph

