On Tuesday, May 9 2006, 11:12, Orr Dunkelman wrote:
> According to what they claim, the source code was undocumented, and they
> had to work hard to make it into a "readable" pseudo-code.
>
> It reminds me a time I had to "reverse engineer" a circuit diagram I got.
> Took me hours just to understand what the machine does (and I had the
> circuit diagrams).
Looks like every time my favorite mailing list or my personal address picks up another strain of MyDoom, Beagle, or any other MS pandemic, and I spend my free time and curiosity digging through mail headers, or through some poorly written, _undocumented_ code which is always a copy of another months-old once-0day IE exploit, minus the comments, with a new payload slapped in, I am actually reverse-engineering... Who would have thought.

Clearly and objectively, I am an uneducated newbie when it comes to kernels and security, and I may miss some points here and there. I didn't read all the sources the authors reference. However, I've read the paper and didn't get my revelation. The "Why reverse-engineering the LRNG is not easy" part left me wondering about the decisions the authors made. I cannot confirm the claim of hours of rebuilding and installation for every small kernel change, nor the claim about undocumented, unreadable code.

The short excursion into the RNG internals was highly educational; however, almost all the practical attacks on the algorithm revolve around a security classic - eventually running out of entropy. I agree completely with the claim that feeding the entropy pool off the system state itself is foolish, at least theoretically, but the authors completely ignore the fact that anyone serious enough will feed the pool off hardware generator(s) anyway, and that there are projects which make this feature easy to set up, for example: http://www.av8n.com/turbid/

For the less security-aware, the current kernel supports hardware generators on the motherboard, which is about as hard to get as running "make menuconfig" and enabling an option. (Well, maybe they missed it because they analyzed a kernel source snapshot from December 2004 - can anyone confirm?)

Apparently, the whole issue is not "the Linux PRNG is faulty" but "OSS is not so secure!".
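For the curious, here is a minimal sketch of what the above amounts to in practice. The entropy check works on any Linux box; the hardware-generator part assumes an on-board RNG and the rng-tools package (`rngd`), which the paper and this mail don't spell out - treat it as one illustrative setup, not the only one.

```shell
# Check how much entropy the kernel pool currently holds (in bits).
# A pool that keeps draining toward zero is the "running out of
# entropy" classic mentioned above.
cat /proc/sys/kernel/random/entropy_avail

# To feed the pool from an on-board hardware generator instead of
# system state alone (assumes the hardware exists):
#  1. Enable CONFIG_HW_RANDOM under "Character devices" in
#     `make menuconfig` - the "one option" referred to above.
#  2. Run the rng-tools daemon to pump /dev/hwrng into /dev/random:
#       rngd -r /dev/hwrng
```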
Isn't that the old "OSS is less secure because everyone can see the security hole" FUD, raising its head every so often? A bleak eleventh pirate copy of a copy of "Linux ate my data/hard drive/neighbor" on fresh steroids, only able to cause a stir among the ignorant? Is it because "A hole discovered in MS Doors" has about the same chance of making a newspaper hard-sell headline as "Rain expected in Haifa this Wednesday", but finding a dirty spot on some fresh player's clothes is such an exciting little game? Even if it's the same spot, over and over and over again?

What I can't figure out is this: just about any teenager is able to spot the security hole in your closed-source program, provided our average Joe made it through two months of reading software-cracking tutorials and another month of "exploiting for dummies". How that fact can provide a false sense of security to anyone is beyond my understanding.

--
Aggravated, Michael Vasiliev

"We must not put mistakes into programs because of sloppiness, we have to do it systematically and with care."
    -- Attributed to Edsger Wybe Dijkstra

--------------------------------------------------------------------------
Haifa Linux Club Mailing List (http://www.haifux.org)
To unsub send an empty message to [EMAIL PROTECTED]