Using Yarrow in /dev/random
I've been working on changing the implementation of /dev/random over to
the Yarrow-160a algorithm created by Bruce Schneier and John Kelsey.
We've been developing in parallel for Linux and NT so that the two
implementations match. Yarrow-160a is a variant of Yarrow-160 that came
out of discussions with John Kelsey; we've been in contact with him
throughout the development effort. In any case, this requires a hash
function (sha1) and a block cipher (3des).

We were going to do a straight replacement of /dev/random (it's nearly
finished), but in retrospect I realized I hadn't looked into the current
state of incorporating crypto into the kernel. If anyone has
suggestions, comments, or questions, please email. Also, does anyone
have any objections to incorporating a new /dev/random into the kernel?

pravir chandra.
Reliable Software Technologies
www.rstcorp.com
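p.s. for anyone who wants a feel for the algorithm, here's a rough
sketch in C of the generator's output step: counter mode under the
current key, plus the periodic "generator gate" rekey from the paper.
des3_encrypt() and the constants here are illustrative stand-ins, not
our actual implementation.

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_BYTES   8       /* 3des block size */
    #define KEY_BYTES     24      /* 3des key (three des keys) */
    #define GATE_INTERVAL 10      /* Pg: outputs between generator gates */

    /* hypothetical cipher primitive: out = E_K(in) */
    void des3_encrypt(const uint8_t key[KEY_BYTES],
                      const uint8_t in[BLOCK_BYTES],
                      uint8_t out[BLOCK_BYTES]);

    struct yarrow_gen {
        uint8_t key[KEY_BYTES];      /* current key K */
        uint8_t ctr[BLOCK_BYTES];    /* counter C */
        unsigned since_gate;         /* outputs since the last gate */
    };

    static void ctr_inc(uint8_t c[BLOCK_BYTES])
    {
        int i;
        for (i = BLOCK_BYTES - 1; i >= 0; i--)
            if (++c[i])
                break;
    }

    /* one output block: out = E_K(C), then C = C + 1 */
    void yarrow_output(struct yarrow_gen *g, uint8_t out[BLOCK_BYTES])
    {
        des3_encrypt(g->key, g->ctr, out);
        ctr_inc(g->ctr);

        /* generator gate: every GATE_INTERVAL outputs, feed generator
         * output back in as the new key, so a captured key cannot be
         * used to recover output produced before the gate */
        if (++g->since_gate >= GATE_INTERVAL) {
            uint8_t k[KEY_BYTES];
            int i;
            for (i = 0; i < KEY_BYTES; i += BLOCK_BYTES) {
                des3_encrypt(g->key, g->ctr, k + i);
                ctr_inc(g->ctr);
            }
            memcpy(g->key, k, KEY_BYTES);
            g->since_gate = 0;
        }
    }

the point of the gate is that capturing the key at some instant doesn't
let an attacker reconstruct output from before the most recent rekey.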
Re: Using Yarrow in /dev/random
> Why? What's wrong with the current implementation? And more important
> still: How well-known is Yarrow-160A? I cannot find it in my copy of
> [Schneier96], so it is probably not older than four years.

much of yarrow-160a has been specified by kelsey himself in discussions
with people at Counterpane and Reliable Software Technologies.

> _Please_ use the crypto api. It provides for a cipher and a digest
> (hash) api. sha1 is implemented and functional (AFAICS), but 3des will
> have to be converted to use the new api. That is not hard. If it does
> not fit your needs, try convincing astor to make changes. It's really
> time that the crypto api gets used by more than loopback crypto, esp.
> now that it is distributed on ftp.*.kernel.org.

we had every intention of employing the crypto api from the
international kernel patch, but is the int-kernel patch going to be
incorporated into the main kernel build files, or will it remain a
patch? if it is always going to be a patch, then random number
generation that relies on code that isn't there seems fragile. in any
case, we've designed this "yarrow-160a" generator to be completely
independent of the hash and cipher used, and we also plan on having
full config options for choosing your weapon. 3des and sha1 were only
from the specs in the paper, and our reference implementation uses
those. the thinking was that since des is quite an old algorithm, the
chances of it being seriously broken are slim to none. in my opinion,
the actual cipher or hash doesn't make too much of a difference, so
long as the cipher is strong.

> Do you mean /dev/random or /dev/urandom?

well, the yarrow accumulator fits quite well with the current purpose
of /dev/random, and the accumulator/generator combo will do nicely for
/dev/urandom. i think overall this will improve the cryptographic
strength of the numbers you get from /dev/urandom, and /dev/random will
essentially become a portal for entropy.

pravir chandra.
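p.s. to make "choosing your weapon" concrete: the generator only ever
talks to the cipher and hash through a small ops table along these
lines (the names are illustrative, not our actual code), so wiring it
up to the crypto api, or to anything else, is just a matter of filling
in the table at config time.

    #include <stddef.h>
    #include <stdint.h>

    struct yarrow_cipher_ops {
        size_t key_bytes;
        size_t block_bytes;
        void (*encrypt)(const uint8_t *key, const uint8_t *in,
                        uint8_t *out);
    };

    struct yarrow_hash_ops {
        size_t digest_bytes;
        void (*init)(void *ctx);
        void (*update)(void *ctx, const uint8_t *data, size_t len);
        void (*final)(void *ctx, uint8_t *digest);
    };

    /* selected at config time; could just as well be backed by the
     * crypto api's cipher/digest interfaces when they are present */
    struct yarrow_ops {
        const struct yarrow_cipher_ops *cipher;
        const struct yarrow_hash_ops   *hash;
    };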
Re: Using Yarrow in /dev/random
> I'm not a big fan of Yarrow, since it (in my opinion) places too much
> faith in the crypto algorithms. It uses a pathetically small entropy
> pool, and assumes that the hash function will do the rest. Which is
> fine, but that makes it a pseudo-RNG, or a crypto-RNG, and not really
> an entropy collector. That's not necessarily a bad thing, but IMO that
> should live outside the kernel. There's no real point putting
> something like Yarrow in kernel space. Better that it collects entropy
> from a true entropy collector that's in-kernel (like the current
> /dev/random), and that it does the rest of the processing in user
> space. After all, the main reason why /dev/random is in the kernel is
> precisely because that's the most efficient place to collect as much
> entropy as possible.

i agree that the yarrow generator places some faith in the crypto
cipher and that the accumulator relies on a hash, but the current
/dev/random places faith in a crc and urandom relies on a hash. i also
agree that the entropy pools are small, but the nature of the hash
preserves the amount of entropy that has been used to create the state
of the pools. basically, if the pool size is 160 bits (the hash
output), its state can still be built from more than 160 bits of
entropy; adding entropy beyond that point increases the unguessability
of the state against conventional attacks, while brute forcing the
state remains a 2^160 effort. regardless of any of this, putting yarrow
in user space is problematic from an entropy collection standpoint. the
yarrow accumulator needs multiple entropy sources in order to meet
certain criteria in the algorithm for pool stirring and reseeding, and
reading the mux'd data from /dev/random is too late for yarrow to
discern where each bit of entropy came from.

> If you look at the current 2.4 /dev/random code, you'll see that there
> already are two entropy pools in use, which is designed to be able to
> do something like the "catastrophic reseed" feature which Bruce
> Schneier is so fond of. What's currently there isn't perfect, in that
> there's still a bit too much leakage between the two pools, but I
> think that approach is much better than the pure Yarrow approach.

i took a brief look at the new code. if i understand correctly, the
general consensus is to have /dev/random be essentially a gateway for
entropy and /dev/urandom be a cryptographically strong prng. i think
this fits with yarrow, in that the accumulator can provide entropy and
the generator can provide strong pseudorandom numbers. our plan was to
keep the accumulator and one generator in the kernel, and to provide a
library for creating n generators in user space to help with
simulation.

pravir chandra.
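p.s. here's a rough sketch of the per-source bookkeeping i'm referring
to, and of why the accumulator can't sit behind the /dev/random mux.
the two-pool layout and reseed criteria follow the suggestions in the
yarrow-160 paper; the sha1_* interface and the constants are
illustrative, not our actual code.

    #include <stddef.h>
    #include <stdint.h>

    #define NSOURCES        8
    #define FAST_THRESHOLD  100   /* bits from one source -> fast reseed */
    #define SLOW_THRESHOLD  160   /* bits from two sources -> slow reseed */

    /* hypothetical running-hash primitive */
    void sha1_update(void *ctx, const uint8_t *data, size_t len);

    struct pool {
        void *sha1_ctx;                  /* running sha1 over all samples */
        unsigned src_entropy[NSOURCES];  /* entropy credited, per source */
    };

    static struct pool fast_pool, slow_pool;
    static uint8_t flip[NSOURCES];       /* each source alternates pools */

    /*
     * every sample arrives tagged with its source and an entropy
     * estimate; this attribution is exactly what is lost once samples
     * are mux'd into a single /dev/random stream, which is why the
     * accumulator wants to sit where the events actually happen.
     */
    void yarrow_add_sample(unsigned src, const uint8_t *data, size_t len,
                           unsigned entropy_bits)
    {
        struct pool *p = (flip[src]++ & 1) ? &slow_pool : &fast_pool;

        sha1_update(p->sha1_ctx, data, len);
        p->src_entropy[src] += entropy_bits;

        if (p == &fast_pool && p->src_entropy[src] >= FAST_THRESHOLD) {
            /* fast reseed: rekey the generator from the fast pool */
        } else if (p == &slow_pool) {
            unsigned i, over = 0;
            for (i = 0; i < NSOURCES; i++)
                if (p->src_entropy[i] >= SLOW_THRESHOLD)
                    over++;
            if (over >= 2) {
                /* slow ("catastrophic") reseed: fold both pools into
                 * a brand new generator key, reset entropy counters */
            }
        }
    }

the point is that yarrow_add_sample() needs the source id on every
sample; a single mux'd byte stream read back out of /dev/random carries
no such attribution.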