On December 5, 2003 07:17 am, Dr. Stephen Henson wrote:
> I had a few, distinct, ideas on that myself and I heard some of Geoff's
> too.
>
> The sieve part of OpenSSL's prime generation at least is not very
> efficient and could be enhanced in a number of ways. Bet someone's
> patented some though :-(

Well, I think the important question is the one you already raised in an 
earlier post - what are the keys being generated *for*? This affects how 
quickly you can generate them: apart from some calculation overhead, the 
majority of the time is spent finding appropriately random primes, so any 
genuine improvement in key-generation speed will probably depend most 
heavily on what requirements exist for those primes relative to one 
another.

E.g. it is obviously unsatisfactory in the general case to generate a 
series of candidate primes (for building a sequence of keys) using a 
single sieve. But in particular cases, e.g. short runs of ephemeral 
key-exchange keypairs, you could amortise the cost of a *good* sieve by 
doing it over an arithmetic sequence of arbitrary/random step size and 
using that to generate the primes for multiple keys.
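To make the amortisation concrete, here is a sketch in Python rather than OpenSSL's bignum C, purely for brevity - the sieve depth, step size, and starting point below are arbitrary illustrative choices, not anything OpenSSL actually does. The point is that each small prime p strikes out exactly one residue class of indices in the progression, so the per-prime work is a single modular inverse no matter how many candidates the batch holds:

```python
import math
import random

def small_primes(limit):
    """Primes up to `limit` by a classic Eratosthenes sieve."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = b"\x00" * len(range(p * p, limit + 1, p))
    return [p for p in range(2, limit + 1) if flags[p]]

def sieve_progression(n0, d, count, primes):
    """Sieve the progression n0, n0+d, ..., n0+(count-1)*d.

    For each sieve prime p not dividing d, the terms divisible by p sit
    in one residue class of indices, i = -n0/d (mod p), so a single
    modular inverse per prime marks them all - that cost is amortised
    over the whole batch. Returns the surviving candidate values."""
    assert math.gcd(n0, d) == 1
    alive = bytearray([1]) * count
    for p in primes:
        if d % p == 0:
            continue  # p | d and gcd(n0, d) = 1, so p never divides a term
        i = (-n0 * pow(d, -1, p)) % p
        while i < count:
            if n0 + i * d != p:  # don't strike out the prime p itself
                alive[i] = 0
            i += p
    return [n0 + i * d for i in range(count) if alive[i]]

def is_probable_prime(n, rounds=20):
    """Toy Miller-Rabin; real code would use BN_is_prime_fasttest()."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# One sieve pass yields a pool of candidates; only the survivors ever
# pay for the expensive probabilistic primality tests.
candidates = sieve_progression(1000003, 210, 500, small_primes(200))
primes = [c for c in candidates if is_probable_prime(c)]
```

A step divisible by 2 (here 210 = 2*3*5*7, with an odd start) keeps every term odd for free, which is exactly the kind of tuning knob the real thing would need benchmarking to set.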

This approach, if properly tuned, could also aid the general one-off case 
of an RSA keygen, and IIRC this is what we were discussing in the past 
(i.e. a large arithmetic-sequence sieve lets you generate loads of 
candidates and so avoid resieving each time one of them fails primality 
testing). This idea came up years ago during a chat with Peter Gutmann in 
a grotty smoke-filled carpark - and although Peter has since implemented 
this in cryptlib, I'm not sure the matter has been discussed in cleaner, 
less-petrol-saturated air. :-) In fact, there was some scratch code for 
messing with this in CVS (openssl-play/geoff), but it was already pretty 
horrid back when I did it and I suspect it will have bit-rotted since 
then.

But this is a question of tuning and benchmarking implementations to get 
a fixed improvement in the speed of single RSA keygens. The question 
asked seems to open a wider discussion, which is that perhaps the 
requirement to generate a *bank* of RSA keys could admit various other 
kinds of speedups, where producing all the key material "en masse" could 
scale better than doing individual key generations in a loop.
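To put a rough number on what batching the sieve buys you, here's a back-of-envelope sketch - my own illustrative figures, nothing measured against OpenSSL. By Mertens' theorem the fraction of progression terms that survive trial division by primes up to B shrinks like 1/ln B, and every term struck out is one expensive modular-exponentiation test that never runs:

```python
def small_primes(limit):
    """Primes up to `limit` by a classic Eratosthenes sieve."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = b"\x00" * len(range(p * p, limit + 1, p))
    return [p for p in range(2, limit + 1) if flags[p]]

def survivor_fraction(sieve_limit, step):
    """Expected fraction of progression terms (of the given step) that
    survive sieving: prod(1 - 1/p) over sieve primes p not dividing
    the step, since those p each kill one residue class of indices."""
    frac = 1.0
    for p in small_primes(sieve_limit):
        if step % p:
            frac *= 1.0 - 1.0 / p
    return frac

# With an even step, roughly 1 term in 7 survives sieving to 2000, so
# the bulk of the Miller-Rabin calls vanish compared to testing every
# odd candidate; a step divisible by 2*3*5*7 does better still, since
# those primes can never divide any term of the progression.
print(survivor_fraction(2000, 2))    # ~0.15
print(survivor_fraction(2000, 210))  # ~0.32
```

The sieve work itself is shared across the whole batch, which is why the amortised scheme can beat a per-key search even before any cleverer "en masse" tricks.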

And for that, you need to know what the keys are being generated for, and 
what threats have to be resisted. E.g. if the public keys will live 
indefinitely and could be collected together over time (rather than being 
one-off handshake keys on a per-connection basis, for example), then it's 
far less acceptable to have *any* arithmetic relationship between 
sequences of keys, even within small batches.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]