On Tue, May 27, 2014 at 08:23:29AM +0200, Otto Moerbeek wrote:
> On Tue, May 27, 2014 at 05:23:45AM +0000, mancha wrote:
> 
> > On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> > > On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > > > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> > > > 
> > > > > For our purposes, the operative question is whether the
> > > > > distribution bias created can be leveraged in any way to
> > > > > attack factoring (RSA) or dlog (DH).
> > > > The maximum gap between primes of size $n$ is conjectured to be
> > > > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$,
> > > > with an average value of $k$. Thus the most probable primes are
> > > > at most $k$ times more probable than is typical, and we lose at
> > > > most $log(k)$ bits of entropy. This is not a problem.
> > > One consequence of the k-tuple conjecture (generally believed to
> > > be true) is that the size of gaps between primes is Poisson
> > > distributed.
> > > 
> > > You're right when you say the entropy loss between a uniform
> > > distribution and OpenSSL's biased one is small. In that sense
> > > there is not much to be gained entropy-wise from using a process
> > > that gives uniformly distributed primes over what OpenSSL does.
> > > 
> > > However, if a way exists to exploit the OpenSSL distribution
> > > bias, it can be modified to be used against uniformly distributed
> > > primes with only minimal algorithmic complexity increases. In
> > > other words, the gold standard here isn't a uniform distribution.
> > > 
> > > --mancha
> > 
> > This is probably more wonkish than Ben intended with his question
> > but for those interested, the Poisson result I alluded to is due
> > to Gallagher [1].
> > 
> > [1] Gallagher, On the distribution of primes in short intervals,
> > Mathematika, 1976
> 
> Would this work: if you are worried the algorithm will never pick
> the highest of a prime pair, just make it search backward half of
> the time?
> 
> But I understand it has no real security implications.
> 
> 	-Otto
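To put a number on the entropy bound Viktor gave above, here is a
back-of-the-envelope in Python (the choice of k = 2048 bits is mine,
purely for illustration):

```python
import math

k = 2048           # bit-length of the prime; illustrative choice, not from the thread
max_gap = k ** 2   # conjectured maximal prime gap near 2^k, about log(2^k)^2
avg_gap = k        # average gap near 2^k, about log(2^k) (up to a constant)

# The most likely prime is at most max_gap/avg_gap = k times more likely
# than a typical one, so the entropy loss is at most log2(k) bits.
loss_bits = math.log2(max_gap / avg_gap)
print(loss_bits)   # 11.0 bits lost out of ~2048 -- negligible, as Viktor says
```

So for 2048-bit primes the incremental-search bias costs at most about
11 bits of entropy, which is indeed not a problem by itself.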
The issue is not limited to twin primes, though that extreme drives the
point home. In the twin case {p, p+2}, OpenSSL only finds p+2 if p+1 or
p+2 happens to be the randomly selected start point. So the proportion
of primes OpenSSL finds that are twins will be significantly lower than
theory predicts. With OpenSSL's incremental search, the probability
that a particular probable prime p is selected is proportional to the
length of the gap of composites immediately preceding it.

Brandt and Damgard [1], from what I can tell the fathers of the
incremental search OpenSSL uses, share Viktor Dukhovni's view and use
an entropy argument to conclude little or no additional risk exists
with incremental searches relative to uniformly distributed prime
generation.

Mihailescu (of Catalan Conjecture fame) establishes a complexity
equivalence class to argue that improved attacks against an incremental
search can be converted to attacks against uniformly distributed prime
generation with comparable runtimes [2]. For Mihailescu, incremental
search bias is "tolerable", especially in light of the lower entropy
costs and efficiency gains relative to naive discard & repeat. The
improvements he models are, in essence, improvements in the
state-of-the-art, not the result of leveraging bias.

Fouque and Tibouchi [3] offer the differing view that it is preferable
to minimize bias and generate primes that are almost uniform "even if
it is not immediately clear how such biases can help an adversary".
They suggest a few algorithms that improve on naive discard & repeat by
discarding only the top N bits of a candidate at each iteration, among
other innovations.

---

[1] Brandt and Damgard, On Generation of Probable Primes by
    Incremental Search, 1988
[2] Mihailescu, Measuring the Cryptographic Relevance of Biased Public
    Key Distributions, 1998
[3] Fouque and Tibouchi, Close to Uniform Prime Number Generation With
    Fewer Random Bits, 2011
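The gap-proportional bias is easy to see in a toy simulation (a sketch
of the idea only, not OpenSSL's actual BN_generate_prime code; the
range and trial count are arbitrary choices of mine):

```python
import random

def is_prime(n):
    """Trial division -- fine for the small toy range used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def incremental_prime(lo, hi, rng):
    """Incremental search: pick a random start, scan upward to the next prime."""
    p = rng.randrange(lo, hi)
    while not is_prime(p):
        p += 1
    return p

rng = random.Random(1)
counts = {}
for _ in range(200_000):
    p = incremental_prime(1000, 10_000, rng)
    counts[p] = counts.get(p, 0) + 1

# 1009 sits after a long run of composites (the previous prime is 997),
# while 1021 is the upper member of the twin pair (1019, 1021) and is
# reachable from only two start points.  Their hit counts differ by
# roughly the ratio of the preceding gaps, even though a uniform
# sampler would pick each equally often.
print(counts[1009], counts[1021])
```

Running this, the prime after the wide gap is selected several times
more often than the upper twin, exactly the skew described above.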