>>> Why does Prime95 factor only up to 59 bits for exponents which are for
>>> double-checking?
>>>
>>> The others (in the 9 million range) are factored up to 62 bits.
>>>
>>> Why is that so?
>
>> Because the larger the number, the more factoring is worth doing before
>> changing over to the computationally expensive LL test.
Let the number to be tested be 2^n-1, p(k) be the probability of discovering a
factor by trial-factoring to k bits, t(k) be the time taken to trial-factor to k
bits, and T be the time taken to run an LL test.
If what we want is to determine whether or not 2^n-1 is prime, then we want to
minimise t(k) + (1-p(k))*T ; if we find a factor, there is clearly no point in
running an LL test.
Now, in broad terms and ignoring irregularities caused by hardware:-
(a) t(k) is proportional to 2^k : there are as many candidate factors between
2^k+1 and 2^(k+1) as there are between 1 and 2^k;
[Actually there are distinct "steps" in this relationship at k=62, k=63 & k=64
caused by the number of significant bits to which the result of a floating-point
operation is accurate]
(b) t(k) is proportional to 1/n : since all the candidate factors are of the form
2an+1 for some integer a, there are *fewer* candidates for larger values of n;
(c) p(k) is proportional to some increasing function of k - possibly approximately
logarithmic, but clearly less than linear;
(d) T is proportional to n^2.
[Actually the Prime95 implementation proceeds linearly between steps caused by
jumps in the FFT transform size. Theoretically there would be a linear
relationship between the iteration time and the FFT transform size; but not all
FFT transform sizes are equally efficient (exact powers of 2 are inherently more
efficient than intermediate sizes), and the proportion of L2 cache misses tends
to increase with the FFT transform size]
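To make point (b)'s candidate form concrete, here is a minimal trial-factoring
sketch in Python. The function name and bit-limit parameter are my own
illustration, not Prime95's actual code; the congruence filter uses the known
property that any factor of 2^n-1 (n an odd prime) is congruent to 1 or 7 mod 8.

```python
def trial_factor(n, max_bits):
    """Look for a factor of 2**n - 1 (n an odd prime) among candidates
    q = 2*a*n + 1 below 2**max_bits.  Returns a factor or None."""
    N = (1 << n) - 1
    a = 1
    while True:
        q = 2 * a * n + 1
        if q >= (1 << max_bits) or q * q > N:
            return None          # searched the whole range: no factor
        # Any factor of 2**n - 1 (n prime) is congruent to 1 or 7 mod 8,
        # so most candidates can be skipped without an exponentiation.
        if q % 8 in (1, 7) and pow(2, n, q) == 1:
            return q             # q divides 2**n - 1
        a += 1

print(trial_factor(11, 16))      # 2**11 - 1 = 2047 = 23 * 89 -> finds 23
```

Note that the candidates are spaced 2n apart, which is exactly why larger
exponents have fewer candidates below any given bit limit.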
Putting all this together and stirring in a bit of empirical evidence, it has been
found that the best balance occurs when t(k)/T is a bit less than 0.1.
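The trade-off above can be sketched numerically. The constant c, the starting
depth, and the exact form of p(k) below are illustrative guesses of mine (the
argument only requires that p(k) grow slower than linearly); the 1/k-per-bit
heuristic for the chance of a factor is a common rule of thumb, not GIMPS's
actual tuning data.

```python
from math import fsum

def expected_cost(k, n, T, c, k0=40):
    """Expected total time if we trial-factor to k bits and run an LL
    test only when no factor turns up:  t(k) + (1 - p(k)) * T, with
      t(k) = c * 2**k / n           -- points (a) and (b)
      p(k) = sum_{j=k0..k} 1/j      -- heuristic: the smallest factor has
                                       exactly j bits with chance ~1/j,
                                       so p grows slower than linearly (c)."""
    t = c * 2 ** k / n
    p = fsum(1.0 / j for j in range(k0, k + 1))
    return t + (1.0 - p) * T

# Hypothetical numbers: a 9-million-range exponent, LL test time T,
# and a machine constant c chosen purely for illustration.
n, T, c = 9_000_000, 1.0e6, 2.0e-7
best = min(range(41, 75), key=lambda k: expected_cost(k, n, T, c))
print(best)
```

With these made-up constants the minimum lands around 60 bits of depth;
changing c or T shifts it, which is why the optimal depth differs between
double-checking and first-time ranges.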
>I had wondered if it wouldn't be worthwhile to run additional factoring
>tests on the smaller exponents, taking them up to 62bit (or more) factoring,
>see if we can't find any factors of those numbers that are sitting between
>~59bit and ~62bit...just for fun.
Just for fun, why not? But it will probably take longer to add 4 bits to the
factoring depth of any particular exponent (15 times as much time as has already
been spent trial-factoring) than it would to run the LL test again. You will also
need a copy of Prime95 which has been "doctored" to allow factoring to a greater
than normal depth. And, though you will certainly find *some* factors, there will
be a large proportion of negative outcomes.
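The "15 times" figure follows straight from point (a): trial-factoring time to
depth k grows like 2^k, so the extra work to go from 59 to 63 bits is a fixed
multiple of the work already spent.

```python
# Point (a): time to trial-factor to k bits grows like 2**k, so the work
# needed to extend from 59 bits of depth to 63 bits, as a multiple of the
# work already spent, is:
done, target = 59, 63
extra = (2 ** target - 2 ** done) // 2 ** done
print(extra)   # 15
```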
Also, as the size of the factors increases, probabilistic methods of finding
factors (ECM, P-1) become relatively much more efficient than exhaustive
elimination of factors. At what point they become "cost effective", I'm not sure;
but it's certainly unrealistic to suppose that, e.g., M727 would ever be factored
by trial division, whereas other methods will almost certainly succeed some time
in the next few years.
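To give a flavour of why such methods scale differently: P-1 finds a prime
factor p cheaply whenever p-1 happens to be built entirely from small primes,
regardless of how large p itself is. Below is a bare-bones stage-1 sketch in
Python; the bound B1 = 200 and base 3 are arbitrary choices of mine, and real
implementations (including Prime95's) are far more elaborate.

```python
from math import gcd

def small_primes(limit):
    """Sieve of Eratosthenes: all primes up to limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(N, B1, base=3):
    """Stage-1 P-1: succeeds for a prime factor p of N whenever every
    prime-power factor of p-1 is <= B1."""
    a = base
    for q in small_primes(B1):
        e = 1
        while q ** (e + 1) <= B1:   # largest power of q not exceeding B1
            e += 1
        a = pow(a, q ** e, N)
    return gcd(a - 1, N)

# 2**41 - 1 = 13367 * 164511353, and 13366 = 2 * 41 * 163 is 200-smooth,
# so the gcd is guaranteed to be a multiple of 13367 (almost certainly
# exactly 13367, since the cofactor's p-1 is not 200-smooth).
N = (1 << 41) - 1
print(pminus1_stage1(N, 200))
```

The cost of stage 1 depends on B1 and on the size of N, but not on the size of
the factor being sought, which is why such methods overtake exhaustive trial
division as the factors get large.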
Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm