> However, there's another generalisation that occurs to me that would be
> quite cheap to implement.  The purpose of stage two is to catch those
> factors which are 'just out of reach' of stage 1.  However another way a
> factor could be 'just out of reach' would be if the power of exactly one of
> the primes < B1 was exactly 1 more than stage 1 would find.  Stage 2 would
> find these factors *as well* if the algorithm were run for primes in [2,B2]
> instead of [B1,B2]
> 
> Has this been investigated?

Good point. The implementation I described finds a factor f iff the
factorization of ord(a) mod f has only primes and prime powers <= B1 and
at most one prime <= B2. 
It would be more reasonable if it could have primes and prime powers <=
B1 and at most one prime _or prime power_ <= B2. That prime power has,
after all, a probability >=1/B2 of dividing a random integer.
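
A rough sketch of that condition in Python (assuming the factorization of
ord(a) mod f is given as a dict {prime: exponent}; the function name is
made up for illustration):

def found_by_pminus1(order_factors, B1, B2):
    # stage 1 covers p^e whenever p^e <= B1; stage 2 as described
    # contributes at most one extra factor of a single prime <= B2
    hard = [(p, e) for p, e in order_factors.items() if p**e > B1]
    if not hard:
        return True                       # order is B1-powersmooth
    if len(hard) > 1:
        return False                      # two pieces out of stage 1's reach
    p, e = hard[0]
    return p <= B2 and p**(e - 1) <= B1   # one extra factor of p suffices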

However, that is awful to implement. In stage 1, you exponentiate each
prime p by floor(log_p(B1)). In stage 2, you'd have to include small
primes p <= sqrt(B2), exponentiated by floor(log_p(B2)) -
floor(log_p(B1)).
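
For reference, a minimal stage 1 along those lines might look like this
in Python (just a sketch, not any particular implementation; x is the
base, N the number to factor):

from math import gcd

def primes_up_to(n):
    # plain sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def stage1(x, N, B1):
    # raise x to p^floor(log_p(B1)) for every prime p <= B1
    for p in primes_up_to(B1):
        pe = p
        while pe * p <= B1:        # largest power of p not exceeding B1
            pe *= p
        x = pow(x, pe, N)
    return gcd(x - 1, N), x        # x is needed again as input to stage 2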

Actual implementations of the second stage (prime pairing aside) work more
like this; the example in my previous post was simplified:

Choose D a multiple of small primes, e.g. a multiple of 2310 or 210 or 30
(as memory allows), D < sqrt(B2-B1)
Store x^D
store the phi(D) different values of x^n, gcd(n,D)=1, in xn[]
p = smallest prime > B1
compute x^(mD), so that p in ((m-1)D, mD]
accu = 1
for each prime p, B1 < p <= B2
  accu *= x^(mD) - xn[n], with n chosen so that mD-n = p
  p = next prime
  while p > mD: x^(mD) *= x^D, m += 1, to go from x^(mD) to x^((m+1)D)
gcd(accu, N)
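
For comparison, a Python sketch of the same scheme (no prime pairing;
gcd and primes_up_to as in the stage 1 sketch above; x is the residue
coming out of stage 1, and B1 is assumed larger than the primes in D):

def stage2(x, N, B1, B2, D=2310):
    # table of x^n mod N for the phi(D) values n coprime to D (xn[] above)
    xn = {n: pow(x, n, N) for n in range(1, D) if gcd(n, D) == 1}
    xD = pow(x, D, N)                 # used to step from x^(mD) to x^((m+1)D)
    m = B1 // D + 1                   # first block: (m-1)D < p <= mD
    xmD = pow(x, m * D, N)
    accu = 1
    for p in primes_up_to(B2):
        if p <= B1:
            continue
        while p > m * D:              # advance to the block containing p
            xmD = xmD * xD % N
            m += 1
        n = m * D - p                 # p = mD - n; gcd(n,D)=1 since p is
        accu = accu * (xmD - xn[n]) % N   # a prime not dividing D
    return gcd(accu, N)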

We don't store all the x^mD, but compute them as we need them, to save
memory. This means we have to process primes in ascending order. 
If we wanted to include powers of small primes, we'd have to do an extra
pass. Since p^(floor(log_p(B2)) - floor(log_p(B1))) is typically < B1
(at least when B1 >> B2/B1), we can't simply sort these prime powers into
the list of regular stage 2 primes, which all lie in (B1, B2].
We'd have to make a list of powers of small primes, sort it and process
it separately with the algorithm above.

But: we make D a multiple of small primes so that we have to store fewer
values in xn[] (if d|D and d|n with d > 1, then d | mD-n, so mD-n is not
prime). If we restrict ourselves to (large) primes in stage 2, we can
save a lot of memory in xn[], but it means we can't generate powers of
those small primes d that divide D, whose multiples n we deliberately
left out of xn[]. We'd have to do yet another pass for those really
small primes, and set up a new xn[] for it, with no multiples of small
primes left out.

If you want to make really sure that all powers of primes <B2 are
included, you can modify stage 1 to do so, i.e. exponentiate each prime
p<B1 by floor(log_p(B2)). This won't take a lot of extra time and is
*much* easier to implement.
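
As a sketch, the only change to the stage 1 loop above is the bound used
for the prime powers (gcd and primes_up_to as before):

def stage1_full_powers(x, N, B1, B2):
    # like stage1() above, but each prime p <= B1 is raised to
    # floor(log_p(B2)) instead of floor(log_p(B1)), so every prime
    # power <= B2 of a prime <= B1 is covered
    for p in primes_up_to(B1):
        pe = p
        while pe * p <= B2:        # largest power of p not exceeding B2
            pe *= p
        x = pow(x, pe, N)
    return gcd(x - 1, N), x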

Alex