Re: [Fwd: Mersenne: P-1 Stage 2]

2001-12-15 Thread Daran

----- Original Message -----
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, December 15, 2001 2:05 AM
Subject: Re: [Fwd: Mersenne: P-1 Stage 2]

> > However, there's another generalisation that occurs to me that would be
> > quite cheap to implement.  The purpose of stage two is to catch those
> > factors which are 'just out of reach' of stage 1.  However, another way
> > a factor could be 'just out of reach' would be if the power of exactly
> > one of the primes < B1 was exactly 1 more than stage 1 would find.
> > Stage 2 would find these factors *as well* if the algorithm were run
> > for primes in [2,B2] instead of [B1,B2].

Actually that really ought to be [2,~B2/B1] and then [B1,B2].  Another power
on top of a prime in [~B2/B1,B1] will take it above B2.  Since B2 is
typically only an order of magnitude greater than B1, this is not going to
make a lot of difference.
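As a rough Python sketch of the overshoot argument, with hypothetical bounds B1 = 10^5 and B2 = 10^6 (so B2/B1 = 10; the bounds and the function name are mine, not from the thread):

```python
# Hypothetical bounds for illustration only: B2/B1 = 10.
B1, B2 = 10**5, 10**6

def stage1_exponent(p, B1):
    """Largest k with p**k <= B1, i.e. floor(log_p(B1))."""
    k = 0
    while p ** (k + 1) <= B1:
        k += 1
    return k

# p = 127 lies in (B2/B1, B1]: stage 1 already includes 127^2 = 16129,
# and one more power, 127^3 = 2048383, overshoots B2.
p = 127
k = stage1_exponent(p, B1)
print(p ** k, p ** (k + 1) > B2)  # -> 16129 True
```

The "~" matters: this is a heuristic cutoff, not a sharp bound.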

> > Has this been investigated?

> Good point. The implementation I described finds a factor f iff the
> factorization of ord(a) mod f has only primes and prime powers <= B1 and
> at most one prime <= B2.
> It would be more reasonable if it could have primes and prime powers <=
> B1 and at most one prime _or prime power_ <= B2. That prime power has,
> after all, a probability >= 1/B2 of dividing a random integer.

This is a generalisation of the above.
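The condition being discussed can be sketched as a small Python predicate. The function name and the example factorizations are mine, and it assumes the factorization of ord(a) mod f is known, which in practice is exactly what we don't know:

```python
# Sketch of the success condition for P-1 with bounds B1, B2.
def p1_finds(order_factors, B1, B2):
    """order_factors: {prime: exponent} of ord(a) mod f.
    True iff every prime power is covered by stage 1 (p^e <= B1),
    except for at most one single prime in (B1, B2]."""
    large = 0
    for p, e in order_factors.items():
        if p ** e <= B1:
            continue            # covered by stage 1
        if e == 1 and p <= B2:
            large += 1          # candidate for stage 2
        else:
            return False        # prime power out of reach of both stages
    return large <= 1

# 2 * 3 * 101 is found with B1 = 100, B2 = 1000.
print(p1_finds({2: 1, 3: 1, 101: 1}, 100, 1000))   # -> True
# 2^10 * 3 is not: 1024 > B1, and it is not a single large prime.
print(p1_finds({2: 10, 3: 1}, 100, 1000))          # -> False
```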

> However, that is awful to implement. In stage 1, you exponentiate each
> prime p by floor(log_p(B1)). In stage 2, you'd have to include small
> primes p <= sqrt(B2), exponentiated by floor(log_p(B2)) -
> floor(log_p(B1)).

Except that for primes above ~B2/B1, this will be zero, so we could ignore
them.
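A quick way to see this is to tabulate the extra exponent floor(log_p(B2)) - floor(log_p(B1)) for a few primes, again with hypothetical bounds B1 = 10^5, B2 = 10^6 (bounds and names mine):

```python
# Extra powers stage 2 would need per small prime.
B1, B2 = 10**5, 10**6

def ilog(p, n):
    """floor(log_p(n)) by integer arithmetic (no float rounding)."""
    k = 0
    while p ** (k + 1) <= n:
        k += 1
    return k

for p in (2, 3, 5, 7, 127):
    print(p, ilog(p, B2) - ilog(p, B1))   # 127 (> B2/B1 = 10) -> 0
```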

[...]

> If you want to make really sure that all powers of primes <= B2 are
> included, you can modify stage 1 to do so, i.e. exponentiate each prime
> p <= B1 by floor(log_p(B2)). This won't take a lot of extra time and is
> *much* easier to implement.

I agree.  The question is, which algorithm gives us the greatest
factors-to-work ratio?

 Alex

Daran


_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: [Fwd: Mersenne: P-1 Stage 2]

2001-12-14 Thread Alexander Kruppa


> However, there's another generalisation that occurs to me that would be
> quite cheap to implement.  The purpose of stage two is to catch those
> factors which are 'just out of reach' of stage 1.  However, another way a
> factor could be 'just out of reach' would be if the power of exactly one of
> the primes < B1 was exactly 1 more than stage 1 would find.  Stage 2 would
> find these factors *as well* if the algorithm were run for primes in [2,B2]
> instead of [B1,B2].
>
> Has this been investigated?

Good point. The implementation I described finds a factor f iff the
factorization of ord(a) mod f has only primes and prime powers <= B1 and
at most one prime <= B2.
It would be more reasonable if it could have primes and prime powers <=
B1 and at most one prime _or prime power_ <= B2. That prime power has,
after all, a probability >= 1/B2 of dividing a random integer.

However, that is awful to implement. In stage 1, you exponentiate each
prime p by floor(log_p(B1)). In stage 2, you'd have to include small
primes p <= sqrt(B2), exponentiated by floor(log_p(B2)) -
floor(log_p(B1)).

Actual implementations of the second stage (prime pairing aside) work
more like this; the example in my previous post was simplified:

Choose D a multiple of small primes, i.e. a multiple of 2310 or 210 or 30
(as memory allows), D ~ sqrt(B2-B1)
Store x^D
Store the phi(D) different values of x^n, gcd(n,D)=1, in xn[]
p = smallest prime > B1
compute x^(mD), so that p in [(m-1)D, mD]
accu = 1
for each prime p <= B2
  accu *= x^(mD) - xn[n], so that mD - n = p
  p = next prime
  if p > mD then x^(mD) *= x^D, to go from x^(mD) to x^((m+1)D)
gcd(accu, N)

We don't store all the x^mD, but compute them as we need them, to save
memory. This means we have to process primes in ascending order. 
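The steps above can be sketched as a minimal, runnable Python version (no prime pairing; all names, the default D = 30, and the toy numbers are mine and chosen only to keep the example small):

```python
from math import gcd

def small_primes(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, b in enumerate(sieve) if b]

def p1_stage2(x, N, B1, B2, D=30):
    """Stage 2 as sketched above.  x is the stage 1 result a^E mod N;
    D must be a product of primes <= B1."""
    # Store only the phi(D) residues coprime to D: xn[n] = x^n mod N.
    xn = {n: pow(x, n, N) for n in range(1, D) if gcd(n, D) == 1}
    xD = pow(x, D, N)
    xmD, m = xD, 1                      # current value of x^(mD)
    accu = 1
    for q in small_primes(B2):
        if q <= B1:
            continue
        while m * D < q:                # keep mD >= q; primes ascend
            xmD = xmD * xD % N
            m += 1
        # x^(mD) - x^(mD-q) = x^(mD-q) * (x^q - 1), so any factor f
        # with x^q == 1 (mod f) ends up dividing the accumulator.
        accu = accu * (xmD - xn[m * D - q]) % N
    return gcd(accu, N)

# Toy run: N = 607 * 1013, and 607 - 1 = 606 = 2 * 3 * 101 with the
# prime 101 in (B1, B2], so stage 2 should expose the factor 607.
N = 607 * 1013
x = pow(2, 8 * 9 * 5 * 7, N)           # stage 1 with B1 = 10
print(p1_stage2(x, N, 10, 200) % 607)  # -> 0
```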
If we wanted to include powers of small primes, we'd have to do an extra
pass. Since p^(floor(log_p(B2)) - floor(log_p(B1))) is < B1, for B1 >
B2/B1, we can't sort it into a list with our regular stage 2 primes.
We'd have to make a list of powers of small primes, sort it and process
it separately with the algorithm above.

But: we make D a multiple of small primes so that we have to store fewer
values in xn[] (if d|D and d|n, then d | mD-n, so mD-n is not prime). If
we restrict ourselves to (large) primes in stage 2, we can save a lot of
memory in xn[], but it'll mean we can't generate powers of those small
primes d that divide D, and whose multiples n we deliberately left out
of xn[]. We'd have to do yet another pass for those really small
primes, and set up a new xn[] for it, with no multiples of small primes
left out.
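The memory saving from restricting xn[] to residues coprime to D is easy to quantify (D values taken from the choices named above):

```python
from math import gcd

# Only phi(D) of the D residues need storing in xn[].
for D in (30, 210, 2310):
    phi = sum(1 for n in range(1, D) if gcd(n, D) == 1)
    print(D, phi)   # 30 -> 8, 210 -> 48, 2310 -> 480
```

So larger D trades more memory per table for a smaller fraction of residues kept (roughly 27%, 23%, and 21% respectively).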

If you want to make really sure that all powers of primes <= B2 are
included, you can modify stage 1 to do so, i.e. exponentiate each prime
p <= B1 by floor(log_p(B2)). This won't take a lot of extra time and is
*much* easier to implement.
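A sketch of that modified stage 1 in Python (function names and the toy modulus are mine; the trial-division prime generator is only for illustration):

```python
# Modified stage 1: each prime p <= B1 is raised to floor(log_p(B2))
# rather than floor(log_p(B1)), so all prime powers <= B2 are covered.
def primes_up_to(n):
    ps = []
    for k in range(2, n + 1):
        if all(k % p for p in ps):
            ps.append(k)
    return ps

def p1_stage1_modified(a, N, B1, B2):
    x = a
    for p in primes_up_to(B1):
        e = 1
        while p ** (e + 1) <= B2:   # exponent = floor(log_p(B2))
            e += 1
        x = pow(x, p ** e, N)
    return x

# With B1 = 10, B2 = 1000 this computes a^(2^9 * 3^6 * 5^4 * 7^3) mod N.
print(p1_stage1_modified(2, 10007, 10, 1000))
```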

Alex
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers