On Sun, Mar 09, 2003 at 08:48:44PM +0000, Brian J. Beesley wrote:

> On Sunday 09 March 2003 12:24, Daran wrote:
>
> > In the hope of more quickly collecting data, I have also redone, to 'first
> > time test' limits, every entry in pminus1.txt which had previously been done
> > to B1=B2=1000, 2000, and 3000. For these exponents, all in the 1M-3M range,
> > the client was able to choose a plan with E=12. Unfortunately, I found far
> > fewer factors in either stage 1 or stage 2 than I would expect, which
> > suggests to me that exponents in this range have had additional factoring
> > work (possibly ECM) not recorded in the file.
>
> 1) What about factors which would be found with your P-1 limits but happened
> to fall out in trial factoring? (In fact a lot of the smaller exponents -
> completed before P-1 was incorporated in the client - seem to have been trial
> factored beyond the "economic" depth.) In any case, if you're using very
> small values of B1 & B2, I would _expect_ that a very high percentage of the
> accessible factors will be found during "normal" trial factoring.
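(For reference, the criterion at work here, as a rough Python sketch rather
than what the client actually runs: P-1 with bounds B1/B2 finds a prime factor
q of M_p essentially when q-1 is B1-smooth apart from at most one prime in
(B1, B2]. With very small bounds, most q passing that test are also small
enough to fall within normal trial-factoring depth, which is point 1 above.
The sketch ignores the free factor of 2p in q-1 and the E=12 Brent-Suyama
extension, so it slightly understates what the client can find.)

    def small_is_prime(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return n > 1

    def p1_reachable(q, B1, B2):
        """Classify how P-1 with bounds B1, B2 would find the prime factor q."""
        m = q - 1
        for d in range(2, B1 + 1):
            if m % d or not small_is_prime(d):
                continue
            # strip prime d, but only up to the power that the stage 1
            # exponent (roughly lcm(1..B1)) actually contains
            power = d
            while m % d == 0 and power <= B1:
                m //= d
                power *= d
        if m == 1:
            return "stage 1"                    # q-1 is B1-smooth
        if B1 < m <= B2 and small_is_prime(m):
            return "stage 2"                    # one leftover prime in (B1, B2]
        return None

    # Cole's factor of M_67: q - 1 = 2^3 * 3^3 * 5 * 67 * 2677
    print(p1_reachable(193707721, 20000, 395000))   # -> stage 1
    print(p1_reachable(193707721, 100, 3000))       # -> stage 2 (2677 left over)
    print(p1_reachable(193707721, 100, 2000))       # -> None (2677 > B2)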
They all had a *recorded* factoring depth of 57 bits. Whether that's beyond
"economic" depth, I don't know. The client reported a probability of finding
a factor of about 3%, to my recollection. For some of the smaller exponents,
the bounds looked silly, so I reduced the factored bits to 53, which I would
have expected to increase this probability somewhat, albeit not to the 6% the
client then reported (again, to my recollection). Thus, with very few
exceptions, all were done with B1 at least 20000 and B2 at least 395000, and
many with B1=30000 and B2 > 500000. Out of 1179 exponents tested, I found no
stage 1 factors and 10 stage 2 factors.

> 2) It would not surprise me at all to find that there is a substantial amount
> of P-1 work being done which is not recorded in the database file. I've also
> had "very bad luck" when extending P-1 beyond limits recorded in the database
> file for exponents under 1 million. Eventually I gave up.

As have I.

> 3) ECM stage 2 for exponents over 1 million takes a serious amount of memory
> (many times what P-1 can usefully employ), whilst running ECM stage 1 only is
> not very efficient at finding factors - lots of the power of ECM comes from
> the fact that stage 2 is very efficient (assuming you can find memory!)

As a matter of interest, how much memory is 'sufficient' for ECM stage 2 on a
1M exponent?

> > Of particular concern is the
> > possibility that in addition to reducing the number of factors available
> > for me to find, it may have upset the balance between 'normal' and
> > 'extended' P-1 factors - the very ratio I am trying to measure.
> One way to deal with this would be to deliberately forget previously reported
> work, i.e. take _all_ the prime exponents in the range you're interested in,
> trial factor to taste then run P-1. This way you can be sure that, though the
> vast majority of the factors you will find are rediscoveries, the
> distribution of the factors you find is not distorted by unreported negative
> results.

Well, I could avoid the TF stage (which could take longer than the P-1 itself)
by excluding exponents with known factors smaller than the TF depth. However,
I prefer to avoid duplicating work at all, if I can help it.

> Regards
> Brian Beesley

Daran
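(An illustration of the sort of filter meant above, as a Python sketch only:
drop every exponent whose smallest known factor is below the trial-factoring
depth, since those are exactly the ones ordinary TF would have caught, and
send the rest straight to P-1. The "exponent,factor" file layout here is
assumed for simplicity and is not the real database format.)

    def known_factors(path):
        """Map exponent -> known factors, from lines of the form 'exponent,factor'.
        (Assumed layout; the real GIMPS factor data differs in detail.)"""
        facs = {}
        with open(path) as fh:
            for line in fh:
                if not line.strip():
                    continue
                exp_s, fac_s = line.strip().split(",")[:2]
                facs.setdefault(int(exp_s), []).append(int(fac_s))
        return facs

    def straight_to_p1(exponents, factor_file, tf_bits=57):
        """Keep only exponents with no known factor below 2**tf_bits; these
        can skip the TF stage and go directly to P-1."""
        facs = known_factors(factor_file)
        return [p for p in exponents
                if all(f.bit_length() > tf_bits for f in facs.get(p, []))]

    # usage (file name and exponent list are placeholders):
    # todo = straight_to_p1(candidate_exponents, "factors.txt", tf_bits=57)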