----- Original Message -----
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
To: "Daran" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, June 09, 2002 2:18 PM
Subject: Re: Mersenne: P-1 Puzzle
> 6972001 and 7076669 are "starred" although the "fact bits" column seems to
> indicate that both trial factoring to 2^63 and P-1 have been run. This is
> _definitely_ true for P-1 on 7076669; the fact is recorded on my system in
> both results.txt & prime.log.

It should also be recorded in the worktodo.ini file for that exponent. Now if
you were to unreserve that exponent, contact the server sufficiently late in
the day that it is immediately reassigned back to you (rather than it giving
you a smaller one), and then check the WTD file, you'll find that the P-1 bit
has been unset. (This is how I discovered the problem in the first place.)
There is a small risk of losing the assignment, if someone checks it out
seconds after you unreserve it.

> So far as 6972001 is concerned, the database (dated 2nd June) indicates
> P-1 has been run to a reasonable depth but trial factoring has only been
> done through 2^62. My system definitely won't have done any more trial
> factoring yet, let alone reported anything, since that system is set up
> with v22 defaults, i.e. defer factoring on new assignments until they
> reach the head of the queue.

I'll bet its worktodo.ini entry also has the P-1 bit unset, meaning that you
will do a redundant P-1 unless you manually fix it. What does it give for the
factored bits? 62, or 63? (The entry layout is sketched below.)

> 7009609, 7068857 & 7099163 are not "starred" although the "fact bits"
> column is one short. The "nofactor" & "Pminus1" databases (dated 2nd July)
> give these all trial factored through 2^62 & Pminus1 checked to B1=35000,
> B2=297500 (or higher). The P-1 limits seem sensible for DC assignments,
> but shouldn't these have been trial factored through 2^63 like most of the
> other exponents in this range?

Again, what does your WTD file say?

[...]

> I wonder what happens if you're working like Daran and someone returns a
> P-1 result "independently" (either working outside PrimeNet assignments,
> or perhaps letting an assigned exponent expire but then reporting
> results);

Or if someone is working like me who hasn't identified the problem. If they
unreserve an exponent whose P-1 hasn't been recognised by the server, then
the next owner will do another one, with possibly different limits. This
appears to have happened to me at least once. I'll spend some time later
today cross-referencing my WTD file against pminus1.txt to see if there are
any more I don't need to do.

> This is not trivial; e.g. if you get "no factors, B1=100000, B2=1000000"
> and "no factors, B1=200000, B2=200000" there might still be a factor
> which would be found if you ran with B1=200000, B2=1000000. Also, if
> the database says that P-1 stage 1 only has been run (probably due to
> memory constraints on the system it ran on), at what point is it
> worthwhile running P-1 for the possible benefit of finding a factor in
> stage 2?

That question generalises. If the database shows that a shallow P-1 has been
run, at what point is it worth running a deeper one, and with what limits?
Suppose the a priori optimal bounds for an exponent are, for example,
B1=40000, B2=600000, but it turns out that only stage 1 has been run to
40000. Assuming that it's worth re-running the P-1 at all, it might be
better to drop the B1 limit - to 35000, say - and increase the B2 limit.
This would reduce the factor-space overlap with the first run. (A toy
illustration of the overlap point follows below.) What's missing in all this
is any kind of quantitative analysis. In any case, as long as there are
exponents which haven't had a P-1 at all, I prefer to stick to them.

[...]
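To make the overlap point concrete, here is a toy sketch in plain Python,
with scaled-down bounds; the helper names are mine, not anything in Prime95.
It uses the standard sufficient condition: P-1 with bounds (B1, B2) finds a
prime factor f if every prime power dividing f-1 is at most B1, except
possibly one prime, to the first power, in (B1, B2]. (Real Mersenne P-1 also
gets the exponent thrown into stage 1 for free; I ignore that here.)

def factorise(n):
    """Trial-division factorisation - fine for toy-sized numbers."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def pm1_would_find(f, B1, B2):
    """Sufficient condition for P-1 with bounds (B1, B2) to find the
    prime factor f: every prime power q^e dividing f-1 is <= B1,
    except possibly one prime (to the first power) in (B1, B2]."""
    over = [(q, e) for q, e in factorise(f - 1).items() if q ** e > B1]
    if not over:
        return True                       # pure stage-1 hit
    if len(over) == 1:
        q, e = over[0]
        return e == 1 and B1 < q <= B2    # the single stage-2 prime
    return False

# f = 683 is prime and f - 1 = 2 * 11 * 31, so:
print(pm1_would_find(683, 10, 100))   # False - 11 and 31 both exceed B1
print(pm1_would_find(683, 20, 20))    # False - 31 exceeds B2
print(pm1_would_find(683, 20, 100))   # True  - combined bounds catch it

Neither of the first two runs covers that factor, but a single run with B1
taken from one and B2 from the other does - which is exactly why "has had a
P-1" is not a simple yes/no property.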
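For reference, and from memory (so treat the exact field order as my
assumption rather than gospel), the worktodo.ini entries under discussion
look like:

DoubleCheck=6972001,62,0
DoubleCheck=7076669,63,1

where the fields are the exponent, the number of bits it has been trial
factored to, and the P-1-done flag. The second line is what 7076669's entry
ought to say; the problem is that after an unreserve/reassign round trip the
trailing 1 comes back as 0.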
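And the cross-referencing I mentioned could be as crude as the following
sketch. I know the worktodo.ini layout; for pminus1.txt I'm assuming each
line starts "exponent,B1,B2,...", which you should check against your copy
before trusting the output.

done = {}
with open("pminus1.txt") as db:
    for line in db:
        parts = line.strip().split(",")
        if len(parts) >= 3 and parts[0].isdigit():
            p, b1, b2 = int(parts[0]), int(parts[1]), int(parts[2])
            # keep the deepest bounds seen (crudely: biggest B1, then B2)
            if p not in done or (b1, b2) > done[p]:
                done[p] = (b1, b2)

with open("worktodo.ini") as wtd:
    for line in wtd:
        work, _, args = line.strip().partition("=")
        if work not in ("Test", "DoubleCheck"):
            continue
        fields = args.split(",")
        p = int(fields[0])
        pm1_done = len(fields) > 2 and fields[2] == "1"
        if not pm1_done and p in done:
            print("%d: P-1 already run to B1=%d, B2=%d" % ((p,) + done[p]))

Anything it prints is an exponent where the server has forgotten a P-1 that
the database remembers, i.e. work I don't need to repeat.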
> Daran, my advice would be to concentrate on exponents above the current DC
> assignment range which have already been LL tested but not P-1 checked, or
> on exponents above the current LL assignment range which have been trial
> factored but not P-1 checked, according to the official database (updated
> weekly - you will need the pminus1, nofactor & hrf3 database files, plus
> the "decomp" utility to unscramble nofactor).

I have these files, but following your advice would undermine my rationale
for doing this. With 512MB I probably have considerably more available
memory than the average machine doing DCs now. That will be less true for
machines doing DCs in the future, and probably not true at all for machines
doing first-time tests. I can work around the problem while staying within
the current DC range. It's just an irritation.

> Regards
> Brian Beesley

Daran

_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers
