Re: Mersenne: P-1 on PIII or P4?

2003-03-09 Thread Daran
On Fri, Mar 07, 2003 at 09:51:33PM -0800, Chris Marble wrote:

> Daran wrote:
> > 
> > On Thu, Mar 06, 2003 at 08:12:31PM -0800, Chris Marble wrote:
> > 
> > > Daran wrote:

> I like my stats but I could certainly devote 1 machine out of 20 to this.

If you're going to use one machine to feed the others, then it won't harm
your stats at all.  Quite the contrary.

> Assume I've got 1GB of RAM.  Do the higher B2s mean I should use a P4 rather
> than a P3 for this task?

I don't know, because I don't know why it gets a higher B2.

B1 and B2 are supposed to be chosen by the client so that the cost/benefit
ratio is optimal.  Does this mean that P4s choose B2 values which are too
high?  Or does everything else choose values too low?  Or is there some
reason I can't think of why higher values might be appropriate for a P4?

In fact, I'm not even sure it does get a higher B2 - the apparent difference
could be, as Brian suggested, due to differences between versions.  I don't
have access to a P4, so I can't do any testing, but I'd appreciate it if you
or someone else could try starting a P-1 on the same exponent (not in one of
the ranges where it would get a different FFT length) on two different
machines, with the same memory allowed.  You would not need to complete the
runs.  You could abort the tests as soon as they've reported their chosen
limits.
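
For example, give both machines the same worktodo.ini entry.  If I remember
the v23 syntax correctly, the fields are the exponent, how far it has been
trial factored, and whether it has already had an LL test - check undoc.txt
or readme.txt before trusting my memory:

    Pfactor=<exponent>,<bits trial-factored>,1

Each client prints its chosen B1 and B2 when it starts stage 1, which is
all we need to compare.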

> Would I unreserve all the exponents that are already P-1 complete?
> If I don't change the DoubleCheck into Pfactor then couldn't I just let
> the exponent run and then sometime after P-1 is done move the entry and
> the 2 tmp files over to another machine to finish it off?

If you're going to feed your other machines from this one, then obviously
you won't need to unreserve the exponents they need.  But there's an easier
way to do this.  Put SequentialWorkToDo=0 in prime.ini; then, so long as it
never runs out of P-1 work to do, it will never start a first-time or
doublecheck LL, and there will be no temporary files to move.  I also
suggest putting SkipTrialFactoring=1 in prime.ini.
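That is, something like this in the feeder box's prime.ini (undoc.txt
describes both settings):

    SequentialWorkToDo=0
    SkipTrialFactoring=1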

> That sounds like more work than I care to do...

I agree that with 20 boxes, the work would be onerous.

> ...I can see having 1 machine
> do P-1 on lots of double-checks.

That would be well worth it.  Since one box will *easily* feed the other
twenty or so, you will have to decide whether to unreserve the exponents you
P-1 beyond your needs, or occasionally let that box test (or start testing)
one.

You may find a better match between your rate of production of P-1 complete
exponents and your rate of consumption if you do first-time testing.

[...]

> As an mprime user I edit the local.ini file all the time.  Per your notes
> I upped *Memory to 466.

That will certainly help exponents below 9071000 on a P3, or 8908000 on a P4.
The current DC level is now over 9.17M, so I doubt this will help much
(though of course, it won't harm, either).  I haven't tried.  I'm still
getting enough sub-9071000 expiries.

> -- 
>   [EMAIL PROTECTED] - HMC UNIX Systems Manager

Daran G.
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: E=12 on P4

2003-03-09 Thread Anurag Garg

Chris,
You can avoid a lot of this manual labor by just putting this
line in prime.ini

   SequentialWorkToDo=0

Read undoc.txt to see how it works.  What this will do is force P-1 to
be performed first.
Anurag



Mersenne: E=12 on P4

2003-03-09 Thread Chris Marble
I set up a P4 with 1GB RAM to grab some DCs.  All need P-1 run.  I cranked up
the memory available in local.ini and it's running with E=12.  The 1st exponent
completed.  It finished P-1 on the second in just over 3 hours - also with E=12.
I killed mprime, moved the exponent to the end of worktodo.ini and started
mprime back up.  I'll check back in a couple of hours and see if I should
rotate to the next exponent.  Then I'll have a hunk of nicely P-1'd DCs I can
move off to boxes without enough RAM for the deep searches.
Sound right?
-- 
  [EMAIL PROTECTED] - HMC UNIX Systems Manager


Mersenne: Celeron P4 1.7G - Torture Test

2003-03-09 Thread Nagy Peter

Hi,

My CPU is: Celeron P4 1.7GHz (with 128kB cache)
Motherboard: ASUS P4S333-VM
Memory: 512MB DDR RAM
prime95 version: 23.2.1
OS: Win98 SE

I started a 'Torture Test'.  It ran without fault for about 8 hours.
After successful execution of the 2048K FFT test, at the beginning of the
2560K FFT test, an 'Illegal operation executed..' message appeared and the
OS stopped prime95.  The OS itself kept working without problem.
The effect was systematically reproducible.

Has anybody had a similar problem?
How can I run another test with the 2560K FFT?

  Laszlo Megyeri ([EMAIL PROTECTED])




Re: Mersenne: Optimal choice of E in P-1 computations

2003-03-09 Thread Brian J. Beesley
On Sunday 09 March 2003 12:24, Daran wrote:

> In the hope of more quickly collecting data, I have also redone, to 'first
> time test' limits, every entry in pminus1.txt which had previously done to
> B1=B2=1000, 2000, and 3000.  For these exponents, all in the 1M-3M ranges,
> the client was able to choose a plan with E=12.  Unfortunately, I found far
> fewer factors in either stage 1 or stage 2 than I would expect, which
> suggests to me that exponents in this range have had additional factoring
> work (possibly ECM) not recorded in the file.

1) What about factors which would be found with your P-1 limits but happened 
to fall out in trial factoring? (In fact a lot of the smaller exponents - 
completed before P-1 was incorporated in the client - seem to have been trial 
factored beyond the "economic" depth.) In any case, if you're using very 
small values of B1 & B2, I would _expect_ that a very high percentage of the 
accessible factors will be found during "normal" trial factoring.

2) It would not surprise me at all to find that there is a substantial amount 
of P-1 work being done which is not recorded in the database file. I've also 
had "very bad luck" when extending P-1 beyond limits recorded in the database 
file for exponents under 1 million. Eventually I gave up.

3) ECM stage 2 for exponents over 1 million takes a serious amount of memory 
(many times what P-1 can usefully employ), whilst running ECM stage 1 only is 
not very efficient at finding factors - lots of the power of ECM comes from 
the fact that stage 2 is very efficient (assuming you can find memory!)

> Of particular concern is the
> possibility that in addition to reducing the number of factors available
> for me to find, it may have upset the balance between 'normal' and
> 'extended' P-1 factors - the very ratio I am trying to measure. 

One way to deal with this would be to deliberately forget previously reported 
work, i.e. take _all_ the prime exponents in the range you're interested in, 
trial factor to taste then run P-1. This way you can be sure that, though the 
vast majority of the factors you will find are rediscoveries, the 
distribution of the factors you find is not distorted by unreported negative 
results.
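
Generating the work file is trivial - a few lines of Python with sympy's
primerange would do it (the Pfactor field layout here is from memory, so
check the documentation before using it):

    # Emit a Pfactor line for every prime exponent in a range.
    # Assumed field layout: exponent, bits trial factored, and a flag
    # saying whether the exponent has already had an LL test.
    from sympy import primerange

    TF_BITS = 56    # however deep you chose to trial factor
    for p in primerange(1000000, 1100000):
        print("Pfactor=%d,%d,1" % (p, TF_BITS))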

Regards
Brian Beesley


Mersenne: Optimal choice of E in P-1 computations

2003-03-09 Thread Daran
Some time ago I raised the question on this list of whether the client's
choice of the E parameter was optimal in P-1 calculations.  I gave a
somewhat handwavy argument in support of the claim that IF it is worthwhile
choosing E=4 over E=2, i.e., if the benefits in additional factors found
outweigh the cost in extra processing time[1], THEN it should also be
worthwhile choosing E=6, and maybe even E=8 or E=12.  I also argued on
empirical grounds that, for D=420 at least, E=4 and E=12 would need to yield
roughly 2% and 10% respectively more stage 2 factors than E=2 for this to be
worthwhile.[2]

From a theoretical point of view, Peter Montgomery's thesis, as suggested by
Alex Kruppa, is clearly relevant.  (I don't have the URL to hand, but
someone is sure to post it.) Unfortunately it's somewhat beyond my
mathematical ability to grasp.  Therefore, I have concentrated on attempting
to collect empirical evidence, as follows.

I have done a great many P-1s of doublecheck assignments with E=4.  Out of
the 35 stage 2 factors found[3], 1 was 'extended', i.e., would not have been
found using E=2.

In the hope of more quickly collecting data, I have also redone, to 'first
time test' limits, every entry in pminus1.txt which had previously been done to
B1=B2=1000, 2000, and 3000.  For these exponents, all in the 1M-3M ranges,
the client was able to choose a plan with E=12.  Unfortunately, I found far
fewer factors in either stage 1 or stage 2 than I would expect, which
suggests to me that exponents in this range have had additional factoring
work (possibly ECM) not recorded in the file.  Of particular concern is the
possibility that in addition to reducing the number of factors available for
me to find, it may have upset the balance between 'normal' and 'extended'
P-1 factors - the very ratio I am trying to measure.  Consequently I am
inclined to exclude these results, though I report them for completeness:

Of the 10 stage 2 factors found, 2 were extended.  They are:-

P-1 found a factor in stage #2, B1=2, B2=395000.
UID: daran/1, M1231753 has a factor: 591108149622595096537
591108149622595096537-1 = 2^3*3*11*743*2689*909829*1231753

P-1 found a factor in stage #2, B1=3, B2=547500.
UID: daran/1, M2008553 has a factor: 9050052090266148529
9050052090266148529-1 = 2^4*3^2*7*71*79*796933*2008553
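
These are easy to sanity-check in Python: multiply the factorization back
out, and confirm that the largest prime factor of (f-1)/p exceeds the B2
used, which is what makes the factor 'extended':

    # Verify both extended factors: the product of the listed primes must
    # equal f-1, and the largest factor other than the exponent p itself
    # must exceed B2 (so ordinary stage 2 could not have found f).
    cases = [
        (591108149622595096537, 1231753, 395000,
         [2]*3 + [3, 11, 743, 2689, 909829, 1231753]),
        (9050052090266148529, 2008553, 547500,
         [2]*4 + [3]*2 + [7, 71, 79, 796933, 2008553]),
    ]
    for f, p, B2, qs in cases:
        prod = 1
        for q in qs:
            prod *= q
        assert prod == f - 1
        print(max(q for q in qs if q != p) > B2)   # True in both cases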

Finally, with George's permission, I have done a small number of P-1s of
doublechecking assignments with a client modified to use D=420, E=12 - a
plan not available with the standard clients.  So far, I have found only one
stage 2 factor, which was not extended.  I will continue to search for more.

Of particular interest with E=12 extended factors is whether they would
have been found with a lower value of E.  E=12 will find all factors that
E=4 and E=6 would have found, and some not found by any lower E.  My
handwavy argument predicted that E=6 should yield on average twice as many
extended factors as E=4.  I'm hoping that someone (Alex Kruppa?) might
have a tool to analyse extended factors to determine their minimal E.  If
not, I will write one.
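
Something along these lines ought to do it (a sketch only: it assumes
stage 2 accumulates x^((mD)^E) - x^(n^E) for each prime q = mD - n in
(B1, B2] with 0 < n < D and gcd(n, D) = 1, and it approximates the order
of x by f-1; the client's actual pairing details may differ):

    # Find the minimal Brent-Suyama exponent E that would catch a given
    # stage 2 factor f of M_p, under the assumptions stated above.
    from math import gcd
    from sympy import factorint, isprime

    def minimal_E(f, p, B1, B2, D=420, candidates=(2, 4, 6, 12)):
        t = (f - 1) // p                  # p itself is in the stage 1 exponent
        for q, k in factorint(t).items():
            if q <= B1:                   # stage 1 removes the B1-smooth part
                t //= q ** k
        if t <= B2:
            return 1                      # ordinary stage 1/2 factor; no extension needed
        for E in sorted(candidates):
            for m in range(B1 // D + 1, B2 // D + 2):
                lhs = pow(m * D, E, t)
                for n in range(1, D):
                    q = m * D - n
                    if (gcd(n, D) == 1 and B1 < q <= B2 and isprime(q)
                            and lhs == pow(n, E, t)):
                        return E
        return None                       # not explained by any candidate E

It would be called as minimal_E(f, p, B1, B2) with the limits the run
actually used.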

In conclusion, the evidence I have been able to gather, though statistically
insignificant, does not tend to exclude the hypothesis that a higher E would
be worthwhile.

[1]There is also a memory cost, but this is low in comparison with the costs
associated with the D parameter.  For example, for an exponent in the
7779000-9071000 range, in which I am working, D=420, E=4 consumes 446MB, and
because of the client's conservative programming, 465MB must be 'allowed'
before it will choose this plan.  The next level down is D=210, E=4 which
requires 299MB.  Using the modified client with E=12 adds an extra 37MB to
these requirements - memory which is available and going spare anyway if the
amount allowed is between about 350MB and 465MB.

Another way to look at this is to say that there is no memory cost
associated with increasing E for a given value of D.  The memory is either
available, or it is not.

[2]Assuming that the current algorithm for determining optimal B1 and B2
values is accurate, and that this routine would be modified to make it
aware of the costs and benefits of differing values of E.

[3]This total includes both prime components of a composite factor found in
a single P-1 run, since neither was extended.

Regards

Daran