Re: Mersenne: please recommend a machine
Whoops. I just sent a reply to Spike66's query on new machines to a completely different list by accident. Total brainfart. Here's what I meant to post here:

> Some of you hardware jockeys please give me a clue. I have two machines
> at home running GIMPS 24-7. One is a P4-2GHz. The other is a 5 yr old
> 350MHz PII, which is in need of a tech refresh. Clearly there is more to
> computer performance than clock speed, but for GIMPS I suppose clock
> speed is everything. Is it? My other machine already has a DVD writer,
> networked etc, so I need not rebuy that. What should I buy? I have no
> hard spending limit, but I am looking for value and suppose that a
> thousand dollar machine would be more than adequate. Does AMD vs Intel
> matter? Does bus speed matter?

The P4s significantly outperform currently available AMD processors on Mersenne work because of their SSE2 instructions. AMD CPUs are faster at some things at the same clock speed; however, P4s are available at much higher clock speeds, which more than compensates. Also, the Intel motherboard chipsets tend to have better PCI and AGP implementations, with higher I/O throughput and better compatibility.

My current preferred machine, on a balanced cost/performance basis, is as follows:

P4 2.53GHz (533MHz bus, retail version w/ Intel heatsink)
Asus or Intel brand motherboard based on the i845PE chipset, with onboard LAN and sound
512MB-1GB of 333MHz DDR RAM (aka PC2700)
NVIDIA GeForce4 Ti4200
Seagate Barracuda ATA IV or V, 80 or 120GB
Toshiba 16X DVD-ROM
TDK 48X CD-RW

...dropped into a nice solid 'aluminum' chassis such as an Antec or Lian Li with temperature-controlled 80mm 'whisper' fans, and a quality 300W or so power supply. My current computer is way too noisy; I want to rectify that.

Note: I haven't actually BOUGHT this machine yet, but I'm getting closer. Also, Intel CPU prices are dropping again as they just made a wholesale price adjustment, so it's quite possible the cost/performance point is about to move up another notch, to 2.66GHz.
Re: Mersenne: P-1 on PIII or P4?
Daran wrote:
> On Thu, Mar 06, 2003 at 08:12:31PM -0800, Chris Marble wrote:
> > Daran wrote:
> > > Whichever machine you choose for P-1, always give it absolutely as much
> > > memory as you can without thrashing. There is an upper limit to how much
> > > it will use, but this is probably in the gigabytes for exponents even in
> > > the current DC range.
> >
> > So I should use the PIII with 1 3/4 GB of RAM to do nothing but P-1.
>
> This depends upon what it is you want to maximise. If it's your effective
> contribution to the project, then yes.

I like my stats, but I could certainly devote one machine out of 20 to this. Assume I've got 1GB of RAM. Do the higher B2s mean I should use a P4 rather than a P3 for this task?

> Unfortunately, pure P-1 work is not supported by the Primenet server, so it
> requires a lot of ongoing administration by the user.
>
> Alternatively, if the server is currently making assignments in your
> desired range, then you could obtain them by setting 'Always have this many
> days of work queued up' to 90 days - manual communication to get some
> exponents - cut and paste from worktodo.ini to a worktodo.saved file -
> manual communication to get some more - cut and paste - etc. This is what
> I do.
>
> The result of this will be a worktodo.saved file with a lot of entries that
> look like this
>
> DoubleCheck=8744819,64,0
> DoubleCheck=8774009,64,1
> ...
>
> (or 'Test=...' etc.) Now copy some of these back to your worktodo.ini file,
> delete every entry ending in a 1 (these are already P-1 complete), change
> 'DoubleCheck=' or 'Test=' into 'Pfactor=', and change the 0 at the end to a
> 1 if the assignment was a 'DoubleCheck'.

Would I unreserve all the exponents that are already P-1 complete? If I don't change the DoubleCheck into Pfactor, couldn't I just let the exponent run and then, sometime after P-1 is done, move the entry and the two temp files over to another machine to finish it off?

> If you're willing to do all this, then there's another optimisation you
> might consider. Since it's only stage 2 that requires the memory, you could
> devote your best machine(s) to this task, using your other boxes to feed
> them by doing stage 1.

That sounds like more work than I care to do. I can see having one machine do P-1 on lots of double-checks.

> A couple of other points: You are limited in the CPU menu option to 90% of
> physical memory, but this may be overridden by editing local.ini, where you
> can set available memory to physical memory less 8MB.

As an mprime user I edit the local.ini file all the time. Per your notes I upped *Memory to 466.

--
[EMAIL PROTECTED] - HMC UNIX Systems Manager
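A quick note on that *Memory edit: below is a minimal Python sketch for making the same change to local.ini across several boxes, assuming the layout described in this thread (a key ending in 'Memory' followed by a plain megabyte value). The file path and the exact key spellings are assumptions, so check them against your own local.ini, and stop mprime before running it.

    # Minimal sketch: raise every '*Memory' setting in mprime's local.ini to
    # the value discussed above (physical memory less ~8MB). Key names such as
    # DayMemory/NightMemory/Memory are assumed, not verified.
    import re

    LOCAL_INI = "local.ini"      # path assumed; adjust for your install
    NEW_MEMORY_MB = "466"

    with open(LOCAL_INI) as f:
        text = f.read()

    # Replace only the number after any key ending in 'Memory',
    # leaving the rest of each line untouched.
    text = re.sub(r"(?m)^(\s*\w*Memory\s*=\s*)\d+", r"\g<1>" + NEW_MEMORY_MB, text)

    with open(LOCAL_INI, "w") as f:
        f.write(text)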
Mersenne: please recommend a machine
Some of you hardware jockeys, please give me a clue. I have two machines at home running GIMPS 24-7. One is a P4-2GHz. The other is a five-year-old 350MHz PII, which is in need of a tech refresh. Clearly there is more to computer performance than clock speed, but for GIMPS I suppose clock speed is everything. Is it? My other machine already has a DVD writer, is networked, etc., so I need not rebuy that. What should I buy? I have no hard spending limit, but I am looking for value, and suppose that a thousand-dollar machine would be more than adequate. Does AMD vs Intel matter? Does bus speed matter?

spike
Re: Mersenne: P-1 on PIII or P4?
On Thu, Mar 06, 2003 at 02:23:59PM +, Brian J. Beesley wrote:
> On Thursday 06 March 2003 13:03, Daran wrote:
> > However, some time ago, I was given some information on the actual P-1
> > bounds chosen for exponents of various sizes, running on systems of
> > various processor/memory configurations. It turns out that P4s choose
> > *much deeper* P-1 bounds than do other processors. For example:
> >
> > 8233409,63,0,Robreid,done,,4,45,,Athlon,1.0/1.3,90
> > 8234243,63,0,Robreid,done,,4,45,,Celeron,540,80
> > 8234257,63,0,Robreid,done,,45000,742500,,P4,1.4,100
> >
> > The last figure is the amount of available memory. The differences
> > between 80MB and 100MB, and between 8233409 and 8234257, are too small
> > to account for the near doubling of the B2 bound in the case of a P4.
>
> Yes, that does seem odd. I take it the software version is the same?

The only information I have is in the table; however, I do not think that differences between the versions can account for the magnitude of the discrepancy.

When I first received the data, I checked a few of the exponents to see what bounds my client chose. (I don't recall which version I was running at that time.) The results were a perfect match with the non-P4s, given an identical memory allowance. I've just done a similar experiment with my current client (2.22.2), testing these three exponents with 80, 100, and 120MB. The results were:

 MB    B1      B2
 80    45000   472500
100    45000   585000
120    45000   585000

(There were no differences between the three exponents.)

I noticed a slight increase in the limits I was getting when I upgraded to this version, and I think that's what we're seeing here. Even with 120MB, the B2 value on my Duron is significantly lower than with the P4 on 100MB.

> The only thing that I can think of is that the stage 2 storage space for
> temporaries is critical for exponents around this size, such that having
> 90 MBytes instead of 100 MBytes results in a reduced number of temporaries,
> therefore a slower stage 2 "iteration time", therefore a significantly
> lower B2 limit.

George modified the software recently so as to estimate memory requirements more conservatively. My reason for performing the above test with 120MB was to guarantee that my client saw more temporaries than did the P4 with 100MB.

I looked in the source for an explanation for this some time ago. There is a slight difference in the codepath taken by SSE2-capable processors, but I didn't understand why it should have any such effect.

> I note also that the limits being used are typical of DC assignments...

That's all the info I have.

> ...For exponents a bit smaller than this, using a P3 with memory configured
> at 320 MBytes (also no OS restriction & plenty of physical memory to
> support it) but requesting "first test" limits
> (Pfactor=<exponent>,<how_far_factored>,0) I'm getting B2 ~ 20 B1, e.g.
>
> [Thu Mar 06 12:07:46 2003]
> UID: beejaybee/Simon1, M7479491 completed P-1, B1=9, B2=1732500, E=4,
> WY1: C198EE63

I've just tried that exponent. Setting factored bits to 63, and available memory to 320MB, I get the same limits as you. Changing it to a doublecheck, I get B1=4, B2=61. More generally, I find B2 ~ 15 B1 for doublechecks.

> The balance between stage 1 and stage 2 should not really depend on the
> limits chosen, since the number of temporaries required is going to be
> independent of the limit, at any rate above an unrealistically small value.

I agree. The number of temporaries used depends upon the choice of the D and E parameters, and D is capped at max(2310, sqrt(B2-B1)). However, this cap only comes into effect with small exponents. In most cases the available memory will be the limiting factor.

> Regards
> Brian Beesley

Daran G.
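To put numbers on the discrepancy discussed above, here is a small, purely illustrative Python sketch that works out the B2/B1 ratios from the bounds quoted in this exchange; it is arithmetic on the reported figures only, not a reconstruction of the client's bound-selection code, and the labels are taken from the messages above.

    # Illustration only: B2/B1 ratios for the P-1 bounds quoted in this thread.
    bounds = [
        ("P4 1.4, 100MB (server data)", 45000, 742500),
        ("Duron, 80MB (test above)",    45000, 472500),
        ("Duron, 100MB (test above)",   45000, 585000),
        ("Duron, 120MB (test above)",   45000, 585000),
    ]

    for label, b1, b2 in bounds:
        print("%-30s B2/B1 = %4.1f" % (label, float(b2) / b1))

    # Prints 16.5 for the P4 row against 10.5-13.0 for the others, which is
    # the anomaly under discussion; first-time tests reportedly run nearer
    # B2 ~ 20 B1, doublechecks nearer B2 ~ 15 B1.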
Re: Mersenne: P-1 on PIII or P4?
On Thu, Mar 06, 2003 at 08:12:31PM -0800, Chris Marble wrote:
> Daran wrote:
> >
> > Whichever machine you choose for P-1, always give it absolutely as much
> > memory as you can without thrashing. There is an upper limit to how much
> > it will use, but this is probably in the gigabytes for exponents even in
> > the current DC range.
>
> So I should use the PIII with 1 3/4 GB of RAM to do nothing but P-1.

This depends upon what it is you want to maximise. If it's your effective contribution to the project, then yes. Absolutely! This is what I do on my Duron 800 with a 'mere' 1/2GB. The idea is that the deep and efficient P-1 you do replaces the probably much less effective effort that the final recipient of the exponent would otherwise have made (or not made, in the case of a few very old clients that might still be running). I've not done any testing, but I'm pretty sure that it would be worthwhile to put any machine with more than about 250MB available to exclusive P-1 use.

On the other hand, you will do your ranking in the producer tables no favours if you go down this route.

> It's an older Xeon with 2MB cache. Will that help too?

You'll have to ask George whether there is a codepath optimised for this processor. But whether there is or there isn't should affect only the absolute speed, not the trade-off between P-1 and LL testing. I can't see how a 2MB cache can do any harm, though.

> How would I do this? I see the following in undoc.txt:
>
> You can do P-1 factoring by adding lines to worktodo.ini:
> Pfactor=exponent,how_far_factored,has_been_LL_tested_once
> For example, Pfactor=1157,64,0

Unfortunately, pure P-1 work is not supported by the Primenet server, so it requires a lot of ongoing administration by the user.

First you need to decide which range of exponents is optimal for your system(s). (I'll discuss this below.) Then you need to obtain *a lot* of exponents in that range to test. I do roughly eighty P-1s on DC exponents in the ten days or so it would take me to do a single LL.

The easiest way to get your exponents is probably to email George and tell him what you want. Alternatively, if the server is currently making assignments in your desired range, then you could obtain them by setting 'Always have this many days of work queued up' to 90 days - manual communication to get some exponents - cut and paste from worktodo.ini to a worktodo.saved file - manual communication to get some more - cut and paste - etc. This is what I do.

The result of this will be a worktodo.saved file with a lot of entries that look like this:

DoubleCheck=8744819,64,0
DoubleCheck=8774009,64,1
...

(or 'Test=...' etc.) Now copy some of these back to your worktodo.ini file, delete every entry ending in a 1 (these are already P-1 complete), change 'DoubleCheck=' or 'Test=' into 'Pfactor=', and change the 0 at the end to a 1 if the assignment was a 'DoubleCheck'. (A scripted version of this step is sketched below.)

When you next contact the server, any completed work will be reported, but the assignments will not be unreserved unless you act to make this happen. The easiest way to do this is to set 'Always have this many days of work queued up' to 1 day, and copy your completed exponents from your worktodo.saved file back to your worktodo.ini (not forgetting any that were complete when you got them). You do not need to unreserve exponents obtained directly from George.

Like I said, it's *a lot* of user administration.
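If you would rather script the fiddly conversion step than edit by hand, here is a rough Python sketch of exactly the recipe above. The file names (worktodo.saved, worktodo.ini) and line formats are the ones shown in this message; treat it as an untested starting point, and stop the client before appending to worktodo.ini.

    # Rough sketch of the manual recipe above: skip entries already P-1
    # complete (trailing 1), rewrite 'Test='/'DoubleCheck=' as 'Pfactor=',
    # and set the last field to 1 for double-checks (already LL tested once),
    # 0 otherwise.
    def to_pfactor(entry):
        entry = entry.strip()
        if not (entry.startswith("Test=") or entry.startswith("DoubleCheck=")):
            return None                     # leave anything unexpected alone
        work_type, args = entry.split("=", 1)
        fields = args.split(",")
        if fields[-1] == "1":
            return None                     # P-1 already done on this exponent
        fields[-1] = "1" if work_type == "DoubleCheck" else "0"
        return "Pfactor=" + ",".join(fields)

    with open("worktodo.saved") as saved, open("worktodo.ini", "a") as todo:
        for line in saved:
            converted = to_pfactor(line)
            if converted:
                todo.write(converted + "\n")

Note that it appends to the live worktodo.ini, and that you still need to keep worktodo.saved around for the unreservation step described above.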
It's not nearly as complicated as it sounds, once you get into the routine, but it's definitely not something you can set up and then forget about.

If you're willing to do all this, then there's another optimisation you might consider. Since it's only stage 2 that requires the memory, you could devote your best machine(s) to this task, using your other boxes to feed them by doing stage 1. This is assuming that they're networked together; moving multimegabyte data files via Floppy Disk Transfer Protocol is not recommended.

[...]

> > If you are testing an exponent which is greater than an entry in the
> > fifth column, but less than the corresponding entry in the third column,
> > then avoid using a P4. This applies to all types of work.

Actually it's worse than this. The limits are soft, so if you are testing an exponent *slightly* less than an entry in column 5, or *slightly* greater than one in column 3, then you should avoid a P4.

Choice of exponent range

Stage two's memory requirements are not continuous. This remark is probably best illustrated with an example: on my system, when stage 2-ing an exponent in the range 777 through 9071000, the program uses 448MB. If that much memory isn't available, then it uses 241MB. If that's out of range, then the next level down is 199MB, and so on. There are certainly usage levels higher than I can give it.

The benefits of using the higher memory levels are threefold.

1. The algorithm runs faster.
2. The program responds by de
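As a toy illustration of why those discrete levels matter when deciding how much memory to allow, here is a short Python sketch using just the three levels quoted above (199, 241 and 448MB, which are specific to that system, exponent range and client version); it shows that RAM allowed between two levels buys nothing extra for stage 2.

    # Toy illustration: stage 2 picks one of a few discrete memory plans, so
    # any RAM allowed between two plan sizes is effectively unused. The levels
    # below are the ones quoted above for one system/exponent range - yours
    # will differ, and higher levels exist.
    STAGE2_LEVELS_MB = [199, 241, 448]

    def plan_used(available_mb):
        """Largest known plan that fits within the allowed memory."""
        usable = [lvl for lvl in STAGE2_LEVELS_MB if lvl <= available_mb]
        return max(usable) if usable else None

    for allowed in (200, 240, 250, 440, 466):
        print("allow %3d MB -> stage 2 uses %s MB" % (allowed, plan_used(allowed)))

    # Allowing 440MB still only buys the 241MB plan; you need to allow at
    # least 448MB to reach the next level up.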