Re: Mersenne: V20 beta #4 (will the beta never end?)

2000-04-20 Thread David Campeau

Hi all,

Seeing that a stage 1 GCD will not always result in a factor being found, are
we not better off waiting until the end of stage 2? That way, we could factor
deeper.

Perhaps this could be yet another option?

Some preliminary data (on my machine at home):
Total P-1 test = 91
Stage 1 factor = 2
Stage 2 factor = 1

So on my machine the stage 1 GCD saved me 2 stage 2 runs, about 2 hours of CPU,
but at a cost of about 90 * 230 sec of stage 1 GCDs = 20,700 sec, or 5:45
hours. Seems to me that we could save a little bit by forgoing the stage 1 GCD.

regards,

David Campeau



Re: Mersenne: V20 beta #4 (will the beta never end?)

2000-04-20 Thread Brian J. Beesley

On 20 Apr 00, at 7:47, David Campeau wrote:

 Seeing that a stage 1 GCD will not always result in a factor being found,
 are we not better off waiting until the end of stage 2? That way, we could
 factor deeper.
 
 Perhaps this could be yet another option?

Maybe I'm guilty here; the very first pre-pre-release didn't run the 
GCD after Stage 1 unless an "undocumented" option was set in 
prime.ini (Stage1GCD=1). I commented on this to George and he changed 
it. My reasoning was that early timings on exponents typical of 
"real" PrimeNet assignments seemed to suggest that it made sense to 
run the Stage1 GCD automatically.

There's also a slight edge in clarity from this approach. If you run 
a GCD at the end of each stage then you know where you are. If you 
run GCD only at the end of Stage 2 but then don't run Stage 2 because 
of memory constraints (which is effectively the default option, given 
DayMemory = NightMemory = 8 MBytes) you might as well not bother 
running Stage 1 either, since you won't find any factors that Stage 1 
uncovered until you eventually run the GCD.

There's another consideration, too, in so far that running Stage 2 is 
expensive in terms of memory resources; avoiding this complication 
tends to push the balance in favour of running the extra GCD.

However, there may be a case for changing the strategy - either running 
just one GCD when you're not going to go any further, or (optionally) 
omitting the Stage 1 GCD. This becomes more relevant as 
exponent sizes increase, as the stage run time appears to be (more or 
less) linearly related to both exponent size and B limit, whereas the 
run time of the GCD is independent of the B limit but increases very 
non-linearly with exponent size.

The cost/benefit of running the extra Stage 1 GCD is zero if GCD run 
time is equal to Stage 2 run time * probability of finding a factor 
in Stage 1. If the GCD runs more quickly than this, it's definitely 
worth running. The point is that running Stage 2 is a waste of time 
if you already have a factor in Stage 1 but you just haven't bothered 
to unearth it. If you're working with small exponents, it may be 
that, although the extra GCD is relatively cheap, it's still not 
worth running it because e.g. heavy trial factoring has made the 
chance of actually finding a factor in Stage 1 very low - i.e. in 
this case we're running Stage 1 only as a means of getting to Stage 
2.
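
As a rough Python illustration of that break-even rule (this is not
Prime95's actual logic; the ~230 second GCD time, the ~1 hour Stage 2 time
and the 2-in-91 probability are simply David's figures from earlier in the
thread):

    # The extra Stage 1 GCD pays for itself when its run time is below the
    # Stage 2 time it is expected to save.
    def stage1_gcd_worthwhile(gcd_seconds, stage2_seconds, p_factor_stage1):
        return gcd_seconds < p_factor_stage1 * stage2_seconds

    # David's figures: ~230 s per GCD, ~1 hour per Stage 2 run, and
    # 2 Stage 1 factors found in 91 P-1 tests.
    print(stage1_gcd_worthwhile(230.0, 3600.0, 2.0 / 91.0))   # False -> skip it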

It _may_ be sensible & cost effective to automate this decision (i.e. 
"guess" the probability of finding a factor in Stage 1 from the B1 
limit and the trial factoring depth, and estimate the GCD & Stage 2 
run times) & use that as the default option. In which case we should 
probably have manual override capability in _both_ directions.
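
A sketch of what such a default-plus-override could look like, assuming a
hypothetical three-valued setting (force off, force on, otherwise automatic)
and taking the Stage 1 probability estimate as a given rather than deriving
it from the B1 limit and trial-factoring depth:

    # A forced setting wins; otherwise fall back to the automatic
    # cost/benefit estimate from the sketch above.
    def should_run_stage1_gcd(forced, gcd_seconds, stage2_seconds,
                              p_factor_stage1):
        if forced == 0:      # manual override: never run the Stage 1 GCD
            return False
        if forced == 1:      # manual override: always run it
            return True
        # automatic default: run it only when the expected saving exceeds
        # the cost of the extra GCD
        return gcd_seconds < p_factor_stage1 * stage2_seconds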


Regards
Brian Beesley



Re: Mersenne: Facelift (round 3)

2000-04-20 Thread Alan Vidmar
Based upon what I have seen so far, I want to add a vote for the addition
of a side frame. Since the Header/Menu contains so many links, it's really
the only nice way of handling it.

All in favor of a side frame please speak up.

Alan

On 19 Apr 2000, at 21:00, George Woltman wrote:

Hi again,

I looked at the MS site with its dropdown menus. It was far more javascript
than I want to wade through. So I've come up with a simpler header that
incorporates a menu. Let me know if you like this (especially as compared to
the more traditional menu on the side). It does get us more horizontal
real-estate to display the status and benchmark tables. The latest
incarnation (with non-operational menus) can be viewed at
http://www.mersenne.org/newhtml2/prime.htm as opposed to the previous
http://www.mersenne.org/newhtml/prime.htm. More comments are welcome.

Thanks again,
George

"A programmer is a person who turns coffee into software."
Alan R. Vidmar   Assistant Director of IT
Office of Financial Aid   University of Colorado
[EMAIL PROTECTED]   (303) 492-3598
*** This message printed with 100% recycled electrons ***


Re: Mersenne: V20 beta #4 (will the beta never end?)

2000-04-20 Thread George Woltman

Hi,

At 07:47 AM 4/20/00 -0400, David Campeau wrote:
Seeing that a stage 1 GCD will not always result in a factor being found, are
we not better off waiting until the end of stage 2? That way, we could factor
deeper.

Brian Beesley's timings showed that running the Stage 1 GCD will
save time in the long run for exponents above 4 or 5 million.

Perhaps this could be yet another option?

It is an option.  Set Stage1GCD=0 in prime.ini.

Some preliminary data (on my machine at home):
Total P-1 test = 91
Stage 1 factor = 2
Stage 2 factor = 1

So on my machine the stage 1 GCD saved me 2 stage 2 runs, about 2 hours of CPU,
but at a cost of about 90 * 230 sec of stage 1 GCDs = 20,700 sec, or 5:45
hours. Seems to me that we could save a little bit by forgoing the stage 1 GCD.

I know you are working on the smallest double-checks (about 3 million
or so).  Thus, it is not surprising you would be better off not running
the stage 1 GCD.  First-time testers will be better off running the stage 1
GCD, and most double-checkers will be neutral to slightly better off.

The GCD cost grows at an N (log N)^2 rate.  The stage 2 cost grows at
N log N (the cost of an FFT multiply) times something (the stage 2
bounds grow as N increases).  I don't know if that something is
O(N) or worse.  It doesn't matter.  It does show that at
some point you are better off doing the stage 1 GCD for a 2%
chance of saving the cost of stage 2.
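
To make the scaling concrete, here is a small illustration of the growth
rates quoted above; the constants are arbitrary, and treating the stage 2
bound growth as roughly O(N) is only an assumption made for the example, so
the trend in the ratio is the only meaningful output:

    import math

    def gcd_cost(n):                  # ~ N (log N)^2
        return n * math.log(n) ** 2

    def stage2_cost(n):               # ~ N log N times the bound growth,
        return n * math.log(n) * n    #   assumed here to be about O(N)

    for n in (1e6, 3e6, 10e6, 30e6):
        print(int(n), gcd_cost(n) / stage2_cost(n))
    # The ratio of GCD cost to Stage 2 cost shrinks as N grows, so at some
    # exponent size even a ~2% chance of skipping Stage 2 outweighs the
    # cost of the extra GCD.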

Regards,
George




Re: Mersenne: V20 beta #4 (will the beta never end?)

2000-04-20 Thread Nathan Russell



From: George Woltman [EMAIL PROTECTED]
To: "David Campeau" [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)
Date: Thu, 20 Apr 2000 11:43:46 -0400

Hi,

At 07:47 AM 4/20/00 -0400, David Campeau wrote:
Seeing that a stage 1 GCD will not always result in a factor being found, are
we not better off waiting until the end of stage 2? That way, we could factor
deeper.

Brian Beesley's timings showed that running the Stage 1 GCD will
save time in the long run for exponents above 4 or 5 million.

This will, of course, be almost every exponent within a month or so of v20's 
release to the public - as it stands, most of the lower exponents are being 
taken by those of us who connect immediately after they expire.

Some preliminary data (on my machine at home):
Total P-1 test = 91
Stage 1 factor = 2
Stage 2 factor = 1

So on my machine the stage 1 GCD saved me 2 stage 2 runs, about 2 hours of CPU,
but at a cost of about 90 * 230 sec of stage 1 GCDs = 20,700 sec, or 5:45
hours. Seems to me that we could save a little bit by forgoing the stage 1 GCD.

Of course, some users will be inconvenienced by the memory usage in Stage 2 
and may want to take that into consideration.  Since I myself have 128 megs 
of memory, and rarely run anything except Netscape, AIM, MS Office and 
shareware games, P-1 for PrimeNet is not a major problem for me, aside from 
the slight delay in my assignments when I get new work.


I know you are working on the smallest double-checks (about 3 million
or so).  Thus, it is not surprising you would be better off not running
the stage 1 GCD.  First-time testers will be better off running the stage 1
GCD, and most double-checkers will be neutral to slightly better off.

And, of course, double-checkers tend to be people who have more of an 
interest in the project, and may be running multiple clients.  They are, 
therefore, more likely to take the time to analyze the documentation.  Not 
to stereotype; that's just a general pattern that common sense leads 
me to expect.


The GCD cost grows at an N (log N)^2 rate.  The stage 2 cost grows at
N log N (the cost of an FFT multiply) times something (the stage 2
bounds grow as N increases).  I don't know if that something is
O(N) or worse.  It doesn't matter.  It does show that at
some point you are better off doing the stage 1 GCD for a 2%
chance of saving the cost of stage 2.

Regards,
George




Mersenne: Sorry for the double post

2000-04-20 Thread Nathan Russell

Well, it wasn't strictly a double post - my browser crashed just as I was 
sending the second, and I wasn't sure whether it'd been sent, so I rewrote 
it.

Nathan



Mersenne: Re: factoring

2000-04-20 Thread EWMAYER

Brian Beesley writes (re. Mfactor):

 I found that Mfactor on a 21164 was significantly
 faster than Prime95 on a system which runs LL tests at 
 about the same rate. However the operational
 inefficiencies caused by driving Mfactor manually
 were outweighing this even before an improved LL
 tester for the Alpha platform came along (Thanks!)

Re. the manual aspect, Martijn Kruithof suggests

 We are talking unix here aren't we?

 You could make an input file with the factors (As you
 must enter them, including any stuff you must enter
 before you enter the first factor) and then run
 mfactor in the background

 nohup mfactor < input.file > output.file &

 then you can log out.

..which significantly eases the manual aspect, but not
the automatic reporting one.

I wrote:

  I think the suggestion to have one machine (whether
  that be a PC running Prime95 or a MIPS or Alpha
  running Mfactor) do all the factoring needed to keep
  multiple non-PC machines well-fed with exponents is
  a good one, since otherwise, juggling factoring and
  LL work becomes a pain.

to which Brian replies:

 Yes. Despite the inefficiency I find the best thing to
 do is to use an Intel PC running mprime/linux or
 Prime95/windoze to do the factoring. The point is one
 can simply take the worktodo.ini lines off PrimeNet
 (manual testing), change "Test" to "Factor" & stuff
 the file into George's program - without even
 bothering to check the factoring limits. Those that
 are already factored deep enough just get thrown out
 straight away.

 This is even more the case since v20 with P-1 factoring
 capability came out. Now one changes

 Test=exponent,depth

 to

 Pfactor=exponent,depth,0

 or

 DoubleCheck=exponent,depth

 to

 Pfactor=exponent,depth,1

 & waits for any necessary trial factoring plus the
 P-1 factoring to be run.

 A P100 running (trial & P-1) factoring will easily
 keep a couple of PII-400s (or equivalent) busy running
 nothing but LL tests. The other advantage of doing it
 this way is that the factoring system will report
 results to PrimeNet for you, automatically if you so
 choose.
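
A throwaway sketch of the substitution Brian describes (the worktodo.ini
line formats are as quoted above; the file handling is just one illustrative
way to do it, not part of his setup):

    # Convert manual-testing worktodo.ini lines into P-1 factoring lines:
    #   Test=exponent,depth        -> Pfactor=exponent,depth,0
    #   DoubleCheck=exponent,depth -> Pfactor=exponent,depth,1
    with open("worktodo.ini") as f:
        for line in f:
            line = line.strip()
            if line.startswith("Test="):
                print("Pfactor=" + line[len("Test="):] + ",0")
            elif line.startswith("DoubleCheck="):
                print("Pfactor=" + line[len("DoubleCheck="):] + ",1")
            else:
                print(line)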

Sounds like the right way to go to me, unless one has a
fast Alpha and really wants a thrills-a-minute kind of
ride. Note that the 21064 and 21164 are not as fast at
factoring as the 21264, but it seems a shame to have a
big fat L2 cache (the 21264s typically have 4 or 8MB)
and not use it (Mfactor gains very little in speed from
a large L2, since on the 21264, the 64KB L1 cache is
already big enough to hold a decently large small-primes
sieving table).

Come to think of it, factoring would be an excellent
application for a 21264 without an L2 cache, but I don't
know if they come that way (except perhaps in a
massively parallel setting, where the problem of
maintaining cache coherency in a multilevel cache
hierarchy often is nearly intractable). Then again, if
one turned a MP machine to factoring, GIMPS might soon
run out of factoring assignments, which would not please
those with 486s and other slower CPUs whose only useful
contribution to GIMPS is factoring.

Cheers,
-Ernst





Re: Mersenne: 65 bits (was Re: factoring)

2000-04-20 Thread Nathan Russell

(much snippage dealing with factoring on behalf of non-PC machines)

Then again, if
one turned a MP machine to factoring, GIMPS might soon
run out of factoring assignments, which would not please
those with 486s and other slower CPUs whose only useful
contribution to GIMPS is factoring.

Cheers,
-Ernst

It will only get worse for the 486 users when PrimeNet begins handing out 
exponents that will need factoring to 65 bits.  Of course, they can use 
FactorOverride, but that's not helpful to the network as a whole.

For those who don't know, due to CPU design reasons that I don't claim to 
understand, factoring to 65 bits takes easily four or five times as long as 
factoring to 64 bits.  In version 19, it was set to take place for exponents 
13.38M and up.  I believe PrimeNet will reach this point in a few months.

Currently, PrimeNet is set to award half credit for factoring.  Should that 
be adjusted once the LL testers begin to catch up with the factorers, and it 
becomes increasingly difficult for non-PC users to find fully factored 
assignments?

Nathan



Mersenne: Another distributed project

2000-04-20 Thread McMac

OK, so I know this might not be that warmly welcomed here, but a
few people have expressed concerns that while finding new Mersennes
is interesting, it has very limited practical value.

A while ago I heard about another distributed project called Casino-21
that would use people's PCs to run climate models and use the results
of these thousands of models to find out what is most likely to happen
over the next 50 years. The run time would be approx. 6 months minimum,
with high memory and disk space requirements as well. I said that I
was interested in the program and then forgot about it over the next few
months.

They have just sent me (well, not just me) an e-mail saying that things are
still moving along, and they have 20,000-odd e-mail addresses showing
interest in the program.

It's certainly an interesting idea, and one with some very real
consequences for science, politics and our general well-being. If
you're interested, the site is at http://www.climate-dynamics.rl.ac.uk

Clients won't be available for some time (and probably only for Linux
and Windows this year) but I think I might move to their project when
they do arrive.

McMac
When everything's coming your way, you're in the wrong lane.
