Mersenne Digest         Sunday, June 20 1999         Volume 01 : Number 585




----------------------------------------------------------------------

Date: Sat, 19 Jun 1999 13:43:34 -0700
From: Will Edgington <[EMAIL PROTECTED]>
Subject: Re: Mersenne: TI-92 Factoring, again

Brian J. Beesley writes:

   > If it really is that bad, then it's probably not worth doing.  I once
   > tested all the prime exponent Mersennes with exponents from about 10
   > million thru about 21 million for factors smaller than 2^33 or so,
   > using mersfacgmp on a Pentium 90MHz, in a couple of days.

   The factoring program I used to trial-factor the 10 million plus 
   digit Mersenne numbers with exponents up to 36 million up to 2^40 
   takes only 20 _seconds_ to trial factor all 159,975 numbers to a 
   depth of 2^32 on a PII-350. That's including the disk I/O involved in 
   reading the exponents from a file & writing the results. Possible 
   factors were checked for +/-1 (mod 8) and screened for divisibility 
   by the smallest 10 primes before passing to the factoring routine, 
   which was called 1,267,515 times.

That does sound like it's faster, but let me go thru some numbers.  I
don't recall exactly, but I'm sure my earlier run took less than 72
hours; 72 hours divided by 20 seconds is 72*3600/20 = 12960.  Brian
stopped at 2^32 and I went to at least 2^33 for a factor of two;
12960/2 = 6480.  His average exponent is about 34.5 million and mine
was less than half that at 15.5 million or so, meaning I checked about
twice as many potential factors; 6480/2 = 3240.  My exponent range
included 665,365 primes; 3240*159,975/665,365 is about 780.  My time
also included the disk I/O, the primes program to generate the
exponents and, perhaps, another program running, but call that even;
/usr/games/primes is fast. :)  A PII compared to a Pentium, I have no
numbers on, but 350 MHz vs. 90 MHz is not quite a factor of 4; 780/4
= 195.  Hm, I looked for all the factors, whereas it looks like the
data Brian sent me only has first factors, but I'm not sure what that
ratio would be; a (probably high) guess of 2 would leave a ratio of
about 97.  I can't think of anything else, so it does look like Brian's
code is a _lot_ faster.  On the other hand, the last time I compared
mersfacgmp to Prime95's factorer, the ratio wasn't this high (more
like 36 to 40, as I recall).
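
(Spelled out as a few lines of C, in case I've slipped a digit
somewhere - this just redoes the back-of-envelope arithmetic above,
nothing more:

/* Back-of-envelope speed ratio.  All inputs are the rough figures
 * quoted above; nothing here is measured. */
#include <stdio.h>

int main(void)
{
    double ratio = 72.0 * 3600.0 / 20.0;  /* 72 hours vs. 20 seconds   */
    ratio /= 2.0;                         /* 2^33 vs. 2^32 depth       */
    ratio /= 2.0;                         /* ~15.5M vs. ~34.5M exponents:
                                             twice the candidates each */
    ratio *= 159975.0 / 665365.0;         /* number of exponents       */
    ratio /= 4.0;                         /* 350 MHz PII vs. 90 MHz P5 */
    ratio /= 2.0;                         /* all factors vs. first only
                                             (probably a high guess)   */
    printf("apparent speed ratio: about %.0f\n", ratio);
    return 0;
}

Treat the output as a rough estimate only, of course.)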

So, Brian, care to make your code available to the rest of us? :)

If my numbers are close, your code may even be faster than George's.
Have you compared it to George's?  If so, how far off are my numbers
above?  And what did I forget to account for? :)

One other thing that's different is that mersfacgmp sieves using more
than 10 primes; I guess I'd better run some timing tests with smaller
sieves, which has been on my list for a long time now.  And compare it
with Prime95 again.

   The slower your factoring routine is, the more sieving out of trial
   factors helps the execution speed. Though there is always a cutoff
   point - executing even just a Fermat test to base 2 (to determine
   whether the trial factor is probably prime, with low confidence)
   takes longer than the actual trial factoring (the code involved is
   pretty well the same, but 2kp+1 > p).

Yes, I actually almost put a base-2 Fermat test in mersfac.c at one
point, before realizing that it was essentially the 'does it factor?'
check but longer, by exactly that ratio of 2kp+1 to p.
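
For anyone on the list who hasn't looked at this kind of code, here's a
rough C sketch of the basic check and of the screening Brian describes.
This is a toy illustration only - it is not mersfac.c or Brian's
factor95 code, and it only handles candidates that fit in 64 bits (the
128-bit intermediate product uses a gcc extension):

/* Trial-factoring M_p = 2^p - 1: a candidate q = 2kp+1 divides M_p
 * exactly when 2^p mod q == 1.  Candidates are screened the way Brian
 * describes: q must be 1 or 7 mod 8 and not divisible by any of the
 * smallest primes. */
#include <stdint.h>
#include <stdio.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)((unsigned __int128)a * b % m);  /* gcc/clang only */
}

/* Right-to-left binary exponentiation: returns 1 if q divides 2^p - 1. */
static int divides_mersenne(uint64_t p, uint64_t q)
{
    uint64_t r = 1, base = 2, e = p;
    while (e) {
        if (e & 1) r = mulmod(r, base, q);
        base = mulmod(base, base, q);
        e >>= 1;
    }
    return r == 1;
}

/* Cheap screening of a candidate before the modular exponentiation. */
static int candidate_ok(uint64_t q)
{
    static const uint64_t small[] = {3, 5, 7, 11, 13, 17, 19, 23, 29, 31};
    if (q % 8 != 1 && q % 8 != 7) return 0;
    for (int i = 0; i < 10; i++)
        if (q % small[i] == 0) return 0;
    return 1;
}

int main(void)
{
    uint64_t p = 23;                    /* M23 = 8388607 = 47 * 178481 */
    for (uint64_t k = 1; k <= 100; k++) {
        uint64_t q = 2 * k * p + 1;
        if (candidate_ok(q) && divides_mersenne(p, q))
            printf("2*%llu*%llu+1 = %llu divides M%llu\n",
                   (unsigned long long)k, (unsigned long long)p,
                   (unsigned long long)q, (unsigned long long)p);
    }
    return 0;
}

The real programs differ mainly in working in wider precision (and in
assembler) and in generating the candidates from a sieve instead of
screening them one at a time.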

   v19 has FFT sizes up to 2^22 (4 million), so it should be capable of
   handling exponents up to approx. 80 million - which is more than
   makes sense with current hardware.

Ah; then my info is out of date (not surprising; I haven't had time to
keep up for a while now).

                                                        Will
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sat, 19 Jun 1999 17:53:11 -0500
From: Ken Kriesel <[EMAIL PROTECTED]>
Subject: Mersenne: Poachers beware

I request that anyone running exponents that are not assigned to them
by either the IPS or George Woltman pause, and ask George for a 
nonoverlapping assignment.

Do not assume that, simply because an exponent has been assigned for
a lengthy time, it is being hoarded, is not being worked on for any
good reason, or is stuck on a terribly slow single system.

I run several dozen cpus in parallel, but am not currently using IPS.
I just emailed George a report of 24 LLtests, 9 of which are below
3,960,000, and at least 2 of which were already reported in by a poacher.
No doubt I will discover more poached ones soon, since I had exponents
assigned to me and still running that are lower than the status page
lists as fully tested once: 
"All exponents below 3,774,100 have been tested at least once".

At least some of the untested exponents below 3,960,000 were assigned to 
me in email by George Woltman.  George has been good enough to give me
assignments regularly that are below 5,260,000, because I am set up
with V2.6 primenet and V14.4 prime95, and neither will handle 
exponents above 5,260,000.

For reasons I won't go into right now, I feel legally bound not to
move to the IPS for many cpus.  Both the reclaiming of exponents due
to the intervention of USWest and the FBI in Aaron Blosser's prime95
effort, and the v17 shift count bug, have extended the supply of
exponents I can run.  When the assignments below 5,260,000 are
exhausted, my choices are to switch to double checking, run manually
above 5,260,000, code my own primenet server, or turn the cpu farm to
other tasks.  (In my darker moments, it occurs to me that this cpu
farm would be powerful for more fully factoring the exponents being
LL tested by someone who's been poaching numbers assigned to others.)
I anticipate running out of exponents to test this calendar year.

ALL exponents below 3,960,000 that I have been assigned have now issued
from the separate primenet server I run, each to one of my many client 
systems, which range from P5-120 to PII-400.  The last time I ran a 386 
on any LLtest was a double check in the 1.2M range.  When I discover 
that a straggler is on a slow box, I move it to one of the fastest to 
move the milestones more quickly.

Rarely, due to a system upgrade, an exponent gets lost.
George periodically reminds me of old ranges of mine that have not
completed and have escaped my notice.  Sometimes I 
manage to find and recover work in progress on backups.
If needed I restart the exponent from the first iteration.
Never have I returned an exponent to George untested, in 3+ years
and 3900+ exponents assigned & 3820+ completed to date.  If George
prompts me that an assignment is late I run it.  I think George was
referring to me when he mentioned a reliable V14 user recently.

I occasionally run exponents out of numerical order, to clean up small
ranges that George assigns to me, and ensure that my clients do not
run idle on a weekend or when I go on vacation.  (I was on vacation
June 7 when the poaching thread was in full flower.)  For a time this spring
I launched exponents in reverse order, since doing so would let the bank
of clients hit the 5M+ exponents for a longer time and finish up on the
quicker lower exponents at about the same time as the bigger slower ones.
When it became apparent that continuing to do so would hold back
completion of the range and the 4M-tested milestone, I queued up the
smallest exponents first.  These are now completing.

So, just in case I was not clear enough for you above:

"POACHERS KEEP OUT!"


Ken


>Return-Path: [EMAIL PROTECTED]
>X-Sender: woltman@dual400 (Unverified)
>Date: Fri, 29 Jan 1999 14:19:04 -0500
>To: Ken Kriesel <[EMAIL PROTECTED]>
>From: George Woltman <[EMAIL PROTECTED]>
>Subject: Re: mers: range requests
>
>At 04:04 PM 1/24/99 -0600, you wrote:
...
>>I'll need exponents below 5,260,000 for the foreseeable future.
>>I can delay but not prevent the exhaustion of available exponents.
>>Have you any more to give, that will keep me going while I look at
>>alternatives?
>
>You already have these:
>
>4191000,4192000,T,1/17/98,7/11/98,KK
>5258000,5259130,T,3/12/98,5/23/98,KK
>
>You can also have these:
>
>3494300,3498500,T,1/29/99
>3581700,3582000,T,1/29/99,,KK
>3584600,3585600,T,1/29/99,,KK
>3618100,3618300,T,1/29/99,,KK
>3640290,3640650,T,1/29/99,,KK
>3644000,3644500,T,1/29/99,,KK
>3678850,3679850,T,1/29/99,,KK
>3681850,3682100,T,1/29/99,,KK
>3878000,3879400,T,1/29/99,,KK
>3890000,3892380,T,1/29/99,,KK
>3977050,3977170,T,1/29/99,,KK
>3983000,3984600,T,1/29/99,,KK
>3991000,3991350,T,1/29/99,,KK
>
>Is that enough??
>
>Best regards,
>George 


At 10:58 AM 1999/06/14 -0600, Jeremy Blosser wrote:
>My thought is that I don't really mind someone sitting on an exponent, but
>if you have some 386 doing a primary LL test on a number and it's going to
>take over a year to accomplish, and it's not connected to the internet but
>once a year... then maybe you should assign it a double check or a factoring
>test. It is the most *considerate* thing to do as far as the whole team
>approach is concerned.
>
>Really, the thought that someone wants to sit on an exponent for over a year
>because "I've checked it out, and I'm running it on my 386 that is connected
>to the internet once a year" is more inconsiderate than someone poaching
>your exponent.
>
>As far as poaching is concerned, I, unlike my brother Aaron, think that
>George and Scott need to release whatever exponents, since there are only a
>dozen or so that seem to have slipped through the cracks...
>
>I guess I'm just annoyed at the "?" next to M37...
>
>Lastly, I think that threatening to falsify primenet reports, or report
>someone to the FBI for "stealing" is really sophomoric. And I've noticed
>this has come from the anti-poaching camp... I suggest Aaron stop his
>poaching, that Scott and George work on getting all the exponents up to M37
>at least a primary LL check, and for those who are whining about
>someone poaching your exponents, try and think beyond the scope of yourself
>and think about the GIMPS effort as a whole. Try putting the machine to its
>best use (Factoring as opposed to LL testing).
>
>If you are so concerned about becoming "famous" for finding a new Mersenne
>Prime, spend the extra $200 or so and get a P233 which at least would finish
>its testing in 4 days (in the 3-4M range).
>
>Otherwise I suppose that it would be okay for me to run my GIMPS client on
>my toaster oven, in which case, I think it might finish its 700,000 range LL
>test in 2103... :)
>
>Later,
>Jeremy Blosser
>
>(Note: I am Aaron's BROTHER, not Aaron. Please try not to get confused by
>the fact that we have the same last name (I know that's a foreign concept to
>some people) and send me nasty e-mails, it just makes you look stupid, and
>annoys me...)
>________________________________________________________________
>Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
>
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sat, 19 Jun 1999 16:08:27 -0700
From: "Scott Kurowski" <[EMAIL PROTECTED]>
Subject: Mersenne: RE: Mersenne Digest V1 #584

> According to the FAQ, "PrimeNet knows when a test result was computed on a
> different computer. It will accept your results for the master database
> log, but it will not credit your account for the test work."
>
> (1) Does this cause a credit problem when a team member gives 2 PCs the
> same Computer ID?  Each PC has its own connection to the internet.

No.  The FAQ answer refers to the server knowing if a result is sneakernetted
from an off-net machine.  It knows because the result transaction is sent by
Prime95 without a matching assignment closure/credit transaction.

> (2) Shouldn't I discourage team members from duplicating Computer IDs?

For the client software, it doesn't matter unless you want to tell them apart on
your account report at the server, but we get better machine usage and planning
statistics at the server if you name them.  For example, the number of machines
in PrimeNet is understated on its main status page because of non-unique machine
ID names within accounts - the most commonly duplicated name being the
"blank" ID.

Regards,
scott


________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sat, 19 Jun 1999 16:37:19 -0700
From: "David Christensen" <[EMAIL PROTECTED]>
Subject: Mersenne: A nice little bit of press

For those of you who read PC Magazine, there is a short column by Bill
Machrone in the July 1999 issue on page 85 that talks about GIMPS, Aaron
Blosser, and the US West episode.  Not a detailed examination of what
happened but some good press on why you might want to participate in such a
search by a "responsible research organization."

David Christensen

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sat, 19 Jun 1999 18:00:46 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: A nice little bit of press

> For those of you who read PC Magazine, there is a short column by Bill
> Machrone in the July 1999 issue on page 85 that talks about GIMPS, Aaron
> Blosser, and the US West episode.  Not a detailed examination of what
> happened but some good press on why you might want to participate
> in such a
> search by a "responsible research organization."

Hmmhh..well, whaddya know?  I guess I better go pick up a copy for my "wall
of infamy". :-)

Aaron

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sat, 19 Jun 1999 21:03:58 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: Mersenne: Tools for FPGA design

> Do you know of any tools for designing FPGAs?
>
> phma

(hope you don't mind; I'm posting to the list in case anyone else is
interested)

The one I used is from Viewlogic (www.viewlogic.com).  They have a full set
of programs for designing, simulating, routing and programming FPGA's.  It
is a VERY nice set of tools, but as you might guess, VERY expensive as well.
I did just order a 30-day evaluation CD from them though...it's been 5 years
since I last used their Viewdraw and Viewsim programs; it'll be interesting to
see how they've changed.

On the flip side, there are free programs out there that let you design,
simulate, route, etc., but they are text based and, believe me, not as
intuitive to use.

Besides all that, there is a "language" called ABEL that lets you define
your logic tables and will assist in doing the actual gate/timing designs.
There are free versions, such as one from Intel that I used to use...a
text-based thing, of course.  I'm not sure whether they still make it...plus,
Viewlogic has one, but I've never actually used it, though I've heard it's
pretty easy to use (graphical, of course).

Viewlogic supports all the different types of FPGA's and PLD's out there via
plug-in device libraries (purchased separately of course).  As you can
imagine, if you want to do FPGA design, the development can get pretty
expensive.  Then if you want to move beyond simulation and actually program
a device, you can either buy a "testbed" - essentially a serial cable to go
from your PC to the chip, usually with a little "glue logic" such as tie-down
resistors for the chip and a ZIF socket to hold the FPGA itself - or, if you
know what you need, just build your own.  The ready-made testbeds aren't
usually too expensive, and the component parts probably aren't more than
$20-$30 for the whole hardware setup.

Hardware costs once you want to go into production are more since you will
most likely be programming a PROM to hold your program, plus a little
circuitry to automatically program the FPGA once it's powered on...not too
bad, but an EPROM programmer would be needed as well.  But if you managed to
build something as an add-on for a PC, I would envision you could tie it in
to the PCI bus and program it via a datafile on the PC, eliminating the PROM
altogether.  I designed an ISA interface once which is pretty easy...I
wonder if PCI is just as easy to tie into (nothing fancy...just an I/O
address, no IRQ or DMA or anything).

Actually, if you were to design something that was a 256-bit multiplier, the
IO wouldn't need to be terribly fast...it would just take in two 256-bit
numbers and spit out a 512-bit number.  Using the ISA bus, which is 16 bits
wide, you could move those 1024 bits (512 in, 512 out) in only 64 chunks...at
8.77 MHz, it'd take ~7.3 microseconds for that alone.  On the other hand, PCI
(33.3 MHz and 32 bits wide) could do it in about 0.96 microseconds.  I guess
it depends on how long the multiply itself would take as to whether PCI would
really speed up the whole thing significantly or not.
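
Just to sanity-check those figures, here's the same arithmetic as a
trivial C snippet (it assumes one bus-width transfer per clock, which
is optimistic - no wait states, no bus protocol overhead):

/* Rough bus-transfer time for moving two 256-bit operands in and one
 * 512-bit result out, assuming one bus-width transfer per clock. */
#include <stdio.h>

static double transfer_us(double total_bits, double bus_bits, double mhz)
{
    double chunks = total_bits / bus_bits;    /* transfers needed          */
    return chunks / mhz;                      /* cycles / MHz -> microsec  */
}

int main(void)
{
    double total = 2 * 256 + 512;             /* operands in + result out  */
    printf("ISA (16 bits @ 8.77 MHz): %.2f us\n", transfer_us(total, 16, 8.77));
    printf("PCI (32 bits @ 33.3 MHz): %.2f us\n", transfer_us(total, 32, 33.3));
    return 0;
}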

Aaron

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 00:10:35 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Factoring and M38

Well, looks like factoring on TI calculators won't be feasible or useful. :-(

Before more data comes in, I'd like to state that I believe three things:
A) The 38th Mersenne prime discovered has an exponent in the neighborhood of 
6,900,000.
B) We *are* missing a Mersenne prime between 3021377 and the 38th discovered 
prime (all that's been disclosed about it is that it has an exponent between 
6 and 7 million). I believe that this missing prime's exponent is in the 
neighborhood of 4,700,000.
C) The exponent of the Mersenne prime that wins the EFF decamillion digit 
prize will either be just at the limit (33.22 million) or it will be in the 
neighborhood of 48 million.

Time will tell if I'm right. :-D  Too bad (C) won't be 
confirmed/rejected for a few years.

S.T.L.
(June 19, 1999)
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 00:04:21 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: Mersenne: CPU speeds - Egads!

I was just looking at a CPU comparison at
http://infopad.eecs.berkeley.edu/CIC/summary/local/

The Alpha 21364 at 1GHz is expected to have specint of around 70, and specfp
around 120 (SPEC-95)!!!  Even the 21264 at 667MHz has 44 and 66
respectively.

Compare that to a PIII 500MHz with 20.6 and 14.7 respectively.  Thus my
proclamation of "Egads!!"

Can't someone write a nice assembly routine for the Alpha?  There's a lot of
power in that beast!

I only mention that because I *may* have a couple Alphas sometime (low
probability, but just the same....)

FWIW, the Sparc Ultra III at 600MHz is listed at "35+ / 60+".  I
*will* have a couple of Sparcs headed my way, though I have no idea what model
they are (won't know 'til they get here...).  I'd sure like to have a nice
fast prime program to run on these since these are my "play toys" according
to my boss - "Aaron, I want you to look for prime numbers on these things"
he specifically told me.  Seriously!  That and the RS/6000 I got coming.  I
love my job.

Aaron

PS - Did I share my strange story from last week?  I work right across the
street from one of US WEST's main buildings in downtown Denver.  Last week,
on my way into my building, who should be standing there, waiting for the
light, but Gene Carmer, head of US WEST security and one of the people
present during the execution of my search warrant.  Gee whiz!  Had a nice,
cordial conversation, shook hands, ignored the elephant in the room and
talked about other stuff instead like the possibility of US WEST being
bought by QWEST or Global Crossing.  Such a bizarre encounter, but went
well.  I proved to myself that I'm not taking it personally, they're all
just doing their jobs too, and that one day perhaps all will be well with
the world again. :-)

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 03:01:13 -0400 (EDT)
From: lrwiman <[EMAIL PROTECTED]>
Subject: Mersenne:  More 10,000,000+ factors

I have found 12,209 new factors in the range of Brian's 10,000,000+ digit
candidates.  All of the other primes in this range have been trial factored
through 2^45.

They are available at:
http://www.tasam.com/~lrwiman/fact45
or
http://www.tasam.com/~lrwiman/fact45.gz 

These just include the new factors.

- -Lucas Wiman
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 00:48:04 -0700 (PDT)
From: poke <[EMAIL PROTECTED]>
Subject: RE: OT: Mersenne: ARM Licenses

Great! Can we get FPGA chips at Digikey or Mouser? How much do they go
for?

- -Chuck


On Fri, 18 Jun 1999, Aaron Blosser wrote:

> > My understanding is that it comes with a language of its own. My
> > impression is that it is an icon based language. Kind of like connect the
> > blocks into a flow chart of some sort and hit "GO".
> 
> I'll try and give a cursory explanation of what you can do with FPGA's.
> Bear in mind, I haven't done FPGA work in nearly 5 years, but not much could
> change.

 --
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail: [EMAIL PROTECTED]         :
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: Ask Mike! Aviation's response to Dear :
: Abby. http://www.avstarair.com        : 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 00:51:37 -0700 (PDT)
From: poke <[EMAIL PROTECTED]>
Subject: Re: Mersenne: :-( TI factoring is slow

> It's "testing" 2^25,000,009 - 1 right now. It can test one factor every 1.3 
> seconds. AUGH! At that rate it would take 95 *billion* years to trial divide 
> by all odd numbers under 2^62. Nooo

Don't forget, it's not just odd numbers, you only need to trial divide by
numbers that end in 1, 3, 7 or 9 (where the number is greater than 10).


 --
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail: [EMAIL PROTECTED]         :
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: Ask Mike! Aviation's response to Dear :
: Abby. http://www.avstarair.com        : 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 09:29:01 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Mersenne: Factoring, again

> That does sound like it's faster, but let me go thru some numbers.  I
> don't recall exactly, but I'm sure my earlier run took less than 72
> hours; 72 hours divided by 20 seconds is 72*3600/20 = 12960.  Brian
> stopped at 2^32 and I went to at least 2^33 for a factor of two;
> 12960/2 = 6480.  His average exponent is about 34.5 million and mine
> was less than half that at 15.5 million or so, meaning I checked about
> twice as many potential factors; 6480/2 = 3240.  My exponent range
> included 665,365 primes; 3240*159,975/665,365 is about 780.  My time
> also included the disk I/O, the primes program to generate the
> exponents and, perhaps, another program running, but call that even;
> /usr/games/primes is fast. :)  A PII compared to a Pentium, I have no
> numbers on, but 350 MHz vs. 90 MHz is not quite a factor of 4; 780/4
> = 195.  Hm, I looked for all the factors, whereas it looks like the
> data Brian sent me only has first factors, but I'm not sure what that
> ratio would be; a (probably high) guess of 2 would leave a ratio of
> about 97.  I can't think of anything else, so it does look like Brian's
> code is a _lot_ faster.  On the other hand, the last time I compared
> mersfacgmp to Prime95's factorer, the ratio wasn't this high (more
> like 36 to 40, as I recall).
> 
> So, Brian, care to make your code available to the rest of us? :)

I put a development version on my anon ftp server about three days 
ago. ftp://lettuce.edsc.ulst.ac.uk/gimps/DecaMega/factor95.c

This is the 95-bit-capable code I gave George in early April.  It 
isn't fully optimised: it needs to be turned into MASM source, 
properly aligned, & to have the horrendous & slow kludge used to 
work around a serious MS VC++ inline assembler bug removed.

The 63-bit-capable code I'm using is simply hacked down from this in 
an obvious way. I didn't even bother to look for any more 
optimization which would be possible as a result of the various 
temporary results being one 32-bit word shorter.

The sieving I'm messing about with is embedded in the "engine" that 
generates possible factors, rather than the factoring code itself.
> 
> If my numbers are close, your code may even be faster than George's.
> Have you compared it to George's?  If so, how far off are my numbers
> above?  And what did I forget to account for? :)

Well, one point to set against my code's timing is that it stops as 
soon as it finds one factor (it tries candidates strictly in order, 
so it should always find the smallest).  Quite frequently one finds 
that the first factor one tries works (if p = 3 mod 4 and 2p+1 is 
prime, this is _guaranteed_, but obviously isn't worth coding as a 
special case).  Also my program reads a list of exponents from a file 
rather than generating them; however, I don't think there's much 
speed penalty involved in this either way.  My code is written so 
that various multiple-precision operations will "early out" if the 
high word is zero; this happens a lot when the factors are < 2^32.  
Mind you, I could write a one-stage version for factors < 2^31 which 
would be faster still.
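
(For the curious, here's a throwaway C check of that p = 3 mod 4
remark - toy code, nothing to do with factor95.c.  The reason it
works: q = 2p+1 is then 7 mod 8, so 2 is a quadratic residue mod q
and 2^p = 2^((q-1)/2) = 1 mod q.

/* Throwaway check: for every p < 1000 with p = 3 (mod 4), p prime and
 * 2p+1 prime, verify that 2^p mod (2p+1) really is 1, i.e. that 2p+1
 * divides the Mersenne number 2^p - 1. */
#include <stdio.h>

static int is_prime(unsigned long n)
{
    if (n < 2) return 0;
    for (unsigned long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

static unsigned long pow2_mod(unsigned long e, unsigned long m)
{
    unsigned long r = 1, b = 2;
    while (e) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    for (unsigned long p = 3; p < 1000; p += 4)        /* p = 3 mod 4 */
        if (is_prime(p) && is_prime(2 * p + 1))
            printf("p = %3lu: 2^p mod (2p+1) = %lu\n",
                   p, pow2_mod(p, 2 * p + 1));
    return 0;                       /* every printed value should be 1 */
}

Every line it prints should end in 1.)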

The code I'm using to output factors is inefficient, but I don't 
think that's a major factor.  The PII-350 timings are actually for one 
thread on a dual PII-350 system; there were two instances of Prime95 
running LL tests in the background.  For this reason I think a fairer 
PII-350/P90 ratio would be 4.5 rather than the usual 5.  (4 is 
definitely low...)

Will change the "engine" to keep going to 2^33 after finding the 
first factor & report the results from that.  Meanwhile, I ran again 
to 2^40 with sieving of the first 10 primes & got 266,072,661 calls 
in 3530 secs.  Bearing in mind sieving overheads etc., it looks like 
I'm getting better than 10 uS per pass of the factoring routine.  I 
don't know how this relates to Prime95's internal reporting; the same 
system gets "0.036 sec per iteration" factoring (using the routine 
that works to 2^62) running Prime95, but I don't know what that's 
measuring.

There's definitely a few percent to be wrung out of my code, when I 
get a chance.

Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 06:17:20 -0400
From: Pierre Abbat <[EMAIL PROTECTED]>
Subject: Mersenne: Re: Tools for FPGA design

> The one I used is from Viewlogic (www.viewlogic.com).  They have a full set
> of programs for designing, simulating, routing and programming FPGA's.  It
> is a VERY nice set of tools, but as you might guess, VERY expensive as well.
> I did just order a 30 day evaluation CD from them though...it's been 5 years
> since I've used their Viewdraw and Viewsim programs; it'll be interesting to
> see how it's changed.

Only HP and Solaris. :( Why don't they release the source and let me compile it
myself? Maybe gEDA will include such a program.

By the way, I'm on the list. Please don't send messages to me and the list.

phma
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 10:19:26 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Mersenne: Factoring, again

> Will change the "engine" to keep going to 2^33 after finding the 
> first factor & report the results from that.

OK, here are the results.  (All factors to 2^33 found; the input is 
the 159,975 largest primes < 36 million.)

Sieve 6 smallest primes, 3517525 calls, 40.15s
Sieve 10 smallest primes, 2972446 calls, 39.82s
Sieve 14 smallest primes, 2693566 calls, 41.75s
Sieve 18 smallest primes, 2515556 calls, 44.76s

(The first 2 primes are sieved out "mod 120" as in George's code. The 
others are done in groups of 4 since you can do arithmetic modulo p 
in an 8 bit field if p<128)

Model the time as T = aX + bY + Z, where a is the number of calls to 
the factoring routine, X is the time per call, b is the number of 
primes sieved, Y is the whole-run cost of sieving 1 extra prime, and 
Z is a fixed overhead.  So we have

2693566X + 14Y + Z = 41.75
2972446X + 10Y + Z = 39.82
3517525X +   6Y + Z = 40.15

Solving this system of equations (to a sensible precision) yields
(X = 8.49E-6, Y = 1.074, Z = 3.84)

The result for 18 primes sieved is consistent with this model to 
within a couple of tenths of a second.  The timings come from the PC 
RTC, which has a resolution of 55 ms, so this fit is reasonable.
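
If anyone wants to check the arithmetic, here's a trivial C snippet
that just plugs the fitted constants back into all four runs (only
the 6-, 10- and 14-prime runs were used to solve for them; the
18-prime run is the independent check):

/* Plug the fitted constants back into T = (calls)*X + (primes)*Y + Z
 * and compare with the measured times quoted above. */
#include <stdio.h>

int main(void)
{
    const double X = 8.49e-6, Y = 1.074, Z = 3.84;   /* fitted constants */
    const double calls[]  = {3517525, 2972446, 2693566, 2515556};
    const int    primes[] = {6, 10, 14, 18};
    const double meas[]   = {40.15, 39.82, 41.75, 44.76};

    for (int i = 0; i < 4; i++) {
        double pred = calls[i] * X + primes[i] * Y + Z;
        printf("%2d primes: predicted %5.2fs, measured %5.2fs\n",
               primes[i], pred, meas[i]);
    }
    return 0;
}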

Will, am sending you an extra message containing the "factors found" 
file for verification.

Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 09:57:41 -0700
From: Will Edgington <[EMAIL PROTECTED]>
Subject: Mersenne: Factoring, again

Brian J. Beesley writes:

   I put a development version on my anon ftp server about three days 
   ago. ftp://lettuce.edsc.ulst.ac.uk/gimps/DecaMega/factor95.c

Um, that's an unfortunate choice of name; George's P-1 factorer is
"Factor95" (though there's a newer version, renamed to Factor98).  Of
course, George's "95" was for the year he wrote it, I think, rather
than the number of bits, which I'd guess is your reason for that
name.:)

   This is the 95-bit-capable code I gave George in early April.  It
   isn't fully optimised: it needs to be turned into MASM source,
   properly aligned, & to have the horrendous & slow kludge used to
   work around a serious MS VC++ inline assembler bug removed.

   The 63-bit-capable code I'm using is simply hacked down from this
   in an obvious way. I didn't even bother to look for any more
   optimization which would be possible as a result of the various
   temporary results being one 32-bit word shorter.

And here I thought it was quite fast enough already.:)

   The sieving I'm messing about with is embedded in the "engine" that 
   generates possible factors, rather than the factoring code itself.

Interesting.  That is also, as I recall, how Peter Lawrence
Montgomery's trial factoring program does this.  Mersfacgmp does the
sieving and factoring in the same loop, which probably costs it quite
a bit in terms of locality in comparison.

   Well, one point to set against my code's timing is that it stops as
   soon as it finds one factor

So does George's (in each sieve pass), but not mersfacgmp (unless it's
told to on the command line) nor (I think) PLM's program, triald.

   Quite frequently one finds that the first factor one tries works
   (if p = 3 mod 4 and 2p+1 is prime, this is _guaranteed_, but
   obviously isn't worth coding as a special case).

Yup.

   Also my program reads a list of exponents from a file rather than
   generating them,

So do mersfacgmp, George's, and PLM's.  Though mersfacgmp can also
read from standard input; it doesn't have to be a file.  Mersfacgmp
can also read its own output, to continue from earlier work, whether
a factor was found or not.

   however I don't think there's much speed penalty involved in this
   either way. My code is written so that various multiple-precision
   operations will "early out" if the high word is zero, this happens
   a lot when the factors are < 2^32,

Yup.  Libgmp also has this, to an extent, since numbers are stored in
a variable length array.

   mind you I could write a one-stage version for factors < 2^31 which
   would be faster still.

Somehow I don't think we need that.:)  In fact, I could already
generate a list of all primes up to 2^32 and the Mersenne exponents,
composite or not, that they factor, using my "reverse" method.  That
might even take less than a day on my new P233 MMX.  The only reason I
haven't done it, actually, is that I _still_ don't have enough disk
space ... unless I save the data in some compact, binary format.  But
I may run it anyway, and then keep only the data for prime exponents
under 100 million or so.
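
The idea behind working "in reverse" from a prime q, by the way, is
just that q divides 2^p - 1 exactly when the multiplicative order of 2
mod q divides p.  A deliberately naive sketch in C - illustration
only, not the actual program, which obviously has to be much cleverer
about generating the q's and finding the orders:

/* For each prime q, find the exponents p with q | 2^p - 1: they are
 * exactly the multiples of the order of 2 mod q (so, for prime p, the
 * order must equal p).  This is the slow, obvious way. */
#include <stdio.h>

/* Smallest d > 0 with 2^d = 1 (mod q); q must be an odd prime. */
static unsigned long order_of_2(unsigned long q)
{
    unsigned long r = 2 % q, d = 1;
    while (r != 1) {
        r = r * 2 % q;
        d++;
    }
    return d;
}

int main(void)
{
    /* A few small primes; a real run would sieve the q's and would
     * also skip q != 1, 7 (mod 8), which can never divide 2^p - 1
     * for odd p. */
    static const unsigned long qs[] = {7, 23, 47, 89, 167, 223, 233, 263};
    for (int i = 0; i < 8; i++) {
        unsigned long q = qs[i], d = order_of_2(q);
        printf("%lu divides 2^p - 1 exactly when %lu divides p\n", q, d);
    }
    return 0;
}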

   The code I'm using to output factors is inefficient, but I don't 
   think that's a major factor.

Almost certainly correct.  Though running it with profiling would tell
us for sure.

   The PII-350 timings are actually for one thread on a dual PII-350
   system, there were two instances of Prime95 running LL tests in the
   background. For this reason I think a fairer PII-350/P90 ratio
   would be 4.5 rather than the usual 5. (4 is definitely low...)

Probably, but recall that I fudged some the other way because there
was probably another program running on my machine as well.  And, if
there was, it was at the same priority, not (relatively) nice'd.  Not
that it really matters; your code is obviously faster.

   Will change the "engine" to keep going to 2^33 after finding the 
   first factor & report the results from that.

Thanks; I've already got and unpacked the data and it looks fine.
I'll compare to what mersfacgmp has done - it continued from your
first factors to the next power of two as part of my update scripts -
to double check everything.

   Meanwhile, I ran again to 2^40 with sieving of the first 10 primes & got
   266,072,661 calls in 3530 secs. Bearing in mind sieving overheads
   etc. it looks like I'm getting better than 10 uS per pass of the
   factoring routine. I don't know how this relates to Prime95's
   internal reporting, the same system gets "0.036 sec per iteration"
   factoring (using the routine that works to 2^62) running Prime95,
   but I don't know what that's measuring.

I'd guess George will let you know.

   There's definitely a few percent to be wrung out of my code, when I 
   get a chance.

Even better.:)

                                                        Will
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 10:53:12 -0700
From: "John R Pierce" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: Tools for FPGA design

> Only HP and Solaris. :( Why don't they release the source and let me
> compile it myself? Maybe gEDA will include such a program.

If you are paying $50,000 to $100,000 a seat for a program, you BUY the
computer it needs.

Other vendors of similar FPGA software include Vantis (formerly AMD's PAL
group, now owned by Lattice) with their CUPL and related tools, and Synplicity,
which has a new logic synthesis package that is good at automatically
partitioning a large, complex project into multiple CPLDs or FPGAs.
http://www.vantis.com and http://www.synplicity.com naturally :)

Primary vendors of high-density CPLD/FPGA chips are Altera, Xilinx, Actel,
and Lattice.  These chip companies each tend to also have their own design
software, often somewhat cheaper than the 3rd party stuff.

A typical smaller FPGA with 90K gates equivalent might sell in very high
volumes for $10 each.





________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Mon, 21 Jun 1999 00:42:48 +0200
From: "Otto Bruggeman" <[EMAIL PROTECTED]>
Subject: Mersenne: Question about speed...

Hi,

Sorry to bother you people with this, but can anybody tell me why my Celeron
400 all of a sudden slows to (almost) half speed when it reaches the
1,000,000,000 (actually a little more; I guess it's around 2^30) mark in
factoring numbers???  My P233MMX slows by about 15 percent.  The exponents
currently being factored are in the 9.7 million range (that shouldn't matter;
it happened in the 7.5 million range too...).

Thanks in advance...

Otto.


________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 15:53:12 -0700
From: Luke Welsh <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Mersenne exponent growth

At 12:55 PM 6/19/99 -0400, Jeff Woods wrote:
>[...] the "pseudo-conjecture" [...]
>Starting at about M13, you see that there are indeed islands: [...]
>Graph that, and you get a VERY close linear fit.  I haven't done it 
>mathematically, [...]
>Given this fit, [...]

Sounds like fitting the data to a curve.

Suppose we define an "Island", then work from Peter Montgomery's post
       http://www2.netdoor.com/~acurry/mersenne/archive2/0032.html
and generate random data sets, then look for islands?

But first, *define* an island....I've never seen a definition.

Any takers?

As long as we're fitting data to a curve, how about substituting
pi/2 for 3/2?  Perhaps something from
    http://www.lacim.uqam.ca/pi/table.html ?
(my sentimental favorite is phi)

- --Luke, the Professor and Mary Ann

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 19:01:23 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Question about speed...

At 12:42 AM 6/21/99 +0200, Otto Bruggeman wrote:
>
>Sorry to bother you people with this, but can anybody tell me why my Celeron
>400 all of a sudden slows to (almost) half speed when it reaches the
>1,000,000,000 (actually a little more; I guess it's around 2^30) mark in
>factoring numbers???  My P233MMX slows by about 15 percent. 

I'm just guessing, but it could be that the Celeron's smaller cache gets filled
up.

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+


________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Sun, 20 Jun 1999 19:07:54 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Mersenne exponent growth

At 03:53 PM 6/20/99 -0700, Luke Welsh wrote:

>As long as we're fitting data to a curve, how about substituting
>pi/2 for 3/2?  

If you fit the curve to the data, 3/2 works a lot better.  The best fit is for
a slope of 1.4785, which has a very high correlation coefficient of 0.996.
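
For anyone who wants to play with the fit themselves, here's one way
to redo it in C: a least-squares fit of ln(exponent) against rank over
the 37 exponents known to date, with e^slope as the implied average
ratio between successive exponents.  (This quick version won't
necessarily reproduce the exact numbers above, depending on how you
set up the fit.)

/* Least-squares fit of ln(p_n) vs. n for the 37 known Mersenne prime
 * exponents (as of June 1999).  Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    static const double p[] = {
        2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279,
        2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701,
        23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433,
        1257787, 1398269, 2976221, 3021377 };
    const int n = sizeof p / sizeof p[0];
    double sx = 0, sy = 0, sxx = 0, sxy = 0;

    for (int i = 0; i < n; i++) {
        double x = i + 1, y = log(p[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    printf("implied ratio between successive exponents: %.4f\n", exp(slope));
    return 0;
}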

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+


________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

End of Mersenne Digest V1 #585
******************************
