Mersenne: LL and Pollard-rho in one?

1999-10-28 Thread Alexander Kruppa

Hi,

the Lucas-Lehmer iteration

L_0 = 4
L_{n+1} = L_n ^2 -2

looks suspiciously like an iteration used in Pollard-Rho:

x_0 = a
x_{n+1} = x_n ^2 +b

Can this be used to do a LL test and Pollard-rho factoring attempt at
once?
I faintly remember having read that b=-2 is not a good choice
for Pollard-rho, but I don't remember the explanation why. Would it
work here?

Ciao,
  Alex.
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Trial-factorers

1999-10-28 Thread Alexander Kruppa

Eric Hahn wrote:
 
 I'm looking for program(s) capable of trial-factoring
 prime exponent Mersenne numbers (using 2kp+1) meeting
 the following requirements:

 [factors and exponents of arbitrary size]
 

mersfacgmp from Will Edgington's mers package uses gmp's mpz type for
factors (thus the size of the factor is only limited by your available
memory) and long ints for the exponent. If you have a 64 bit cpu, that
should allow exponents up to 18446744 trillion (about 2^64).

 I just know somebody is going to have to mention the
 time involved in testing factors of such a large size.
 Let me just say, I realize *exactly* how much time
 would be required...

Ok, but note that gmp is not particularly efficient on "small" numbers of
only a few dozen decimal digits, so the time required might be what you
expect and then some.


 3) Capable of trial-factoring a range of k's.
(example: from k=1000 to k=2500)

Hmmm, I don't think mersfacgmp can process this directly, but it would be
easy to compute the proper factoring limits with a shell script that
calls bc.
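
For example, in C with GMP (just a sketch; the exponent, names and layout
here are made up, and bc would do just as well), the limits for a k-range
are simply 2*k*p+1 at each end:

/* Sketch only: turn a k-range into trial-factor limits of the form 2kp+1.
 * The exponent below is only an example (assumed prime for illustration);
 * GMP keeps the bounds exact even past 64 bits.
 */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    unsigned long p = 1000003;          /* example exponent, assumed prime */
    unsigned long k_start = 1000, k_stop = 2500;
    mpz_t start, stop;

    mpz_init_set_ui(start, p);
    mpz_mul_ui(start, start, 2 * k_start);
    mpz_add_ui(start, start, 1);        /* start = 2 * k_start * p + 1 */

    mpz_init_set_ui(stop, p);
    mpz_mul_ui(stop, stop, 2 * k_stop);
    mpz_add_ui(stop, stop, 1);          /* stop = 2 * k_stop * p + 1 */

    gmp_printf("trial factors of M(%lu) run from %Zd to %Zd\n", p, start, stop);

    mpz_clears(start, stop, NULL);
    return 0;
}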

The mers package can be found on Will's home page,
http://www.garlic.com/~wedgingt/mersenne.html

Ciao,
  Alex.
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Trial-factorers

1999-10-28 Thread Will Edgington


Eric Hahn writes:

   I'm looking for program(s) capable of trial-factoring 
   prime exponent Mersenne numbers (using 2kp+1) meeting
   the following requirements:

   1) Capable of trial-factoring any exponent > 1
  (at least to some considerably large number,
   say 1 trillion?)

If your C compiler supports a 64 bit integer type, the mers package
factorers mersfacgmp and mersfaclip can use that for the exponent,
allowing exponents to about 2^64/24 (about 768 quadrillion, ~7e17).
The '/24' can be reduced to '/2' by using a lower number of sieve
passes.  If you don't have a 64 bit integer type, then you're limited
to 2^32/24 or so (about 178 million).  See setup.h for the #define's
of BIG_LONG.

   As I recall, Brian [Beesley] mentioned something once
   about having a program that could test an exponent
   of an arbitrary size...  Brian??

The mersfac* code could be modified to allow arbitrary exponents
without a lot of work, but it wouldn't be trivial to do either.  I do
intend to do it someday, however, to support the factoring of the
M(M(p))'s, as the sieving of trial factors in mmtrial is not as good
as that in mersfac*.

   2) Capable of testing a factor of any size.
  (even over the 2^76 limit of Prime95).

Mersfacgmp and mersfaclip use the FSF's libgmp and A. Lenstra's
freeLIP libraries, respectively, for the trial factors.

   3) Capable of trial-factoring a range of k's.
  (example: from k=1000 to k=2500)

As Alexander Kruppa noted, this is not supported directly, but could
be done fairly easily with bc.  And it could be added to mersfac*
directly without a lot of effort; see rw.c where it reads the command
line and the input file(s), especially the -m flag.  The -a flag is
probably also of interest; without it, mersfac* will stop when it
finds a factor rather than looking for all the factors in a range.

The simplest way to do it with bc would be to create 'G' (gap) lines
like:

M( exp )G: start stop

where 'exp' is the exponent and 'start' and 'stop' are the first
and last trial factors to test.

In any case, part of the output will be 'J' (join) lines, which show
exactly what range of trial factors was actually tested; please
include them in any output of mersfac* that you send to me.

  Will

http://www.garlic.com/~wedgingt/mersenne.html
mers.tar.gz
mers.tgz
mers.zip
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Quiet

1999-10-28 Thread Lars Lindley

I think it's about time we find a new prime...
The list is so quiet now.
:)

/Lars

_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: LL and Pollard-rho in one?

1999-10-28 Thread Alexander Kruppa

Jason Stratos Papadopoulos wrote:
 
 On Thu, 28 Oct 1999, Alexander Kruppa wrote:
 
  Hi,
 
  the Lucas-Lehmer iteration [...]
  looks suspiciously like an iteration used in Pollard-Rho: [...]
  Can this be used to do a LL test and Pollard-rho factoring attempt at
  once?
 
 Except that every once in a while you'd have to perform a GCD, and
 a GCD on million-digit size numbers takes about a day (no kidding!
 There seems to be no fast way to do one!)

That's true (btw, I keep hearing of an n*log(n) GCD - how long would that
take then? Where can I read up on it - Knuth vol. 2?), but doing the
Pollard-Rho alongside a LL test would not be particularly efficient
anyway. For every 2 LL iterations, you'd have to do another one for the
cycle-finding algorithm and a multiply to store up any factor you find -
that's 9 transforms instead of 4, so your test time will more than
double. And Pollard-Rho is probably not very well suited for finding
Mersenne factors, as it cannot exploit the fact that these factors are
== 1 (mod 2p) the way P-1 can.
I'm mostly asking out of curiosity whether the LL iteration really makes
a proper Pollard-Rho iteration, especially with the -2.
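
Just to make the bookkeeping concrete, here is a rough sketch of the loop
I have in mind (GMP in place of the real DWT squaring, invented names, and
untested as an actual factoring attempt): two LL squarings, one extra
squaring for a delayed copy, a multiply to accumulate the difference, and
a single GCD at the end.

/* Rough sketch only: a Lucas-Lehmer loop with a Floyd-style Pollard-rho
 * bolted on.  GMP stands in for the FFT/DWT squaring a real LL test uses,
 * and all names are my own.  Per pass: two genuine LL squarings (the
 * "hare"), one extra squaring for the delayed copy (the "tortoise"), and
 * one multiply to accumulate the difference - the 9-transforms-vs-4
 * overhead described above.
 */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    unsigned long p = 11;            /* M_11 = 2047 = 23 * 89: small test case */
    mpz_t Mp, x, y, d, prod, g;

    mpz_inits(Mp, x, y, d, prod, g, NULL);
    mpz_ui_pow_ui(Mp, 2, p);
    mpz_sub_ui(Mp, Mp, 1);           /* Mp = 2^p - 1 */

    mpz_set_ui(x, 4);                /* the LL sequence itself */
    mpz_set_ui(y, 4);                /* delayed copy for cycle finding */
    mpz_set_ui(prod, 1);

    for (unsigned long i = 1; i <= p - 2; i += 2) {
        /* two LL steps: x <- x^2 - 2 (mod Mp) */
        for (unsigned long k = 0; k < 2 && i + k <= p - 2; k++) {
            mpz_mul(x, x, x);
            mpz_sub_ui(x, x, 2);
            mpz_mod(x, x, Mp);
        }
        /* one extra squaring to advance the tortoise */
        mpz_mul(y, y, y);
        mpz_sub_ui(y, y, 2);
        mpz_mod(y, y, Mp);

        /* accumulate x - y so that one GCD at the end covers every step */
        mpz_sub(d, x, y);
        mpz_mul(prod, prod, d);
        mpz_mod(prod, prod, Mp);
    }

    mpz_gcd(g, prod, Mp);            /* the expensive step for big p */
    if (mpz_cmp_ui(g, 1) > 0 && mpz_cmp(g, Mp) < 0)
        gmp_printf("nontrivial GCD with M_%lu: %Zd\n", p, g);
    gmp_printf("LL residue: %Zd (0 would mean M_%lu is prime)\n", x, p);

    mpz_clears(Mp, x, y, d, prod, g, NULL);
    return 0;
}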

Ciao,
  Alex.
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: LL and Pollard-rho in one?

1999-10-28 Thread Peter-Lawrence . Montgomery

Alexander Kruppa [EMAIL PROTECTED] observes:

 Hi,
 
 the Lucas-Lehmer iteration
 
 L_0 = 4
 L_{n+1} = L_n ^2 -2
 
 looks suspiciously like an iteration used in Pollard-Rho:
 
 x_0 = a
 x_{n+1} = x_n ^2 +b
 
 Can this be used to do a LL test and Pollard-rho factoring attempt at
 once?
 I remember faintly having read something that b=-2 is not a good choice
 for Pollard-rho, but I don't remember any explanation why.. would it
 work here?


 If L_0 = alpha + 1/alpha (where alpha = 2 + sqrt(3) is found
by solving a quadratic) then

L_n = alpha^(2^n) + alpha^(-2^n)

L_m - L_n = alpha^(2^m) + alpha^(-2^m) - alpha^(2^n) - alpha^(-2^n)
          = (alpha^(2^m) - alpha^(2^n)) * (1 - alpha^(-2^m - 2^n))

If p is a factor of the number we are trying to factor, then 
the order of alpha will divide one of p+1 and p-1
(depending upon whether alpha is in the base field GF(p) 
or the extension field GF(p^2)).
The size of this order will be O(p).  
m and n will usually need to have size O(p) before we achieve

alpha^(2^m) == alpha^(+- 2^n)   (mod p) 

(which is the same as   2^m == +- 2^n   (mod (order of alpha)) ).

When b is neither 0 nor -2, we know no such formula for the iterates of
Pollard Rho.   x -> x^2 + b (mod p) seems to behave like a pseudorandom
function modulo p, and Pollard Rho takes O(sqrt(p)) iterations rather than O(p).

For Lucas-Lehmer, when p = 2^q - 1 is a Mersenne prime, p+1 is
divisible by a very high :-) power of 2.  The L_n values collapse to 
0, -2, 2, 2, ... as alpha^(2^n) becomes sqrt(-1), -1, 1 in GF(p^2).
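
As a quick numerical illustration of that collapse (a toy example of mine,
not part of the derivation above): modulo M_7 = 127 the sequence runs
4, 14, 67, 42, 111, 0, 125 (= -2), 2, 2, ...

/* Quick check of the "collapse" of L_n mod a Mersenne prime.
 * Toy illustration only; p = 7, so everything fits in a long.
 */
#include <stdio.h>

int main(void)
{
    const long Mp = (1L << 7) - 1;   /* M_7 = 127, a Mersenne prime */
    long L = 4;

    printf("L_0 = %ld\n", L);
    for (int n = 1; n <= 9; n++) {
        L = (L * L - 2) % Mp;
        if (L < 0) L += Mp;
        /* L_{p-2} = L_5 is the Lucas-Lehmer residue: 0, since 127 is prime;
           after that the sequence degenerates to -2, 2, 2, ... */
        printf("L_%d = %ld\n", n, L);
    }
    return 0;
}

The 0 at L_5 = L_{p-2} is the usual Lucas-Lehmer certificate that 127 is
prime; everything after it is the degenerate tail.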




_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: LL and Pollard-rho in one?

1999-10-28 Thread chris_nash


 but doing the Pollard-Rho along a LL test would not
 be particularly efficient, anyways.

Or particularly successful. Remember Pollard-rho heuristically expects to find a
factor p in something along the lines of sqrt(p) iterations. Since we're doing, let's
call it, 10^7 iterations, you'd probably be better off trial-factoring to 10^14 (since
sqrt(10^14) = 10^7) - which is already done and beyond, with guarantees that no factor
is missed. You might get a lucky factor that's larger, but experience (and the law of
averages) tells you you aren't going to be *that* lucky.

 For every 2 LL iterations, you'd have to do another
 one for the cycle find algorithm and a multiply to
 store up any factor you find. That's 9 transforms
 instead of 4

Brent's modification of Pollard-rho doesn't require storing the two parallel sequences 
x_n and x_2n. Instead consider x_n-x_k, where k is the greatest power of 2 that's less 
than n. At worst it could take twice as long to find a cycle, but it's at least twice 
as fast.
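
To make that concrete, here is a sketch of the Brent-style loop (names
invented, GCD taken every step for clarity rather than batched, and GMP
in place of FFT arithmetic):

/* Sketch of Pollard rho with the Brent-style cycle check described above:
 * a single sequence is advanced, and each new x_n is compared against the
 * saved value x_k, where k is the largest power of two below n.
 */
#include <gmp.h>

/* Returns 1 and sets f to a nontrivial factor of N, 0 otherwise. */
int rho_brent(mpz_t f, const mpz_t N, long b, unsigned long max_iters)
{
    mpz_t x, saved, d;
    unsigned long n = 0, next_save = 1;
    int found = 0;

    mpz_inits(x, saved, d, NULL);
    mpz_set_ui(x, 4);                /* x_0 = 4, echoing the LL start value */
    mpz_set(saved, x);               /* saved = x_0 */

    while (n < max_iters && !found) {
        /* advance one step: x <- x^2 + b (mod N) */
        mpz_mul(x, x, x);
        if (b >= 0) mpz_add_ui(x, x, (unsigned long)b);
        else        mpz_sub_ui(x, x, (unsigned long)(-b));
        mpz_mod(x, x, N);
        n++;

        mpz_sub(d, x, saved);        /* compare x_n with the saved x_k */
        mpz_gcd(f, d, N);
        if (mpz_cmp_ui(f, 1) > 0 && mpz_cmp(f, N) < 0)
            found = 1;               /* nontrivial factor; gcd == N would mean
                                        a full collision - restart with new b */

        if (n == next_save) {        /* n has reached the next power of two */
            mpz_set(saved, x);
            next_save *= 2;
        }
    }
    mpz_clears(x, saved, d, NULL);
    return found;
}

Calling it with b = -2 reproduces the LL-style iteration; any other small
b gives the usual rho.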

 And Pollard-Rho is probably not very well suited for
 finding Mersenne factors as it cannot exploit the
 fact that these factor are 1 mod (2p) as P-1 can.

The extra exponentiation at the start of the P-1 algorithm is hardly a great 
exploitation. Note that the 'rho' part of Pollard-rho just means that your iteration 
function should be (pseudo)random - you can create a pseudorandom iteration that does 
exploit the form of the factors. Of course, that's no longer an LL iteration, though.

 I'm mostly asking for curiosity, whether the LL
 iteration really makes a proper Pollard-Rho
 iteration, especially with the -2.

The classic Pollard-rho iteration x -> x^2 + a isn't particularly good with a=0 or a=-2. 
The reason is the way the cycles degenerate. You want one of the cycles mod some 
unknown prime factor to be short. What you don't want is all the cycles to collapse at 
the same time... or never collapse at all. Suppose you applied the same Pollard-rho 
iteration simultaneously to all (or at least many) possible initial points mod N (this 
is a reportedly near-perfect parallelization, according to a paper by Dick Crandall). 
Crandall's parallelization works because it's inevitable that the application of 
the iteration reduces the number of distinct points on each pass, until eventually your 
N initial points are folded down to quite short and detectable cycle lengths. However, 
iterate with 0 or -2 and there are some obvious fixed points (solve the quadratic: for 
-2, x^2 - 2 = x gives x = 2 and x = -1; for 0, x = 0 and x = 1) and other, less obvious, 
short cycles. In effect, there are some points you can't iterate away no matter how long 
you keep trying.

There's a good visual indicator that 0 and -2 aren't particularly good. z -> z^2 + c is 
the Julia set iteration on the complex plane. Let's assume for the moment that somehow 
the behavior of the Pollard-rho iteration mod N and the behavior of the Mandelbrot 
iteration on C are equivalent - they are, but the mapping between them is hardly 
trivial.

The Julia sets for c=0 and c=-2 are devastatingly boring, their iteration brings you 
no surprises, no pretty pictures. In much the same way you're not going to get any 
exciting factors with this iteration mod N, either.

In Lucas-Lehmer terms, what happens during the LL / Pollard-rho iteration is that all 
the prime factors of the number have interlocked cycle lengths. That's great for 
primality proving (because if you get the expected final residue, you know something 
about *all* prime factors of your number - and hence conclude there can be only one). 
But a failed LL test simply tells you that *all* prime factors of your number 
failed identically. There's nothing in the LL recipe that distinguishes any one prime 
factor from any other.

Chris Nash
Lexington KY
UNITED STATES



_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Re: Quiet

1999-10-28 Thread Steinar H. Gunderson

On Thu, Oct 28, 1999 at 05:12:46PM +0200, Lars Lindley wrote:
I think it's about time we find a new prime...
The list is so quiet now.

I'm working on it! ;-)

I think a quiet list is better than a list in rage -- don't try
to start a poaching war again, please... The list isn't that
quiet either.

/* Steinar */
-- 
Homepage: http://members.xoom.com/sneeze/
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: GrOGfest '99 reminder

1999-10-28 Thread EWMAYER

Dear All:

Just a friendly reminder that the second not-quite annual Great Internet
Mersenne Prime Search OktoberGeekfest get-together-kind-of-deal is set
for tomorrow evening.

What: GrOGfest '99
Where: Tied House, Mountain View, California USA
When: Friday, 28. October, beginning at 6:30 pm PDT
Bring: Yourself, and anyone else you like
Dress: casual

I've only heard from a couple GIMPSers who say they'll attend, so I've
reserved a table for 10 on the patio (keyword: GIMPS). Luke Welsh,
Scott Kurowski and I will be there, and perhaps one or two local
Mersenne-type notables.

I wanted to invite Don Knuth, but Luke told me that last time they were
at a party together, DEK grabbed a half-empty minikeg of Lo"wenbra"u
and was running around with it under his arm, spraying partygoers with
beer and shouting, "you're all wet!" I think we'll just send him a card and 
photo.

(: .esruoc fo ,gniddik m`I)

Cheers,
-Ernst

_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Re: fast GCD

1999-10-28 Thread EWMAYER

Alex Kruppa wrote:

 Can this be used to do a LL test and Pollard-rho factoring attempt at
 once?

Jason Papadopoulos wrote:

 Except that every once in a while you'd have to perform a GCD, and
 a GCD on million-digit size numbers takes about a day (no kidding!
 There seems to be no fast way to do one!)

A day seems somewhat of an overestimate (though maybe it takes that
long on a Pentium - I don't know.) On a decently fast Alpha, say a 500MHz
21164, I can do a million-digit (and note I mean base-10 digits, not bits)
GCD in under 5 minutes using a simple O(n^2) Lehmer-type implementation
with scalar multipliers of around 63 bits - one gets similar performance
on MIPS after adjusting for the typically lower clock rates. (Things should
be 2-3x faster on an Alpha 21264, due to its much better integer multiply.)
The code in question is my own (and will be publicly available when I
release my Mersenne p-1 code), but Torbjorn Granlund sent me timings that
indicate the GMP library's GCD is similarly fast on such hardware. The
Alpha implementation uses the UMULH instruction to get the upper 64 bits
of the 64x64==128-bit unsigned integer products which arise when
multiplying the digits of a base-2^64 multiword integer by a scalar
of up to 64 bits in size, and the MIPS version uses DMULTU to get the
full 128-bit product. On CPUs supporting 64-bit integers and floats
but lacking such 128-bit multiply instructions, one can emulate UMULH via
a floating multiply, using scalars up to 52 bits in size (we need to save
one of the mantissa bits to effect error correction of the FMUL result.)

On the Intel IA-64, there is hardware support for a (now fully 64-bit)
UMULH-like instruction in the floating multiplier (also used to get the
low 64 bits of the product.) One can in theory do similar on the Pentium,
but for this to be efficient, one needs to be able to load a 64-bit int
directly into the 64-bit mantissa of a floating-point register - I don't
know enough about the x86 to say whether one can do such an operation
on current CPUs. I think George uses similar tricks to do the speedy
Prime95 sieve-based factoring on the Pentium, so it seems it should be
possible.

Lastly, note that one doesn't need to do many GCDs if one is running, say,
a p-1 or ECM factoring algorithm - for large numbers, one could restrict
oneself to doing a single GCD at the end of stage 1 and perhaps one every
day or so in stage 2. This makes a "fast" (i.e. asymptotically faster
than O(n^2)) GCD less pressing.

Alex wrote:

Thats true (btw, I keep hearing of a n*log(n) gcd - how long would that
take then? Where can I read up on it - Knuth vol.2 ?)

Richard Crandall claims the fastest he's heard of is O(n*log^2(n)), i.e.
a factor of log(n) slower than a length-n FFT, though for large n
this should still be much faster than O(n^2). The problem is that it's
likely highly nontrivial to code - using a well-tested library (e.g. GMP)
to effect the basic operations would probably be one's best bet if one
doesn't want to spend months coding it all oneself.

Peter Montgomery's PhD thesis mentions a fast GCD due to Moenck, but
I don't know what the precise work estimate is for that algorithm - Peter?

Other GCD-related musings I've been mulling over:

1) Is there a fast (e.g. O(n*log(n)) or so) iterative method to generate
a modular inverse modulo a Mersenne or Fermat number? Of course one can
use an extended GCD algorithm to get the inverse, but I had in mind
something more elegant, which ideally would allow fast DWT-based FFT
arithmetic to be used, i.e. which avoids having to convert an n-bit
mixed-radix DWT floating-point factoring residue to a fixed-radix integer,
do the EGCD, then convert the result back to floating form in preparation
for more DWT-style arithmetic. This question was inspired by the fact that
one can use a modular analog of the well-known Newton iterative scheme
for multiplicative inverses to get inverses modulo some desired power of 2.
For example, if x is some integer and y_n is the multiplicative inverse of
x modulo, say, 2^16, then one obtains the mod-2^32 inverse via

 y_{n+1} = y_n * (2 - x*y_n),

where all intermediate arithmetic is done modulo 2^32. Alas, this scheme
doesn't work except for moduli which are powers of 2.
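
As a toy check of that iteration (my own example, using 64-bit unsigned
arithmetic so the reduction mod 2^64 comes for free; each pass doubles the
number of correct low-order bits):

/* Toy check of the doubling iteration y <- y*(2 - x*y) for inverses
 * modulo powers of two.  64-bit unsigned overflow gives the "mod 2^64".
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t x = 0x123456789abcdef1ULL;  /* any odd x is invertible mod 2^64 */
    uint64_t y = x;                      /* x*x == 1 (mod 8), so y is correct mod 2^3 */

    /* Each pass doubles the number of correct bits: 3 -> 6 -> 12 -> 24 -> 48 -> 96. */
    for (int i = 0; i < 5; i++)
        y = y * (2 - x * y);

    printf("x*y mod 2^64 = %llu (should be 1)\n", (unsigned long long)(x * y));
    return 0;
}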

2) If a fast iteration as described in (1) did exist, could one use it
to obtain a GCD in similar time?

Cheers,
-Ernst

_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne Digest V1 #653

1999-10-28 Thread Mersenne Digest


Mersenne Digest   Thursday, October 28 1999   Volume 01 : Number 653




--

Date: Tue, 26 Oct 1999 12:18:46 -0400 (EDT)
From: Darxus [EMAIL PROTECTED]
Subject: Re: Mersenne: mprime startup at boot-time

On Thu, 21 Oct 1999, Lars Lindley wrote:

 Hi everyone.
 I have request for help.
 I run mprime on Linux.
 What I want to do is to have mprime automatically start when I boot
 and send the -m output to a tty or something like that. Perhaps tty8?
 Is this possible??

I created an account called "gimps" and installed mprime in its home
directory. In my startup scripts I have:

/bin/su - gimps -c "/home/gimps/mprime -d -w/mnt/c/gimps" > /dev/tty11 2>&1 &

I actually cheated with my init scripts.  I created a file
/etc/init.d/local - and put this line in it, and then ln -s'ed it to
/etc/rc2.d/S20local through /etc/rc5.d/S20local.

On Sun, 24 Oct 1999, poke wrote:

 Sorry, -d, not -m. I'm not that familiar with the command line arguments
 of mprime. My expertise lies in the semantics of Linux.

 One way to pipe to a virtual terminal AND to a file is by using a
 combination of a tee and a redirect. Something like:

 /usr/src/mprime -d | tee -a /home/GIMPS/tracking.txt > /dev/tty6

There is no reason that starting mprime from /etc/inittab should be any
slower.  It's still the same program, being run on the same machine, just
executed by a different program.

I might switch to the inittab method, but the only good reason to do so
would be to handle the gimps client crashing, which I've never seen it do.
And the only reason not to would be the rare occasions when I want to stop
it -- I'd have to comment out the line in /etc/inittab and do "telinit q"
instead of just "killall mprime".


I also do my mail & system log similarly.

I have a line in my /etc/syslog.conf 

 *.* /dev/tty12

This sends everything that goes through the system log daemon to the 12th
virtual terminal, which I find very convenient.  Do a "killall -HUP
syslogd" to get it to reread the config file.


I have a ppp connection.  In my /etc/ppp/ip-up script (which is executed
whenever ppp connects), I have the following:

 /bin/su - darxus -c /home/darxus/.ip-up 

Which executes the script /home/darxus/.ip-up  as user "darxus" -- this
script contains:

 #!/bin/bash
 open -v -c 10 ~/pine.open

~/pine.open contains:

 #!/bin/bash
 export TERM=vt100
 exec ssh -t monet 'pine -i'

I'd forgotten this was so complicated... there were reasons for the way I
did it, but I've long forgotten them :)
Anyway, the end result is, whenever I bring up my ppp connection, I
automatically get pine (the mailreader I use) in my 10th virtual terminal,
over a ssh connection.  


I've also tried this with BitchX (IRC client), but it thinks it's not
attached to a real terminal and runs in dumb terminal mode.

BTW, I've done

chown darxus.darxus /dev/tty10
chown gimps.gimps /dev/tty11

I also have 20 virtual terminals (plus the 3 things above + X) & my
console's text resolution set at 132x60 using lilo. I don't use X much.

__
PGP fingerprint = 03 5B 9B A0 16 33 91 2F  A5 77 BC EE 43 71 98 D4
[EMAIL PROTECTED] / http://www.op.net/~darxus
 Find the next largest prime, be famous:
http://www.mersenne.org/prime.htm


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers

--

Date: Tue, 26 Oct 1999 19:50:33 +0200
From: Alexander Kruppa [EMAIL PROTECTED]
Subject: Mersenne: (2^p-1)/p == 0 (mod 3) ?

Hello all,

a simple Number Theory question. Is

(2^p-1) / p, for odd prime p, always divisible by 3?

Then 2^p == 1 (mod 3p) would also hold; can this be used to improve the
efficiency of a Rabin-Miller probable prime test?

Ciao,
  Alex.
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers

--

Date: Tue, 26 Oct 1999 15:04:30 -0400 (EDT)
From: "Vincent J. Mooney Jr." [EMAIL PROTECTED]
Subject: Re: Mersenne: (2^p-1)/p == 0 (mod 3) ?

5 is an odd prime.  2^5 = 32; minus one is 31, and 31 is not divisible by 3.
 

At 07:50 PM 10/26/99 +0200, you wrote:
Hello all,

a simple Number Theory question. Is always

(2^p-1) / p ,odd prime p, divisible by 3 ?

Then 2^p == 1 (mod 3p) would also hold, can this be used to improve the
efficiency of a Rabin-Miller probable prime test?

Ciao,
  Alex.
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: GrOGfest '99 reminder

1999-10-28 Thread Spike Jones

But if I were to be sprayed with beer by Knuth I would consider it
the greatest day of my life!  If I were to see him I fear I would fall
prostrate upon the earth, crying "I'm unworthy!  I sck!"

{8^D  spike

(:  .oot gniddik m'I)

[EMAIL PROTECTED] wrote:

 I wanted to invite Don Knuth, but Luke told me that last time they were
 at a party together, DEK grabbed a half-empty minikeg of Lo"wenbra"u
 and was running around with it under his arm, spraying partygoers with
 beer and shouting, "you're all wet!" I think we'll just send him a card and
 photo.

 (: .esruoc fo ,gniddik m`I)

_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: PrimeDC.exe (was Crazy thought of the day...)

1999-10-28 Thread Glenn

"Blosser, Jeremy" wrote:

 Seeing as how the Sega Dreamcast was officially released here in the
 U.S. today... and the list has been REALLY quiet (due to school
 starting?), I looked at the specs... and sure 'nuff, apparently, the
 Hitachi SH4 can do 2 FPU ops per cycle (single or double precision)
 and has the good ole SIMD architecture that the PIII and 3DNow chips
 seem to support, with 128-bit ops, but again, only for 32-bit single
 precision floating point numbers... so bummer there... but the
 superscalar FPU is nice...

Check out http://www.canadawired.com/~gvink/Sega/Tech/tech_cpu3.html
for a good overview of this mighty cpu as used in the Dreamcast (1400
MFLOPS, 900 MFLOPS sustained with external memory),
and http://www.canadawired.com/~gvink/Sega/Tech/technical_index.html
for a good overview of the Dreamcast's technical specs.

 Anyway, since the Dreamcast runs WinCE, a port to it would be pretty
 easy... the only problem I see is storing intermediate files, since the
 "memory packs" are only 128k or something right now... I suppose you
 could store it somewhere on the 'net by using its built-in 56k modem or
 whatever...

see http://msdn.microsoft.com/cetools/platform/factsheet.asp for info on
WinCE
for the Dreamcast.

There are now 4 Mb "memory packs" available, so storing intermediate
files most probably wouldn't be too much of a problem. The Dreamcast has
16 Mb main memory - would this be enough?
PrimeDC.exe? Someone should make it happen!

Glenn





_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Updated web pages; new factor counts percentages

1999-10-28 Thread Will Edgington


I've updated my web pages Yet Again, including adding some quick links
at the top of mersenne.html to the other Mersenne related files on my
web site.

The 1stfacnt.txt file is gone; I've split it into facntHU.txt (for
incompletely factored Mersennes) and facntD.txt (for completely
factored Mersennes) and the counts are no longer only for first
factors, but for second factors, third factors, etc., as well.  The
files also provide counts for all exponents, for only prime exponents,
and percentages with at least one, at least two, etc., factors out of
the exponents for which I have any data at all, rather than just
counts of factors.

There's a significant drop in the percentages of prime exponent
Mersennes with known factors for exponents above 10 million, but that
may simply be because GIMPS and Primenet factoring haven't done much
for exponents that large yet.  Comments?

All Mersenne numbers with prime exponents less than 80 million have
been trial factored to at least 2^41.  All Mersenne numbers with
exponents under 24687 have had primality checks of their cofactors.
All SPRP cofactors with less than 836 decimal digits have primality
proofs (with a few exceptions; the binary version of ECPP that I have
crashes for some numbers).

Will

http://www.garlic.com/~wedgingt/mersenne.html
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers