Mersenne Digest         Friday, June 18 1999         Volume 01 : Number 583




----------------------------------------------------------------------

Date: Thu, 17 Jun 1999 23:38:11 +0100
From: Nick Craig-Wood <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

On Thu, Jun 17, 1999 at 02:08:29PM -0700, Luke Welsh wrote:
> BTW, has anybody investigated this package:
> 
>     http://clisp.cons.org/~haible/packages-cln-README.html

Yes I have.

It is a very thorough C++ class library for number manipulation.  It
has an O(n log n) multiply.  You could implement a Lucas-Lehmer tester
using it in about a dozen lines of code, but you'd find that it was
some factor slower than Prime95 (a factor of 3 rings a bell, but I may
be wrong).

-- 
Nick Craig-Wood
[EMAIL PROTECTED]
http://www.axis.demon.co.uk/
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Thu, 17 Jun 1999 23:51:19 +0100
From: Nick Craig-Wood <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

On Thu, Jun 17, 1999 at 11:09:12PM +0100, Brian J. Beesley wrote:
> When you do your NTT, you're going to need at least twice as many 
> bits in the elements of the transform as there are bits in the number 
> you're testing (because you're going to want to square the values in 
> the elements, without any bits falling off the more significant end). 
> If you're working into millions of bits, I think this forces you to 
> use (at least) 64-bit elements. That scuppers any plans to use MMX 
> instructions.

That is correct - for any reasonable-length FFT or NTT you will need
at least 64-bit elements.

You can synthesise these elements by doing two or three 32-bit
transforms and combining them with the Chinese remainder theorem.  I
experimented with this on the ARM and came to the conclusion that
doing it this way was slower because the operation count was larger.
However, if there is a really significant speed-up from using the MMX
instructions, then it may be practical to combine these single
precision NTTs.
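
A minimal sketch of that CRT recombination step (my illustration in
Python, not Nick's actual code; the two moduli below are common
NTT-friendly primes chosen just for the example, and any pair of
coprime single-precision moduli works the same way):

```python
# Recombine per-modulus NTT results with the Chinese remainder theorem.
P1 = 2013265921   # 15 * 2^27 + 1, a well-known NTT-friendly prime
P2 = 998244353    # 119 * 2^23 + 1, another one

def crt_pair(r1, r2):
    """Return the unique x with 0 <= x < P1*P2 such that
    x == r1 (mod P1) and x == r2 (mod P2)."""
    inv_p1 = pow(P1, -1, P2)   # modular inverse (Python 3.8+)
    return r1 + P1 * ((r2 - r1) * inv_p1 % P2)

# A convolution coefficient that overflows 32 bits is recovered from
# its two single-precision residues:
x = 123456789123456789
assert crt_pair(x % P1, x % P2) == x
```

With three moduli instead of two, the same pairwise step is applied
twice, which is what makes 64-bit (or larger) elements reachable from
32-bit transforms.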

-- 
Nick Craig-Wood
[EMAIL PROTECTED]
http://www.axis.demon.co.uk/

------------------------------

Date: Thu, 17 Jun 1999 16:50:41 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Thoughts on Merced / IA-64

> But, you can do it in integer if you have a processor with 1) enough
> integer registers, 2) wide registers, and 3) a fast/pipelined
> multiply--which IA-64 is supposed to have.  The floating point
> version was a kludge to make up for an, uhhh, *interesting* processor
> architecture.  It shouldn't make everyone think that it's always the
> best way to do things.

That's kind of what I was driving at.  With 128 64-bit general-purpose
registers, register rotation, etc.  The register rotation should help
by not making you unroll *all* your loops *all* the time.  I'm sure
it'd still help to unroll them anyway, but that's an opinion.
Oh...sure enough, there's an example where they show that you get a
speedup in an unrolled loop, but you save even more cycles in a
partially unrolled loop using register rotation.

I notice that imul actually uses the FPU, which makes me wonder
whether imul would really be any better than fpmul (which can be
parallelized - fpmpy).  Fused multiply-add instructions (fma) could
help with some code, but that's a guess on my part.

On the other hand, I do see that IA-64 *does* do a 64-bit * 64-bit =
128-bit imul, though it does indeed use the FP registers, and it's the
FPU core doing all the work.  "The product of 2 64bit significands is
added to the third 64bit significand (zero extended) to produce a
128bit result."

Additionally, there is support for quad-precision FP "in software"
(just above the microcode, I'd guess?  Or do they mean in ASM?).
Certainly quad precision (128 bits), if it were fast enough, would be
a lot better than extended double (80 bits).  I wonder about the speed
of that, though, and what they mean by "in software".

Lost my train of thought...the power went out where I work for a
couple of hours just now...and I think I'd better end it here! :-)

Aaron


------------------------------

Date: Thu, 17 Jun 1999 17:07:01 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: Mersenne: Custom FPGA design

> > BTW - Read http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/
>
> I am reminded of the hype over the "Thinking Machines" parallel computer.
>
> How difficult is it to write for an FPGA array?  Do tools exist to
> compile a C program into an FPGA configuration?  Has BeOS been ported
> to it?

Well...with today's newest FPGAs in the range of 2 million gates, you
can certainly do WONDERFUL things with them.  I had been discussing
this off the list and came up with the idea that it's not entirely
strange to think that perhaps I could design a custom FPGA to do
high-speed FFT work.  I ordered an eval software package from
Viewlogic, and now if I could just find my old Xilinx test bed...crud.

Anyway, one big problem with using FPGAs "on the fly" is that you'd
really need to have a "precompiled" library of routes for what you
want, and you'd have to be REALLY good at designing logic flow.  It's
basically (quite literally) like designing your own CPU.

I had the distinct pleasure of designing my own CPU (albeit a 4-bit
doodad) some years ago, and one major hurdle to any real-time use is
that it can take hours to route your design to a device.  Even on the
latest fast computers, you might still only get 100,000 gates per hour
during the routing phase of the design.  Then you have to "burn"
(program) the routes to the device, and usually there's a lot of
testing using JEDEC files and whatnot before you're sure the thing
will actually work right.  Designing your own chips is GREAT if you
have the upfront time, but there is so much work involved (even with
my 4-bit CPU) that you'd really need an EE degree to do any of that
(good thing I have one! :-P )

Anyway, I'll have to get my eval copy of Viewlogic and see what it
would take to do an FFT hardware device.  Or, if it'd be easier, use
my bro's idea of an NTT to do the same thing, since designing an FPU
would be intense.  After my experience with logic design on a small
scale, I have MUCH more respect for the folks at Intel/AMD/etc. who
design the CPUs.  Far out stuff.

Aaron


------------------------------

Date: Thu, 17 Jun 1999 20:09:38 -0400 (EDT)
From: "David A. Miller" <[EMAIL PROTECTED]>
Subject: Mersenne: Windows NT question

Here at the University of Michigan, there are computer labs with Dell
Pentium II systems running Windows NT 4.0. Each student has a little
online file space connected to the Sun login machines; I believe it uses
the Andrew File System (AFS). This file space is made available as a
network drive on the Windows NT systems, and can be used as an ordinary
drive by most Windows programs.

But when I try to run Prime95 directly from a directory in my online file
space, I get this error message:

"The application failed to initialize properly (0xc0000022). Click on OK
to terminate the application."

Sometimes the program starts without error, but usually I get the
message above, so to get ECM work done while I'm at the computer lab I
have to create a directory on the hard drive, copy everything there,
and move it all back when I'm done, which is inconvenient.  Does
anyone know what the problem might be?


David A. Miller
Rumors of my existence have been greatly exaggerated.


------------------------------

Date: Thu, 17 Jun 1999 20:46:31 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Mersenne Primes - what'd you expect?

<<I still believe that the number is finite, in contrast to what appears to 
be the majority view>>
The "majority view" is the way it is because a number of Darn Good (TM) 
heuristic arguments have been made that the number of Mersenne primes is 
infinite, just as Darn Good (TM) heuristic arguments exist that the number 
of Fermat primes is finite (and probably just the five already known).

TI-81s run on 2 MHz Z80s, but TI-85s run on 6 MHz Z80s.  And it is
possible to overclock them something like 6x.  Whoo!  Now, you
couldn't run an LL test program on a TI-85, but it might just be
possible on a TI-92+.  They have a 10 MHz Motorola 68000 processor and
something like 512K of memory.  You could build yourself one of those
memory expanders that have been designed for the TI-92+, and BOOM,
instant LL tester.  Or even a factoring machine.  Could you factor a
Mersenne number without storing it in memory?  (Answer: I don't
*think* so....)  Too bad.  If we could factor Mersenne numbers on an
unmodified TI-92+, there'd be a lot of people who'd run that program.
TIs actually aren't that useless.  You can do RSA cryptography on a
TI-92, 92+, or 89.

<<I'm not so bad off>>
My TI-85 goes with me wherever I go - got a problem with that? :-D

Well, that's it.
S.T.L.

------------------------------

Date: Fri, 18 Jun 1999 13:00:20 +1200
From: "Halliday, Ian" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Finite or infinite?

Merely expressing an opinion as to whether or not you think there are an
infinite or finite number of Mersenne primes doesn't add anything to the
discussion unless you can furnish some argument one way or the other. As
with many issues in pure mathematics, it is unlikely (but not impossible)
that a proof will be found one way or the other.

I acknowledge Euclid's proof of an infinity of primes, cited by Michael
Clark, but do not see any compelling evidence in this pointing towards an
infinity of Mersenne primes. The rarity of these numbers is part of what
leads me to suppose that they are finite in number, but I concede that
numerical evidence does not carry much weight. For example, if I were to
claim that all integers are below 2 raised to the power of M37, I could
exhibit a huge number of examples of integers supporting this point of view,
but anybody could furnish as many counterexamples as needed to refute this
extraordinary claim. As we know, one counterexample suffices.
Nevertheless, my view is that the rarity of Mersenne primes points towards
finiteness.

If we look at Mersenne primes as a subset of the integers, we have a
fraction which is truly minuscule. If we look at them as a fraction of all
primes, we're not much better off. Progressively considering them as a
proportion of { 2^n - 1 } and { 2^p - 1 } they start to become slightly
noticeable. I suppose that one's view of finite vs. infinite is partly
related to what one sees Mersenne primes as a subset or special case of.

Brian Beesley wrote intelligently on the subject: I thank him for conceding
that this is probably a matter for viewpoint rather than proof at the
moment. He is citing circumstantial evidence just as much as I am, however.

Certainly the earlier correspondent seemed to have eccentric ideas in many
areas of mathematics. Infinity is hard for us to understand because we are,
in some ways, only finite ourselves.

I am aware that the chances of a false positive are extremely small and look
forward with eagerness to somebody being able to tell us the value of M38. I
mentioned the possibility only to pre-empt any claim "there are only 37 of
the things - you haven't proved M38 yet". I don't see why anybody would try
to nobble the server either, since such a person would indeed be found out.

Personally, I participate in the search for mathematical reasons and not for
the money. Winning the Mersenne prize offers much less money than the
average state lottery, but more long-term acknowledgement. The average
reader of this list can probably name several Mersenne discoverers but
probably not the winners of recent lotteries. I've been looking for Mersenne
primes through GIMPS since before there was any mention of prizes, recalling
the "good old days" when I checked the whole range of 928,000 to 929,000,
mailed the results to George and visited a website at
ourworld.compuserve.com, the full URL of which nobody ever seemed to
remember. Now I'm in the same boat as S Gunderson, who has switched to
double checking because he doesn't like having to wait months for a result.
It's encouraging to know I'm not the only one who thinks like that.

Regards,

Ian

Ian W Halliday, BA Hons, MIMIS, ANZCS, CTM
P O Box 5472, Wellington 6040, New Zealand


------------------------------

Date: Thu, 17 Jun 1999 21:24:36 EDT
From: Foghorn Leghorn <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Mersenne Primes - what'd you expect?

>Could you
>factor a Mersenne number without storing it in memory? (Answer: I don't
>*think* so....) Ptoo bad. If we could factor Mersenne numbers on an
>unmodified TI-92+, then there'd be a lot of people who'd run that program.

Uh, that's exactly what Prime95 does. To test whether a potential factor f 
divides 2^p-1, we use a standard binary powering algorithm to compute 2^p 
modulo f; it requires roughly log2(p) operations on numbers no bigger than 
f, and we never have to store the full representation of 2^p-1. I'm sure 
that this could be done on a TI. I don't know about ECM though.



------------------------------

Date: Thu, 17 Jun 1999 22:16:46 -0400 (EDT)
From: lrwiman <[EMAIL PROTECTED]>
Subject: Re:  Mersenne: Mersenne Primes - what'd you expect?

>You could build yourself one of those memory expanders that have been designed
>for the TI-92+, and BOOM, instant LL tester. Or even factoring machine. Could
>you factor a Mersenne number without storing it in memory? (Answer: I don't
>*think* so....) Ptoo bad.

Yes, actually you can, by using an algorithm (I think Donald Knuth
invented it, but it *is* in TAOCP vol. II).  It's not all that
complex; here is a calc program to find a^b mod c (which I think
explains things better than two pages of mathematical ranting):

define modpow(a, b, c) {
        local res;
        res = 1;
        while (b > 0) {
                if (odd(b)) {
                        /* multiply in a^(2^n) whenever the nth binary
                           digit of b is 1; everything is kept mod c */
                        res = a * res;
                        if (res > c) res = res % c;
                }
                a = a^2;
                if (a > c) a = a % c;
                b = (b - odd(b)) / 2;
        }
        return res;
}

Note that if modpow(2, p, c) == 1, then c is a factor of 2^p-1.  Also
note that no intermediate value ever gets bigger than c^2 (keen, huh?).
So it is not only possible to find a factor without holding the
Mersenne number in memory, it is considerably faster.
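
As an illustration of how far this idea goes, here is a hypothetical
Python sketch (names and cutoff are mine) that combines the powering
trick with the fact that any factor of 2^p-1, p an odd prime, must
have the form q = 2*k*p + 1 with q == +/-1 (mod 8):

```python
def find_factor(p, max_k=1_000_000):
    """Search for a factor q of the Mersenne number 2^p - 1 without
    ever writing out its binary expansion during the divisibility
    test: q divides 2^p - 1 exactly when 2^p == 1 (mod q)."""
    m = (1 << p) - 1                 # only used for the sqrt cutoff;
                                     # a real implementation would
                                     # compare bit lengths instead
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q * q > m:                # no factor below sqrt(2^p - 1)
            return None              # => 2^p - 1 is prime
        if q % 8 not in (1, 7):      # q must be +/-1 mod 8
            continue
        if pow(2, p, q) == 1:        # binary powering; intermediates
            return q                 # never exceed q^2
    return None
```

For example, find_factor(11) returns 23 (2^11-1 = 2047 = 23 * 89),
while find_factor(13) returns None because 2^13-1 = 8191 is prime.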

However, I cannot think of any way to do an LL test without storing
the number in memory.  Is there a way?

-Lucas Wiman

------------------------------

Date: Thu, 17 Jun 1999 22:13:58 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Mersenne Primes - what'd you expect?

At 08:46 PM 6/17/99 -0400, [EMAIL PROTECTED] wrote:

>The "majority view" is the way it is because a number of Darn Good (TM) 
>heuristic arguments have been made that the number of Mersenne Primes is 
>infinite, 

Furthermore, I haven't seen any (good) argument at all as to why there
should be only a finite number of Mersenne primes.

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+



------------------------------

Date: Thu, 17 Jun 1999 22:46:42 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Finite or infinite?

At 01:00 PM 6/18/99 +1200, Halliday, Ian wrote:
>Merely expressing an opinion as to whether or not you think there are an
>infinite or finite number of Mersenne primes doesn't add anything to the
>discussion unless you can furnish some argument one way or the other. 


There are two main reasons:

(1) if you consider the prime number theorem to approximate the
probability that 2^p-1 is prime and sum that over all primes p, you
get a divergent sum, which means that you expect an infinite number of
Mersenne primes.

(2) there are several conjectures concerning the growth rate of
successive Mersenne primes.  They all suggest that, on average, an
exponent resulting in a Mersenne prime is no more than twice the
previous one.  This implies an infinite number of Mersenne primes.
The known Mersenne primes are in very good agreement with the
conjecture that, on average, an exponent resulting in a Mersenne prime
is about 3/2 times as large as the previous one.  That, of course,
would imply an infinite number of Mersenne primes.
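
Heuristic (1) is easy to sketch numerically.  A rough Python
illustration (my own; it uses the approximation log(2^p - 1) ~=
p * log 2, ignores the 2kp+1 refinement, and the helper names are made
up):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def expected_mersenne_primes(limit):
    # The PNT "probability" that 2^p - 1 is prime is roughly
    # 1 / log(2^p - 1) ~= 1 / (p * log 2); summing over primes
    # p <= limit estimates the count of Mersenne primes with
    # exponent <= limit.
    return sum(1 / (p * math.log(2)) for p in primes_up_to(limit))

# The partial sums keep growing (like log(log(limit)) / log 2),
# so the expected total is unbounded.
```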

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+



------------------------------

Date: Thu, 17 Jun 1999 22:58:59 -0500 (CDT)
From: Robert Stalzer <[EMAIL PROTECTED]>
Subject: Mersenne: Team Reports

Once I've 'Cleared' an unwanted exponent from my to-do list ('oops, didn't
want to do double-checks') how do I banish the outcast exponent from my
team's report?  Can another volunteer be assigned the exponent
automatically or must we wait for the exponent to expire (a lengthy wait)?

Robert Stalzer
[EMAIL PROTECTED]


------------------------------

Date: Fri, 18 Jun 1999 00:08:42 -0400 (EDT)
From: lrwiman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Finite or infinite?

>(1) if you consider the prime number theorem to approximate the probability
>that 2^p-1 is prime and sum that over all primes p, you get an infinite number
>which means that you expect an infinite number of Mersenne primes.

True, the probability of a given n being prime is ~1/log(n), and the
sum over primes p of 1/log(2^p-1) ~= (1/log 2) * (sum over primes p of
1/p), which Euler proved is infinite.  I think that the probability of
2^p-1 being prime is considerably higher than for most numbers that
big, because it can only be divisible by numbers of the form 2*k*p+1,
and those must be == +/-1 mod 8.  Thus there are about
(1/4)*(1/(2*p))*sqrt(2^p-1) ~= 2^(p/2-3)/p possible divisors (fewer
when you look at the probability that 2*k*p+1 is prime), as opposed to
~sqrt(2^p)/log(sqrt(2^p)) = 2^(p/2)/(log(2)*p/2), which is
2^3*2/log(2) ~= 23 times fewer possible factors than for a normal
number that size (it's not much, but hey, it's something).
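
For what it's worth, that ~23x figure is independent of p, as a quick
check (my Python sketch, using the two estimates above verbatim)
confirms:

```python
import math

p = 61  # any exponent small enough to avoid float overflow works

# Candidate divisors of 2^p - 1 up to sqrt(2^p - 1): numbers of the
# form 2*k*p + 1 that are +/-1 mod 8 (1/4 of the progression).
mersenne_candidates = 2 ** (p / 2 - 3) / p

# Primes up to sqrt(2^p), estimated by the prime number theorem.
generic_candidates = 2 ** (p / 2) / (math.log(2) * p / 2)

# Algebraically this is 2^3 * 2 / log 2 = 16 / log 2 ~= 23.08,
# with p cancelling out entirely.
ratio = generic_candidates / mersenne_candidates
```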

>(2) there are several conjectures concerning the growth rate of successive
>Mersenne primes.  They all suggest that on average, one exponent resulting in
>a Mersenne prime is no more than twice the previous one.  This implies an
>infinite number of Mersenne primes.  The known Mersenne primes are in very good
>agreement with the conjecture that, on the average, an exponent resulting in a
>Mersenne prime is about 3/2 as large as the previous one.  That, of course,
>would imply an infinite number of Mersenne primes.

Well, I don't know about that.  Using the conjectured behaviour of
Mersenne primes to argue for other conjectures...  We must tread
carefully...

-Lucas Wiman





------------------------------

Date: Fri, 18 Jun 1999 00:38:54 -0400
From: "Rick Pali" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Team Reports

From: Robert Stalzer

> Once I've 'Cleared' an unwanted exponent from my to-do list
> ('oops, didn't want to do double-checks') how do I banish the
> outcast exponent from my team's report?

The easiest way is to make sure that your instance of the prime software
has as many days of work as you've specified, close the software, then add
the unwanted exponent at the *end* of your worktodo.ini file. When you
restart the software it will see that it's got 'too much work' and dump
the last one on the list.

That's the way I've always done it.  Though my days-of-work setting is
set to '1', so it's really easy. :-)


> Can another volunteer be assigned the exponent automatically
> or must we wait for the exponent to expire (a lengthy wait)?

As long as the exponent appears on your team's individual report, it's
signed out to you and cannot be reassigned.

Rick.
-----
[EMAIL PROTECTED]
http://www.alienshore.com/


------------------------------

Date: Thu, 17 Jun 1999 23:09:07 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Team Reports

> Once I've 'Cleared' an unwanted exponent from my to-do list ('oops, didn't
> want to do double-checks') how do I banish the outcast exponent from my
> team's report?  Can another volunteer be assigned the exponent
> automatically or must we wait for the exponent to expire (a lengthy wait)?

Go to http://entropia.com/ips/manualtests.shtml and manually release the
exponent.

It's a simple matter of putting in your account name and password,
then the exponent you want cleared; the whole "DoubleCheck=xxx" or
"Test=xxx" line will work as well.

Aaron


------------------------------

Date: Fri, 18 Jun 1999 00:30:19 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

>What's NTT? And is DWT Discrete Walsh Transform?

I don't have a clue about how NTTs actually are done, but I believe it
stands for `Number Theoretic Transform'.  (I could always look it up
in the list archives, but I'm too lazy.)  DWT does, as far as I know,
stand for `Discrete Weighted Transform', invented by our great heroes
Crandall and Fagin.

(Of course, George is the greatest hero for all our GIMPSers, perhaps we
need to make a `worship list'. OK, OK, I'm tired. Don't listen to me.)

/* Steinar */

------------------------------

Date: Fri, 18 Jun 1999 00:24:46 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

On Thu, Jun 17, 1999 at 01:11:09PM -0400, Jud McCranie wrote:
>The IA-64 sounds like a monster.  I'll want one, but they'll probably be too
>expensive for a few years.  (It happens over and over - "no person will need
>that much on their desktop.")

In the case of the 386, it was "no person will need that at all" :-)
That's why FRACTINT (a DOS fractal program, also ported to Windows,
Linux/Unix, and I believe some other platforms too) introduced
arbitrary-precision zooming: to keep processors busy in the days when
people could look back at `Grandpa's days, when they *only* had
Pentium processors'.

/* Steinar */

------------------------------

Date: Fri, 18 Jun 1999 00:33:55 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Re: OT: Mersenne: ARM Licenses

On Thu, Jun 17, 1999 at 07:48:03PM +0000, David L. Nicol wrote:
>How difficult is it to write for an FPGA array?

I would guess not so difficult, as long as the task _is_ easily parallelizable.

>Do tools exist to compile a C program into an FPGA configuration?

I would guess that you couldn't feed _any_ program into it and get
massive parallelization out, but, as a wild guess, I would believe
they've made a C compiler with hooks into the right libraries.

>Has BeOS been ported to it?

What a funny question :-) (Perhaps I only think so because I'm a Linux
user, and more used to the idea that porting open-source stuff is
easier than porting closed-source stuff.  Of course, since all I've
ever _heard_ of BeOS is its name, it might be open-sourced for all I
know.)  To answer your question: that would be very unlikely.

/* Steinar */

------------------------------

Date: Fri, 18 Jun 1999 00:03:49 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

On Thu, Jun 17, 1999 at 11:02:05AM -0500, Willmore, David wrote:
>The floating point version was a kludge to make up for an,
>uhhh, *interesting* processor architecture.  It shouldn't make everyone
>think that it's always the best way to do things.

Perhaps not.  Not to take sides in this discussion, but when I first
joined GIMPS, I had a Cyrix 6x86 CPU.  It was (is) _really_ poor at
FPU work, so I tried to get George to include an integer algorithm.
His answer was that even though he _had_ an old integer version, it
was roughly _seven_ times as slow as the FPU version.  Now, we must
try to find out how big the difference is on the IA-64 (and, of
course, in `whose' favour it is).  I think I've read about Intel's
plans.  Quick summary:

1. Merced comes out.  It is so expensive that it's targeted at the
   server market only.
2. Foster comes out.  While still being IA-32 compatible, it _matches_
   the performance of the Merced!
3. The next-generation IA-64 (can't remember the name) comes out.  The
   price is now reasonable enough for desktop markets.  (Although I
   would guess it will still be rather expensive, and targeted at the
   high-end desktop market...)

Those were just some facts to educate you all ;-)
 
>> +----------------------------------------------+
>> | Jud "program first and think later" McCranie |
>> +----------------------------------------------+
>*laugh*  Uh, hmmm, think now? :)

I think that resembles my own programming style to a great degree.  I
happen to write things not `top-down' or `bottom-up', but more
`left-right'.  When I get an idea I must think about, I actually have
to stand up from the chair and walk about, so I can think the idea
through instead of spitting out even more low-quality code...  Good
thing George is programming this, and not me :-)

/* Steinar */

------------------------------

Date: Fri, 18 Jun 1999 00:22:40 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Thoughts on Merced / IA-64

On Thu, Jun 17, 1999 at 09:21:57AM -0700, John R Pierce wrote:
>where Z is a 256 bit 'accumulator'...

And where are you going to find a 256 bit add instruction? :-)

/* Steinar */

------------------------------

Date: Fri, 18 Jun 1999 08:10:13 +0000 (GMT)
From: Henrik Olsen <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Team Reports

On Fri, 18 Jun 1999, Rick Pali wrote:
> From: Robert Stalzer
> 
> > Once I've 'Cleared' an unwanted exponent from my to-do list
> > ('oops, didn't want to do double-checks') how do I banish the
> > outcast exponent from my team's report?
> 
> The easiest way is to make sure that your instance of the prime software
> has as many days of work as you've specified, close the software, then add
> the unwanted exponent at the *end* of your worktodo.ini file. When you
> restart the software it will see that it's got 'too much work' and dump
> the last one on the list.
> 
> That's the way I've always done it. Though my days of work setting is set
> to '1' so it's really easy. :-)
> 
> 
> > Can another volunteer be assigned the exponent automatically
> > or must we wait for the exponent to expire (a lengthy wait)?
> 
> As long as the exponent appears on your team's individual report, it's
> signed out to you and cannot be reassigned.
> 
> Rick.
Signed out to your team, that is.  If you add it to the worktodo.ini
file of another machine in your team, that machine will happily start
churning on it; since the exponent is still assigned to your team, the
only real result is that the next time the machine reports in, the
exponent will be reported as assigned to it.

I've used this technique before to get a batch of exponents in one
gulp with one machine, then distribute the work between my different
machines as I consider best.

-- 
Henrik Olsen,  Dawn Solutions I/S       URL=http://www.iaeste.dk/~henrik/
  Animal behaviour is best described by the four F's
  Feed, Fight, Flee and Reproduce


------------------------------

Date: Fri, 18 Jun 1999 10:51:35 -0400
From: Jeff Woods <[EMAIL PROTECTED]>
Subject: Mersenne: AMD K7 to be king of the hill?

Will v18.1 run on an AMD K7 at appropriately fast speeds without
modification?  Looks like the K7 will be the chip to beat for at least
several months....

http://www.news.com/News/Item/0,4,38021,00.html?st.ne.fd.gif.d

Excerpts:

The delay to Coppermine--a high-performance version of the Pentium 
III--means that the Intel chip will not appear until November. That may 
mean that the chip won't appear in many PCs in 1999, since new systems 
don't often come out so late in the year.

-------------------

Under different circumstances, the delays might be irrelevant, but AMD is 
currently preparing to release its K7 processor. The chip will be announced 
later this month and start to roll out in volumes later in the summer at 
speeds of 500 MHz, 550 MHz, and 600 MHz, said several sources. Benchmarks 
released by AMD recently show that the chip will outperform the Pentium III 
and even the more-upscale Xeon processor on certain benchmarks.

"It does seem likely that the K7 will be no slower than the Pentium III. It 
is also clear that, unlike the situation with the K6, the K7 will be no 
laggard in floating point and multimedia performance," wrote Michael Slater 
in a recent Microprocessor Watch newsletter.



------------------------------

Date: Fri, 18 Jun 1999 12:08:17 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Finite or infinite?

At 12:08 AM 6/18/99 -0400, lrwiman wrote:
>
>True, the probability of a given n being prime is ~1/log(n), and 
>the sum from 1 to infinity of 1/log(2^p-1)~=log(2)*sum from 1 to infinity 1/p
>which euler proved is infinite.  I think that the probability of 2^p-1 being
>prime is considerably higher than most numbers that big, because they can
>only be divisable by numbers of the form 2*k*p+1, 

That's true, so Mersenne numbers are even more likely to be prime, due to the
limited number of potential factors.
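That 2*k*p+1 restriction is easy to check empirically.  Here's a minimal
Python sketch (standard library only; small_factors is a helper name of my
own, not from any package):

```python
def small_factors(n):
    """Trial-divide n and return its prime factors in ascending order."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Every prime factor q of 2^p - 1 (p prime) should satisfy q = 2*k*p + 1.
for p in [11, 23, 29, 37]:   # primes whose Mersenne numbers are composite
    for q in small_factors(2**p - 1):
        assert (q - 1) % (2 * p) == 0
        print(p, q, (q - 1) // (2 * p))   # p, the factor, and its k
```

For p = 11, for example, 2047 = 23 * 89, and both 23 and 89 have the form
2*k*11 + 1 (k = 1 and k = 4).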

>Well, I don't know about that.  Using conjectured behavior of Mersenne primes
>to argue other conjectures... 

They fit the formula that the nth Mersenne exponent is approximately (3/2)^n
pretty well.  That isn't a proof, of course, but it is a strong suggestion.
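The fit is easy to eyeball.  A quick Python sketch using the 37 Mersenne
prime exponents known at this writing: if the conjecture holds, log base
3/2 of the nth exponent should stay close to n.

```python
import math

# The 37 known Mersenne prime exponents (as of mid-1999).
exponents = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607,
             1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213,
             19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091,
             756839, 859433, 1257787, 1398269, 2976221, 3021377]

for n, p in enumerate(exponents, start=1):
    # deviation of log_{3/2}(p) from the conjectured value n
    print(n, p, round(math.log(p, 1.5) - n, 2))
```

The deviations stay within about 2.5 of n over data spanning six orders of
magnitude, which is the kind of fit meant above.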


+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+


________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Fri, 18 Jun 1999 12:05:41 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Finite or infinite?

At 01:00 PM 6/18/99 +1200, Halliday, Ian wrote:
>
>I acknowledge Euclid's proof of an infinity of primes, cited by Michael
>Clark, but do not see any compelling evidence in this pointing towards an
>infinity of Mersenne primes. The rarity of these numbers is part of what
>leads me to suppose that they are finite in number,

That doesn't make much sense.  Numbers of the form 10^10^n are much rarer
than Mersenne primes, but there are infinitely many of them.

> Infinity is hard for us to understand because we are,
>in some ways, only finite ourselves.

Have you been talking to my grandfather?  (below)



+------------------------------------------------------------+
|             Jud McCranie                                   |
|                                                            |
| "The mind is finite so it cannot understand the infinite." |
|     -- G. F. McCranie, Jr.                                 |
| "It depends on how finite your mind is." -- Jud McCranie   |
+------------------------------------------------------------+

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Fri, 18 Jun 1999 12:34:24 -0700 (PDT)
From: poke <[EMAIL PROTECTED]>
Subject: Re: OT: Mersenne: ARM Licenses

My understanding is that it comes with a language of its own.  My
impression is that it is an icon-based language: kind of like connecting
the blocks into a flow chart of some sort and hitting "GO".

- -Chuck


On Thu, 17 Jun 1999, David L. Nicol wrote:

> Aaron Blosser wrote:
> 
> > BTW - Read http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/
> 
> I am reminded of hype over the "thinking machines" parallel computer.
> 
> How difficult is it to write for an FPGA array?  Do tools exist to
> compile a C program into an FPGA configuration?  Has BeOS been ported
> to it?
> 
> 
> (Have just posted the URL to the egcs developers mailing list, expecting
> heavy jihads to result)
> 
> ________________________________________________________________________
>   David Nicol 816.235.1187 UMKC Network Operations [EMAIL PROTECTED]
>   "It is a computer under my desk, nobody but me uses it" -- J. Levine
> ________________________________________________________________
> Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
> 

 --
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail: [EMAIL PROTECTED]         :
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: Ask Mike! Aviation's response to Dear :
: Abby. http://www.avstarair.com        : 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Fri, 18 Jun 1999 22:04:58 +0200
From: Sylvain PEREZ <[EMAIL PROTECTED]>
Subject: Mersenne: Hello from Paris - France

I'm Sylvain Perez; I take care of the French version of GIMPS.

If any of you need francophone support, please visit 
http://www.entropia.com/gimps/fr, or send me an email.

For the cool guys who like hot chips: what do you think of this page,
http://www.agaweb.com/coolcpu/ ?  It looks to me like a good source of
information on CPU cooling.

OK, now take care of yourselves and of your Prime95s.

Sylvain
Take part in the search for very large prime numbers:
seriously, for fun, or for the prizes! ...
<http://www.entropia.com/gimps/fr>

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

Date: Fri, 18 Jun 1999 14:45:45 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: OT: Mersenne: ARM Licenses

> My understanding is that it comes with a language of its own. My
> impression is that it is an icon based language. Kind of like connect the
> blocks into a flow chart of some sort and hit "GO".

I'll try to give a cursory explanation of what you can do with FPGAs.
Bear in mind, I haven't done FPGA work in nearly 5 years, but not much is
likely to have changed.

I'm sure we all understand boolean logic...and, or, not, etc. (as well as
their counterparts nand, nor, xor, etc).

You can do all sorts of wonderful logic designs using handfuls of "74"
chips, the basic building blocks of logic design.  You can even build
memory cells using latches and the like.

An FPGA is essentially a WHOLE bunch of logic gates on a single chip, and
furthermore, they are programmable.  FPGAs are described in terms of the
number of gates and the number of pads (I/O connections) and such.  The
gates are grouped into CLBs (configurable logic blocks).  On a chip with
1M gates, you might have around 27-28 thousand of these CLBs.  Each CLB
can be configured as basically any kind of logic gate you want: configure
it as a NAND gate, or an XOR, or whatever.
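In practice a CLB is (roughly) a small look-up table, or LUT: any n-input
boolean function is just 2^n stored output bits.  A toy Python model of a
4-input LUT (the names here are mine, purely for illustration):

```python
def make_lut4(truth_bits):
    """Build a 4-input gate from a 16-bit truth table.

    truth_bits: an int whose bit i is the output for input pattern i.
    """
    def gate(a, b, c, d):
        idx = (a << 3) | (b << 2) | (c << 1) | d
        return (truth_bits >> idx) & 1
    return gate

# Configure one "CLB" as a 4-input NAND: output 0 only when all inputs are 1.
nand4 = make_lut4(0x7FFF)
assert nand4(1, 1, 1, 1) == 0
assert nand4(0, 1, 1, 1) == 1
```

Reprogramming the FPGA amounts to loading different truth_bits into each
cell, which is why one chip can become "any kind of logic gate you want."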

Besides the CLBs and I/O stuff, you also need to worry about routing.  On
an FPGA there are all these CLBs, but there are only limited ways to route
signals between them.

One problem I used to have (on occasion) was that my design would work
great on a simulator, but when it came time to actually route the design,
I'd find that I'd run out of routes.  I might have to "move" certain
functions to other parts of the chip that had more interconnects
available.  Other devices like PLDs have n-way routing, so each block can
connect to any other one; that simplifies the routing, but there are
generally fewer blocks available.  Hmmm...

What you can do is configure some of the gates as memory cells (J-K or S-R
latches), or use external RAM (which I did in my design...we had Xilinx
chips with very few blocks) to hold data.  Then configure the rest of the
blocks as your logic.

Another part of the routing process is determining the load on "bus"
signals, like the system clock for instance.  Too many devices running off
the clock means your waveform will distort and you'll wind up with a bad
signal.

This routing part can take a while...be prepared to go grab a snack or two
while it churns out the design.  Programming of the device is done serially,
but it's generally not too much of a wait for that part.  Then when you
first run it, non-simulated, you get to find out if you screwed up any
timings (registers loading in the wrong order, bus collisions, etc.).  VERY
fun stuff! :-)

When doing FPGA design, you need to work out your timing signals and
design your own registers.  It's generally a good idea to build a state
machine for the timings (a kind of master control), and be sure to
Gray-code everything in the timings to prevent race conditions, etc.
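The Gray-code point deserves a sketch: successive Gray codes differ in
exactly one bit, so a state register never momentarily decodes to a wrong
state while several bits are changing at once.  In Python:

```python
def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

codes = [gray(i) for i in range(8)]   # 0, 1, 3, 2, 6, 7, 5, 4
# Adjacent states (including the wrap-around) differ in exactly one bit:
assert all(bin(codes[i] ^ codes[(i + 1) % 8]).count("1") == 1
           for i in range(8))
```

A counter stepping through those codes can only ever glitch into an
adjacent state, never an arbitrary one.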

It really isn't easy, but you can do AMAZING things.  If you wanted to build
a 256 bit multiplier, well, it could be done.  The real trick is to KNOW how
to optimize logic tables and design.  Any Joe Schmoe can build an n-bit
multiplier, but getting it done optimally is something else entirely.
That's the tricky part, and there are all sorts of cool tricks (the same
ones that programmers would know from their pseudo-code) for getting it
done.  The fun part is that since you're designing the hardware, you can put
in your own clever "cheats" in hardware.  For a ROR, for instance, instead
of actually reading in each bit and moving it over to the right one, and
using a temporary bit to hold the extra along the way, you could just copy
the byte to another register with a 1-bit offset, then rename the registers,
making it faster (that's just a dumb example, but you get the idea).
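That rotate trick translates directly into a couple of shifts and an OR.
A toy 8-bit version in Python (function name mine, just for illustration):

```python
def ror8(x, n=1):
    """Rotate an 8-bit value right by n bits."""
    n &= 7                                  # rotating by 8 is a no-op
    return ((x >> n) | (x << (8 - n))) & 0xFF

assert ror8(0b00000001) == 0b10000000      # the low bit wraps to the top
```

In hardware this is pure wiring: each output bit is just an input bit
connected one position over, which is why it can be "free."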

Doing something like an add in hardware is quite easy.  I'm still not
entirely sure how a "real" CPU does multiplies and divides so DARN FAST in
the hardware though...  I mean, I could do a multiply by simply doing
multiple adds, but that would be pretty slow.
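For the record, the standard first step past "multiple adds" is
shift-and-add: one conditional add per multiplier *bit* rather than one
add per unit.  (Real CPUs go further still, with tricks like Booth
recoding and trees of carry-save adders.)  A Python sketch of the
bit-per-step version:

```python
def shift_add_mul(a, b):
    """Multiply two non-negative ints using only shifts and adds."""
    result = 0
    while b:
        if b & 1:          # low multiplier bit set: add the shifted multiplicand
            result += a
        a <<= 1            # shift multiplicand left one position
        b >>= 1            # consume one multiplier bit
    return result

assert shift_add_mul(37, 41) == 37 * 41
```

For n-bit operands that's n conditional adds instead of up to 2^n adds,
and the "if bit set, add shifted value" step maps naturally onto an adder
plus a bit of control logic in an FPGA.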

Anyone who has ever delved into advanced microprocessor designs probably
knows what I'm talking about.  Intel uses some pretty clever stuff to get
extra speed from their design, at the price of using more silicon.

It took me a good 2 months to design a 4 bit CPU (with 3 registers, A, B and
an accumulator) that could do only 8 instructions like add, sub, jmp, etc.
Sure, I was learning at the time, but it's complicated stuff!  (PS, I
cheated on some parts by using the bus as a "temporary" register.  It really
sped up a few parts without adding another register.  My prof. thought it
was clever, though he wasn't terribly crazy about it...I had to rearrange
some timings to avoid bus collisions and at that point, the examples he gave
in class no longer applied to my design.  Still, I'm bragging here, but my
design was faster than the others.)

I would still like to see just how hard it would be to design an n-bit
multiply with add or something in an FPGA...just send the data from the
computer to the FPGA and get the result back, maybe using the PCI bus.  I
sure wish I had my computers back so I could go over my old designs and
"refresh" my brain cells! :-)  The beauty is that once you have a small bit
design figured out, it's not terribly more complicated to add more bits to
the data design.  And if it were a dedicated device that only did one
instruction, but did it fast, that would simplify the state machine design
greatly!

Aaron

________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

End of Mersenne Digest V1 #583
******************************
