Re: Mersenne: double-check mismatches

2004-01-17 Thread Brian J. Beesley
On Saturday 17 January 2004 02:32, Daran wrote:
> On Thu, Jan 15, 2004 at 07:15:46PM +0000, Brian J. Beesley wrote:
> > ...matching
> > residuals mean that the chance of an error getting into the database as a
> > result of a computational error is of the order of 1 in 10^20.
>
> That's per exponent, isn't it?  The chance that at least one of the roughly
> quarter-million double-checked exponents is in error is about five orders
> of magnitude higher.

Sure. That's why I ran the project to triple-check a not inconsiderable 
number of smaller exponents where one (in some cases both) of the residues 
was reported to less than 64 bits, usually only 16. No discrepancies were 
discovered.
>
> Still acceptable, or at least a minor concern in comparison to the other
> security issues.
>
It's easy enough - and computationally exceedingly cheap - to report more 
residue bits but, as you say, other issues are not so easy to fix.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Re: double-check mismatches

2004-01-16 Thread Brian J. Beesley
On Friday 16 January 2004 06:10, Max wrote:
>
> It would also be interesting to learn how often the first run is bad, and
> how often the second?

Yes - I don't think this information is readily available, though sometimes 
you can infer the order of completion from the program version number.

To do the job properly either the "bad" database would need an extra field 
(date of submission) or a complete set of "cleared.txt" files would be 
required - and this would miss any results submitted manually.
>
> It seems to me that the first run should be bad more often than the second.
> Is that true? My reasoning is that the first run is usually done on modern
> (fast/overclocked/unstable/etc) hardware while the second one is done on
> old/slow but more stable/trusted hardware.

Interesting theory - but surely the error rate would be expected to be 
proportional to the run length, which would tend to make fast hardware appear 
to be relatively more reliable - conversely smaller / lower power components 
(required to achieve high speed) would be more subject to quantum tunnelling 
errors. For those who think in terms of cosmic rays, this means a less 
energetic particle hit will be enough to flip the state of a bit.

In any case the exponents ~10,000,000 which are being double checked now were 
originally tested on "leading edge" hardware about 4 years ago, when 
overclocking was by no means unknown but was often done without the sort of 
sophisticated cooling which is readily available these days.

Regards
Brian Beesley


Re: Mersenne: double-check mismatches

2004-01-15 Thread Brian J. Beesley
On Thursday 15 January 2004 01:00, Max wrote:
> Hello!
>
> Are any statistics on double-check mismatches available?
> How often does this happen?

~2% of all runs are bad.
>
> If my result mismatches someone else's, will I get any notice about
> that?

No. But you can check the database - any results in the file "bad" have been 
rejected because of a residual mismatch.

> Can I learn which of my results were confirmed by others?

Yes. Check the "lucas_v" database file.
>
> P.S. Having periodic problems with overheating (coolers get dusty)
> causing ``roundoff'' etc. hardware errors in mprime,

Can you not run a hardware monitor program based on lm_sensors so that an 
alarm sounds at a temperature below that which causes problems? Most P4 
chipsets will also automatically throttle the CPU clock if/when overheating 
occurs, so you will be notified by increasing iteration times rather than 
errors.

> I don't much believe in computational results unless they're confirmed
> by several parties.

This attitude is entirely reasonable for long runs given consumer-grade 
hardware.

> BTW, how error-proof is mprime ?

On its own, not particularly. The computational cost of reasonably robust 
self-checking would be too much to bear. However, given that independent 
double checks are run, the _project system_ is pretty good - matching 
residuals mean that the chance of an error getting into the database as a 
result of a computational error is of the order of 1 in 10^20.
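
As a rough sketch of where a figure like 1 in 10^20 comes from (this is my own back-of-envelope model, not the project's official calculation: it assumes a ~2% per-run error rate and that two bad runs produce independent, effectively random 64-bit residues):

```python
# Both independent runs must go bad (~2% each) AND the two wrong
# 64-bit residues must happen to match by chance.
per_run_error = 0.02
residue_match = 2.0 ** -64            # chance two random 64-bit values collide
p_bad_entry = per_run_error ** 2 * residue_match
print(f"{p_bad_entry:.3e}")           # roughly 2e-23, comfortably below 1e-20
```

Under these assumptions the quoted 1 in 10^20 is conservative.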

_Detected_ errors - roundoff or otherwise - are not a problem. It's the 
undetected ones which are dangerous.

If you have any ideas about how to improve this, I'm sure that George will 
consider them.

There _are_ significant weaknesses in the project - in particular there is a 
_possibility_ that forged double check results could be submitted - that is 
one reason why I'm trying to triple-check all the exponents where both tests 
were run by the same user. Yes, I'm aware that a determined person with a 
working forging formula could bypass that check, too, but we've got to start 
somewhere.

Regards
Brian Beesley


Re: Mersenne: Hyperthreading & ABIT IS7

2004-01-03 Thread Brian J. Beesley
On Saturday 03 January 2004 04:23, Terry S. Arnold wrote:
> I just brought up a new box with P4 3.0 on an ABIT IS7 MB. I only appear to
> be getting 50% of the cycles for Prime95. I am running XP Pro SP1.
>
> How do I get the full power available to Prime95?

Errm - are you sure the OS isn't counting cycles in the "virtual" processor 
as well as the real one?

If you're getting reasonable iteration times (similar to those on the 
benchmarks page - well actually they should be a bit better if you're using 
the current version of prime95) then your system is working OK. If you really 
are getting only half the cycles then your iteration time will be about 2x 
those on the benchmarks page.

Alternatively temporarily disable HTT in the BIOS & see how much your 
iteration time changes. Even so you should probably leave HTT on since it 
should improve interactive response when Prime95 is running in the background.

Regards
Brian Beesley


Re: Mersenne: Where is M23494381?

2003-12-26 Thread Brian J. Beesley
On Thursday 25 December 2003 23:22, Ignacio Larrosa Cañestro wrote:
> Where did M23494381 go?
>
> I had that exponent assigned for factoring, but today it disappeared from my
> Individual Account Report, and I can't find it in the Assigned Exponents
> Report or the Cleared Exponents Report ...

Yes - one of my systems "lost" an assignment a couple of days ago - or at 
least it did an automatic check-in & the elapsed time dropped to zero.

I'd just let it carry on running.

Regards
Brian Beesley


Re: Mersenne: Single or Dual channel memory for P4

2003-12-26 Thread Brian J. Beesley
On Friday 26 December 2003 01:36, Jeroen wrote:
> Currently I'm doing 0.078 sec/iteration with my P4-2400 (800FSB)
> I have one single 512 MB PC3200 DDR memory module in my system.
> How much speed increase will I see if I install another memory module so
> that my memory runs in dual channel? I've looked on the benchmark pages on
> mersenne.org but all I could find was just P4-2400, no single or dual
> channel.

If you plan to install dual channel memory please check that the modules are 
_identical_ in timings as well as in size & speed. Otherwise your system will 
either not give best performance or may be unstable.

I have no experience with dual channel PC3200 DDR but my P4-2666 system 
running dual channel PC2100 DDR is significantly quicker than my P4-2533 
system running 1066 MHz RDRAM. On the basis that 1066 MHz RDRAM is noticeably 
quicker than 400 MHz single channel DDR (PC3200) I'd expect that going dual 
channel would give a very significant improvement in speed _for 
mprime/prime95_ probably around 25%. Whether this would reflect in "standard 
benchmarks" using graphics, games etc. is a mystery to me. mprime/prime95 hit 
the memory bus _very_ hard & anything you can do to ramp up the CPU/memory 
bandwidth is definitely not going to hurt!

Regards
Brian Beesley


Re: Mersenne: Re: Large memory pages in Linux

2003-12-24 Thread Brian J. Beesley
On Tuesday 23 December 2003 20:15, Matthias Waldhauer wrote:
>
> Last Friday I read some messages about recent kernel modifications and
> patches for version 2.6.0. There is an "implicit_large_page" patch,
> allowing applications to use large pages without modifications. I don't
> have the time to dig into it :(

Sure. This is a much better approach than mucking about with 
application-specific modifications, which would likely involve serious 
security hazards (leaking kernel privileges to the application) and/or clash 
with other applications' private large-page code and/or large-page-enabled 
kernels in the future.

The "bad news" with kernel 2.6 is that the (default) jiffy timer resolution 
is changed from 10ms to 1ms, resulting in the task scheduler stealing 10 
times as many cycles. This will likely cause a small but noticeable drop in 
the performance of mprime. Probably ~1% on fast systems. In other words the 
cycles gained by large page efficiency could easily be swallowed up by the 
task scheduler being tuned to improve interactive responsiveness (and cope 
with more processors in a SMP setup). I suppose you could retrofit a 10ms 
jiffy timer to the 2.6 kernel, but then you could just as easily patch large 
page support into a 2.4 kernel & (hopefully) keep the stability of a tried, 
tested & trusted kernel.

Finally, the "good news". Crandall & Pomerance p441 describes the "ping pong" 
variant of the Stockham FFT, in which an extra copy of the data is used but 
the innermost loop runs essentially consecutively through data memory. C&P 
note that contiguous memory access is "important" on vector processors but 
similar memory access techniques are surely the key to avoiding problems with 
TLB architectures _and small processor caches_ - and the largest caches 
present on commercial x86 architecture are indeed small compared with the 
size of the work vectors we use for LL testing. Perhaps implementation along 
these lines could reduce the cache size dependency which seems to affect 
Prime95/mprime - though paying a very large premium for the "extreme" version 
of the Intel Pentium 4 is most certainly not cost effective in view of the 
small performance benefit the extra cache generates, most probably because 
the Prime95/mprime code appears not to be tuned for the P4 Extreme Edition.

Seasonal felicitations
Brian Beesley


Mersenne: Another thought on the L-L Test

2003-12-13 Thread Brian J. Beesley
Hi,

Another thought struck me - this could have useful applications in L-L 
testing programs.

If M is the Mersenne number being tested & R(i) is the L-L residue after i 
iterations, then
R(i+1) = R(i) * R(i) - 2 (modulo M) (by the statement of the L-L algorithm)

But note that (M - R(i))^2 - 2 = M^2 - 2MR(i) + R(i)^2 - 2
so (M-R(i))^2 - 2 (modulo M) is clearly equal to R(i+1).

How can this be of any use? Well, when we have a dubious iteration (say an 
excessive roundoff or sum error) we can check the output by redoing the last 
iteration but starting from (M-R(i)) instead of R(i) - the output should be 
the same. Furthermore the action of calculating M-R(i) is very easy - just 
invert all the bits.

Also, if we have suspicions about the accuracy of code when there is a high 
density of 1 bits, we can try just one iteration but starting at M-4 instead 
of 4. The output residual should be 14 irrespective of M (providing M>7 - as 
will often be the case!). The point here is that, just as the value 4 is 
represented by a string of p bits only one of which is set, M-4 is 
represented by a string of p bits only one of which is unset.
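
Both checks are easy to demonstrate with plain integer arithmetic (this is just a sketch, not the FFT code; p and R below are arbitrary picks for the demo):

```python
# Verify that iterating from M - R gives the same next residue as
# iterating from R, that M - R is R with all p bits flipped, and that
# one iteration from M - 4 yields 14 for any Mersenne number M > 7.
p = 31
M = (1 << p) - 1
R = 123456789                 # an arbitrary residue, 0 <= R < M

assert (R * R - 2) % M == ((M - R) ** 2 - 2) % M
assert M - R == R ^ M         # subtraction from M is bitwise inversion
assert ((M - 4) ** 2 - 2) % M == 14
print("checks pass")
```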

Regards
Brian Beesley


Re: Mersenne: Penultimate Lucas-Lehmer step

2003-12-12 Thread Brian J. Beesley
On Thursday 11 December 2003 15:39, [EMAIL PROTECTED] wrote:
>  Let p > 2 be prime and  Mp = 2^p - 1.
> The familiar Lucas-Lehmer test sets a[0] = 4
> and a[j+1] = a[j]^2 - 2 for j >= 0.
> Mp is prime if and only if a[p-1] == 0 mod Mp.
>
> When Mp is prime, then
>
> a[p-2]^2 == 2 == 2*Mp + 2 = 2^(p+1)  (mod Mp).
>
> Taking square roots, either
>
>a[p-2] ==  2^((p+1)/2) mod Mp
> or
>a[p-2] == -2^((p+1)/2) mod Mp.
>
>
> Around 20 years ago I heard that nobody could predict
> which of these would occur.  For example,
>
>   p = 3a[1] = 4 == 2^2 (mod 7)
>   p = 5a[3] = 194 == 2^3 (mod 31)
>   p = 7a[5] = 1416317954 == -2^4 (mod 127).
>
> Now that 40 Mersenne primes are known, can anyone
> extend this table further?  That will let us test
> heuristics, such as whether both  +- 2^((p+1)/2)
> seem to occur 50% of the time, and
> provide data to support or disprove conjectures.
>
This is dependent on using the Lucas sequence starting at 4. In practice 
there are a large number of other starting values which could be used - in 
fact, 2^(p-2) of them. AFAIK we happen to use 4 because it is a "nice small 
number" which works for all values of p > 2 - whereas most of the other 
values which work for p don't necessarily work for q != p.

For instance, with p=3 we could use starting value 3 instead of 4

3^2 - 2 = 7 is congruent to 0 modulo 2^3-1
4^2 - 2 = 14 is congruent to 0 modulo 2^3-1

But other values don't work:

0^2 - 2 = -2 is congruent to 5 modulo 2^3-1
1^2 - 2 = -1 is congruent to 6 modulo 2^3-1
2^2 - 2 = 2 is congruent to 2 modulo 2^3-1
5^2 - 2 = 23 is congruent to 2 modulo 2^3-1
6^2 - 2 = 34 is congruent to 6 modulo 2^3-1
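
The same enumeration can be done mechanically; a minimal sketch for p = 3, where the L-L test is a single iteration:

```python
# Enumerate every possible starting value for p = 3 and see which
# ones reach 0 after the single iteration (p - 2 = 1 step).
p = 3
M = (1 << p) - 1              # 7
good = [s for s in range(M) if (s * s - 2) % M == 0]
print(good)                   # [3, 4] - only 3 and 4 work, as listed above
```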

Obviously enough, if k is a square root of some value modulo n, then so is n-k:

(n-k)^2 mod n = (n^2 - 2kn + k^2) mod n = k^2 mod n

So, in the penultimate step, it _doesn't matter_ whether the actual residue 
is 2^((p+1)/2) or -2^((p+1)/2) - if running one iteration from 2^((p+1)/2) 
doesn't give residue 0, then neither can running one iteration from 
-2^((p+1)/2), and vice versa.

So simply testing whether 2^((p+1)/2)+2 is a quadratic residue modulo 2^p-1 
_might_ (in principle) be helpful.

Look in particular at p=11. 2^6+2 = 66 (not 68 as misprinted in my previous 
message) appears not to be a quadratic residue modulo 2^11-1 = 2047 and sure 
enough 2^11-1 is composite. 

Interestingly there are _two_ distinct solutions to x^2 mod 2^23-1 = 2^12+2: 
x=+/-2339992 & x=+/-3053916. This suggests that the number of starting values 
for a "successful" L-L test might _exceed_ 2^(p-2) by a factor of _at least_ 
2 i.e. every possible starting value would have to reach 0 after p-2 
iterations - which is clearly absurd.

So perhaps the criterion should be that there is only one _distinct_ solution 
to sqrt(2^((p+1)/2) + 2) modulo 2^p-1.

Anyhow if we "play safe" we simply find that, in the case p=23, 4098 is a 
quadratic residue mod 8388607, so we have to run a LL test - unless we happen 
to notice that 23 is a 3 mod 4 Sophie Germain prime, so 8388607 is divisible 
by 47.

Regards
Brian Beesley



Mersenne: Possible refinement of screening for Mersenne primes

2003-12-11 Thread Brian J. Beesley
Hi,

I was thinking about the possible reversibility of the Lucas Lehmer 
algorithm. In particular, for any odd number n > 1, 

(2^((n+1)/2))^2 is congruent to 2 modulo 2^n-1

i.e. 2 is a quadratic residue modulo 2^n-1. 

This is not helpful in itself as (a) there are other integer solutions x to 
x^2 mod 2^n-1 = 2, and (b) it does not distinguish in any way between prime 
and composite Mersenne numbers.

However, considering the next-to-the-last iteration appears to be 
interesting. If x is a solution to (x^2 - 2) mod (2^n-1) = 2^((n+1)/2)
then starting from residue x and performing 2 iterations will result in 
residue 0. For small n>3 (n=3 does not work because there is only one 
iteration to do in the L-L test!) we have:

n = 5: x^2-2 mod 31 = 8; x^2 mod 31 = 10; x = 14, x = 17
n = 7: x^2-2 mod 127 = 16; x^2 mod 127 = 18; x = 48, x = 79
n = 9: x^2-2 mod 511 = 32; x^2 mod 511 = 34; no solutions
n = 11: x^2 - 2 mod 2047 = 64; x^2 mod 2047 = 68; no solutions
n = 13; x^2 - 2 mod 8191 = 128; x^2 mod 8191 = 130; x = 3470, x = 4721
n = 15; x^2 - 2 mod 32767 = 256; x^2 mod 32767 = 258; no solutions
n = 17; x^2 - 2 mod 131071 = 512; x^2 mod 131071 = 514; x = 19647, x = 111424
n = 19; x^2 mod 524287 = 1026; x = 199279, x = 325008
n = 21; x^2 mod 2^21-1 = 2050; no solutions
n = 23; x^2 mod 2^23-1 = 4098; x = 2339992, x = 3053916, x = 5334691, x = 6048615
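
The table above can be reproduced with the brute-force search described below (exponential in n, so only usable for tiny n):

```python
# Find all x with x^2 = 2^((n+1)/2) + 2 (mod 2^n - 1) by exhaustive
# search over 0 <= x < 2^n - 1.
def penultimate_roots(n):
    M = (1 << n) - 1
    target = ((1 << ((n + 1) // 2)) + 2) % M
    return [x for x in range(M) if x * x % M == target]

print(penultimate_roots(5))     # [14, 17]
print(penultimate_roots(7))     # [48, 79]
print(penultimate_roots(11))    # [] - no solutions; 2^11-1 is composite
```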

In other words, it looks as if when there are no solutions to x^2 mod 2^n-1 = 
2^((n+1)/2) + 2, then 2^n-1 is not prime, although the converse is not 
necessarily true.

(1) Could someone with the required background please tidy up my logic and 
prove that the assertion above is true, i.e. that there is no prime 2^p-1 with 
p > 3 for which x^2 mod 2^p-1 = 2^((p+1)/2) + 2 has no solutions.

If so, then we have a "one-step" test which would allow us to eliminate some 
- possibly many - Mersenne prime candidates without even bothering to look 
for small factors.

(2) Can it be demonstrated that the search for solutions of x^2 mod 2^p-1 = 
2^((p+1)/2) + 2 - or at least the search for _existence_ of solutions (we 
wouldn't need the actual numerical values) - might be faster than executing 
the LL test? The method I used for small n above was just to step through 
values of k calculating k^2 mod 2^n-1, which is clearly exceedingly 
_in_efficient for large n!

Regards
Brian Beesley


Mersenne: Generalized Mersenne Numbers

2003-11-23 Thread Brian J. Beesley
Congratulations on the (unverified) discovery of the 40th Mersenne Prime.

I was thinking (always dangerous!) about generalizing Mersenne numbers. The 
obvious generalization a^n-1 is uninteresting because they're all composite 
whenever a>2 and n>1. However there is an interesting generalization:

Define GM(a,b) = a^b-(a-1), so GM(2,b) = M(b); also GM(a,1) = 1 for all a

The distribution of primes amongst GM(a,b) for small a > 2 and small b does 
seem to be interesting - some values of a seem to yield a "richer" sequence 
of primes than others. Note also that, in this generalization, some 
_composite_ exponents can yield primes.

Another interesting point: the "generalized Mersenne numbers" seem to be 
relatively rich in numbers with a square in their factorizations - whereas 
Mersenne numbers proper are thought to be square free. (Or is that just 
Mersenne numbers with prime exponents?)

A few interesting questions:

(a) Is there a table of status of "generalized Mersenne numbers" anywhere?

(b) Is there a method of devising Lucas sequences which could be used to test 
GM(a,b) for primality reasonably efficiently?

(c) Are there any values of a which result in all GM(a,b) being composite for 
b>1? (There are certainly some a which result in the first few terms in the 
sequence being composite e.g. GM(5,2) = 21, GM(5,3) = 121 & GM(5,4) = 621 are 
all composite - but GM(5,5) = 3121 is prime).

(d) Is there any sort of argument (handwaving will do at this stage) which 
suggests whether or not the number of primes in the sequence GM(a,n) (n>1) is 
finite or infinite when a > 2?
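
A quick sketch for experimenting with these questions (trial division only, so small values; the helper names are my own):

```python
def gm(a, b):
    """Generalized Mersenne number GM(a, b) = a^b - (a - 1)."""
    return a ** b - (a - 1)

def is_prime(n):
    """Naive trial division - fine for small n only."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# GM(2, b) recovers the ordinary Mersenne numbers:
assert gm(2, 7) == 127

# The example from (c): GM(5, 2..4) are composite, GM(5, 5) is prime.
print([b for b in range(2, 6) if is_prime(gm(5, b))])   # [5]
```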

Regards
Brian Beesley


Re: Mersenne: Speeding Up The Speediest Yet

2003-07-12 Thread Brian J. Beesley
On Saturday 12 July 2003 13:08, Scott Gibbs wrote:
> Dear Base:
>
> By a twist of extraordinary luck I procured a 3GHz. P IV with 1 GByte of
> RAM which translates to 12 possible 1 million candidate tests per year.  
> But I found a way to accelerate this behemoth even more!
>
>
> By installing the www.memokit.com memory optimizer and setting the priority
> of PRIME95 to REALTIME, I brought the .081 spec down to .063 at the very
> top of the Benchmark list.

1) Assuming your system is otherwise idle, changing the priority should have 
zero effect.

2) Again assuming your system doesn't run loads of other stuff, 128 MBytes is 
more than sufficient for running LL tests in Prime95. More is useful for P-1 
stage 2 and ECM, depending on the exponent.

3) I've no idea what the "memory optimizer" does, but any bigger change than 
a few percent is most unlikely - unless you do something like overrunning 
the memory clock?

BTW I'm getting 0.084 benchmark for 1792K FFT run length on a 2.66 GHz P4 
using dual-channel PC2100 DDR memory, e7205 chipset. 0.063 sounds about right 
for a 3GHz system using dual-channel PC3200 DDR.

Regards
Brian Beesley


Re: Mersenne: Squaring huge numbers

2003-06-29 Thread Brian J. Beesley
On Sunday 29 June 2003 05:42, Pierre Abbat wrote:
> I am investigating 64-100 sequences, which are chains of bases such that
> the number written 100 in each is written 64 in the next (e.g. 8,10,16,42).
> I quickly wrote a Python program to compute them. It is now computing the
> square of a 1555000 digit number and has been doing so since the 17th. What
> squaring algorithm is Python using? 

I don't know offhand, but:

(a) Python is interpreted;

(b) the Python interpreter is open source, so it shouldn't be hard to find 
out how it does big integer arithmetic;

(c) you might also look for a dependence on the GNU multi-precision library 
(GMP) since that is an obvious candidate;

(d) whatever method it's using doesn't seem to be very efficient - you have 
been running for 10 days to execute something which mprime/Prime95 would 
accomplish in a small fraction of a second.

> Is there a faster way of squaring
> numbers where the number of digits doubles at each iteration?

There is x^2 code in the published mprime/Prime95 source. To do what you 
require would obviously require you to hack together something to load the 
initial work vectors & read the result back out when you've finished. Also 
you would need to start with twice the FFT run length you would require for 
the multiplication modulo 2^p-1 (so there is room for the double-length 
result) & double the run length again for each squaring. But it looks more 
like lots of work than being difficult.

An alternative approach, cobble something together using GMP, which is 
reasonably efficient for general work, though not blindingly so.
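
For a feel of the scaling, a small sketch (pure Python; how fast each squaring runs depends on the interpreter's big-integer multiply algorithm, which is in any case far slower than FFT-based code):

```python
# Repeatedly square a big integer; the digit count doubles each time,
# so later steps dominate the total running time.
import time

x = 3 ** 100_000              # ~158,000 bits, ~47,700 decimal digits
for step in range(4):
    t0 = time.perf_counter()
    x = x * x                 # digit count doubles here
    dt = time.perf_counter() - t0
    print(f"step {step}: result has {x.bit_length()} bits, took {dt:.3f}s")
```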

Regards
Brian Beesley


Re: Mersenne: Double Checking

2003-06-28 Thread Brian J. Beesley
On Saturday 28 June 2003 18:47, you wrote:
> Will the 64-bit residue be the SAME when a given exponent
> was originally Lucas-Lehmer tested with a 384K FFT, but
> the double-check is performed using a 448K FFT ?

Hopefully - in fact the whole residue R(p) modulo 2^p-1 should be the same!

R(2) = 4
R(n+1) = R(n)^2 - 2 modulo 2^p-1

I don't see how the FFT run length should affect this ... in fact the FFT is 
only used at all because it's the most efficient method known of doing the 
multiplication of very large numbers.
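
A plain-integer reference implementation makes the point concrete - the residue is fixed by the recurrence alone, so any correct implementation, whatever FFT length it uses (or none at all), must reproduce it. A sketch for small p:

```python
# Reference L-L residue using plain (non-FFT) integer arithmetic:
# start at 4 and apply x -> x^2 - 2 (mod 2^p - 1) for p - 2 steps.
def ll_residue(p):
    M = (1 << p) - 1
    R = 4
    for _ in range(p - 2):
        R = (R * R - 2) % M
    return R

print(ll_residue(11))   # nonzero residue: 2^11-1 = 2047 is composite
print(ll_residue(13))   # 0: 2^13-1 = 8191 is prime
```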

If the residues calculated with 384K FFT & 448K FFT are different, then:

- most likely, at least one of the runs has been affected by a "glitch" of 
some sort;

- or, the 384K FFT run length is too small & some data value was rounded to 
the wrong integer during the run - I do not think that this is very likely 
unless serious abuse has been made of the SoftCrossoverAdjust parameter;

- or, there is a systematic error in the program code for at least one of the 
run lengths. Since short runs (400 or 1000 iterations) have been crosschecked 
with various FFT run lengths, again I do not think this is very likely.

Regards
Brian Beesley


Re: Mersenne: M#40 - what went wrong?

2003-06-16 Thread Brian J. Beesley
On Monday 16 June 2003 20:16, George Woltman wrote:
>
> I'm also adding code to 23.5 to check EVERY iteration for an impossible
> result such as -2, -1, 0, 1, 2.  This test will be very, very quick.

Sounds sensible to me ... but, does it not make sense to run this test during 
those iterations when testing for excess roundoff error occurs, i.e. normally 
only 1 in 128 iterations? The point here is that, once the residual gets into 
a loop like this, it won't get out again.
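
A quick sketch of why the trap is permanent - each of the "impossible" residues maps back into the same tiny set under x -> x^2 - 2, so a check every 128 iterations would still catch it:

```python
# Start from each "impossible" residue and iterate a few times; the
# value never escapes the set {-2, -1, 0, 1, 2} (mod 2^p - 1).
p = 23                        # arbitrary exponent for the demo
M = (1 << p) - 1
for bad in (-2, -1, 0, 1, 2):
    r = bad % M
    for _ in range(5):
        r = (r * r - 2) % M
    assert r in (M - 2, M - 1, 0, 1, 2), r
print("all trapped")
```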
>
> FYI, six times a result of 0002 has been reported to the
> server.  So, somehow or another it is possible for a hardware error to
> zero the FFT data without triggering an ILLEGAL SUMOUT.
>
Any instances of FFFE? Should be about as common, if this is a 
hardware related problem.

There are lots of reasons why memory corruption may occur but, in most cases, 
it is hard to see how a whole block of data should be filled with zero (or 
one) bits without corrupting the program code in such a way that the program 
would crash or have the operating system crash from under it. I think the 
most likely scenario would be that a pointer to the FFT work space could be 
corrupted & point to virtual memory which has "zero on demand" attribute. 
This might be detectable by memory leak even on systems without proper memory 
protection (Win 9x) but could be fixed easily enough by keeping _two_ 
pointers to critical work space (the values don't change that often...) & 
comparing them occasionally. As to why the pointer might get corrupted, most 
likely we're looking at stack overflow or some other program behaving badly 
rather than a bug internal to Prime95.

It would be interesting (though probably impossible) to check which OS the 
"residue 2" runs were run on. If my logic is right then I would suspect that 
they were all run on Win 9x/ME rather than NT, W2K, XP or any of the 
varieties of linux, where proper memory protection should give much better 
protection against this sort of problem.

Regards
Brian Beesley


Re: Mersenne: Re: M#40 verification run

2003-06-12 Thread Brian J. Beesley
On Thursday 12 June 2003 10:07, Nathan Russell wrote:
>
> That is a colossal understatement.  Every modulo operation destroys
> information, and I'm not sure whether one COULD construct such a file.

Indeed.

In general there will be more than one x such that x^2-2 = R modulo 2^p-1 so, 
working backwards through a number of steps, you would have only a very small 
probability of deriving the same "starting condition".

Even if the equation was easy to solve in reverse...

Regards
Brian Beesley


Re: Mersenne: mersenne prime +2

2003-04-06 Thread Brian J. Beesley
On Saturday 05 April 2003 20:33, Alexander Kruppa wrote:
> Bjoern Hoffmann wrote:
> > Hi,
> >
> > I wondered if someone has already checked if the latest Mersenne
> > numbers +2 are double primes?
> >
> > like 3+5, 5+7, 9+11, 11+13 or
> >
> > 824 633 702 441
> > and
> > 824 633 702 443
> >
> > regards
> > Bjoern
>
> Mp + 2 is divisible by 3 for odd p and thus cannot be prime.
>
> Mp - 2 however can, in theory, be prime and form a twin prime with a
> Mersenne prime. A list of the status of Mp - 2 for known Mersenne primes
> can be found on Will Edgington's page,
> http://www.garlic.com/~wedgingt/mersenne.html
>
> Try the M3status.txt link right at the top.
>
> As you see most Mp - 2 have known factors, some others have failed
> probable primality tests.
>
> However you will notice that for the present record prime, M13466917, no
> status is listed for M13466917 - 2. This is because no factors are
> known, nor has a primality test been done yet. I have searched for
> factors in vain up to almost 10^13 and am planning to do a primality
> test, but I still haven't decided which program to use for optimal speed.

I would think that running Miller's Test (for strong pseudoprimes) would be 
worthwhile... this _should_ take about the same time as a Lucas-Lehmer test 
on the associated Mersenne number, but there may be a problem with fast 
calculation modulo (2^p-3).

It might be possible to modify PRP (Woltman) and/or Proth (Gallot) to perform 
this test without an enormous amount of effort. Even without a shortcut for 
modulo (2^p-3) working, the run time should be "reasonable" on a fast PC 
system. Proth may also give some clues about constructing a Lucas sequence to 
perform a proper primality test, though the run time is likely to be a lot 
longer than a Fermat/Miller pseudoprime test & isn't worth the effort of 
starting unless the number is found to be a probable prime.

Regards
Brian Beesley



Re: Mersenne: mersenne prime +2

2003-04-05 Thread Brian J. Beesley
Hi,

The _only_ case of 2^p-1 & 2^p+1 both being prime is p=2, yielding the 
prime pair (3, 5).

Here's a proof by induction:

Consider the difference between the second successor of two consecutive 
Mersenne numbers with odd exponents:

(2^(n+2)+1) - (2^n+1) = 2^(n+2) - 2^n = 2^n * (2^2 - 1) = 2^n * (4 - 1)

which is clearly divisible by 3.

Now 2^1 + 1 = 3 is divisible by 3, therefore 2^p+1 is divisible by 3 for 
_every_ odd p (irrespective of whether or not 2^p-1 is a Mersenne prime).
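
A one-line numeric confirmation of the argument:

```python
# 2^p + 1 is divisible by 3 for every odd p, so (2^p - 1, 2^p + 1)
# can only be a twin prime pair at p = 2, giving (3, 5).
assert all((2 ** p + 1) % 3 == 0 for p in range(1, 200, 2))
assert (2 ** 2 - 1, 2 ** 2 + 1) == (3, 5)
print("verified for all odd p < 200")
```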

However I believe there is at least one known Mersenne prime 2^p-1 for which 
it is not known whether (2^p+1)/3 is prime or composite.

Regards
Brian Beesley

On Saturday 05 April 2003 18:47, Bjoern Hoffmann wrote:
> Hi,
>
> I wondered if someone has already checked if the latest Mersenne
> numbers +2 are double primes?
>
> like 3+5, 5+7, 9+11, 11+13 or

9 = 3^2 (well, usually)

>
> 824 633 702 441
> and
> 824 633 702 443
>
> regards
> Bjoern
>


Re: Mersenne: mprime and primenet

2003-04-01 Thread Brian J. Beesley
On Tuesday 01 April 2003 07:11, John R Pierce wrote:
> I just started running a recent build of mprime on a couple of linux
> systems, and noted an anomaly vis a vis primenet...
>
> When mprime connects to primenet, it's not updating dates on the rest of the
> worktodo, only on the exponent actually in progress.
>
> case in point...
>
>
> 18665107  67 13.6   7.4  67.4  26-Mar-03 12:48  18-Mar-03 17:20  xeon1b
> 2790 v19/v20
> 18665149  66 13.6  17.4  74.4   18-Mar-03 17:20  xeon1b
> 2790 v19/v20


I guess what's happening is that mprime checks in a single assignment after 
completing the P-1 factoring. The default config these days leaves factoring 
until the assignment reaches the top of the heap.

mprime -c only forces connection to server if prime.spl exists (there are 
unreported results!) or if there is now less work remaining than your 
specified value. If you want to force updating completion dates, run 
mprime -m & use the "manual communication" menu (item 12 I think).

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: servers down completely?

2003-03-25 Thread Brian J. Beesley
Hi,

There still seems to be a problem of some sort. This morning (between 0800 & 
0900 GMT) I was able to get some results checked in but since then I'm 
getting "server unavailable" from the client.
> >
> > traceroute shows a nice loop:
>
This is typical of a problem at a network leaf node. The site router has a 
default route which points down the link to the network as a whole; when the 
"leaf" (target host) is online it generates a local route which overrides the 
default for the address(es) used. Meanwhile the network node to which the 
site router connects has a route for the entire site. So a packet with the 
leaf's address bounces between site & the adjacent network node routers until 
it times out.

The _correct_ configuration at the site is to specify a static default route 
for the whole site's address which points to a null interface so that 
undeliverable packets are instantly dropped.
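The bounce is easy to model. A toy simulation (purely illustrative; the hop 
names are made up, but real routers decrement the IP TTL in just this way 
until the packet is dropped):

```python
# Toy model of the routing loop: the site router's default route and the
# upstream router's site route point at each other, so when the leaf host
# is offline a packet ping-pongs until its TTL hits zero.
def route(ttl, leaf_online):
    hops = []
    at_site = True                     # packet arrives at the site router
    while ttl > 0:
        ttl -= 1
        if at_site and leaf_online:
            hops.append("leaf")        # local route overrides the default
            return hops
        hops.append("site" if at_site else "upstream")
        at_site = not at_site          # default route bounces it back
    return hops                        # TTL expired inside the loop

print(route(5, leaf_online=True))      # → ['leaf']
print(route(4, leaf_online=False))     # → ['site', 'upstream', 'site', 'upstream']
```

The second call shows exactly the pattern traceroute exposes: the same pair of 
hops repeating until the probe's TTL budget is exhausted.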

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Hyperthreading, once again

2003-03-23 Thread Brian J. Beesley
On Sunday 23 March 2003 02:50, Steinar H. Gunderson wrote:

> But then somebody said each HT `virtual CPU' had
> their own part of the bus, so it would definitely help with I/O bound (RAM
> I/O, of course, not disk I/O) programs as well... Could this be true, or is
> this just misinformation?

I think perhaps someone is getting confused; memory access is 
chipset-dependent not CPU-dependent. The "Granite Bay" chipsets support 
interleaved DDR access which doubles the effective bandwidth.

On Sunday 23 March 2003 02:13, John R Pierce wrote:
>
> the newest xeons have 533Mhz bus, which is supported by chipsets like the
> E7501.   I started running 4 instances of mprime on a pair of dual 2.8Ghz
> Xeons, but had to wipe them a few days later and forgot to save the
> work-in-progress...  Monday I'll restart them and note how fast 1 and 2
> instances run with and without hyperthreading enabled.   IIRC, they thought
> they'd finish 18,xxx,xxx exponents in 10 days.

My 2.66 GHz P4/Asus P4G8X system (e7205 chipset) is running exponent 18600979 
(1024K run length) at 0.040 sec/iter giving a total run time of ~8.5 days. 
Though it uses a "Granite Bay" chipset, this mobo supports "consumer" S478 P4 
CPUs. I'm using a "Northwood" 2.66 GHz processor (which doesn't support HT, 
though the chipset does) because this seems to be optimum grunt/$ at present.
>
> Note re: the memory contention issue, the dual xeon chipsets like the e7501
> have higher memory bandwidth as they use interleaved DDR (2 banks), this
> may at least partially shift the performance vis-a-vis two seperate p4
> systems.

Possibly, but dual-bank DDR on a uniprocessor system is better still - puts 
P4 DDR systems into the same league as systems supporting (expensive) PC1066 
RDRAM, maybe even a few percent ahead though using only PC2100 DDR ("266 MHz" 
actually 133 MHz dual-pumped).

> OTOH, dual xeon e7501 systems are not cheap.   The ones I built for work
> were $3300 each with dual 2.8Ghz and 2GB ram, but without hard drives,
> these are 2U rackmount servers using Intel's SE7501WV2 motherboard and a
> Intel SE2300 rack chassis.  They are also *extremely* noisy (seems to be a
> feature of all dual xeon 2U rack servers as they need massive cooling for
> the CPUs and 6 hotswap SCSI drives).

The availability of consumer mobos with "Granite Bay" chipsets makes 
Xeon-based systems look _very_ expensive for the CPU power you get from them. 
Effectively the only performance advantage from the Xeon is the larger L2 
cache - memory contention issues will totally undermine this so far as we're 
concerned. The benefit of Xeon server systems is power density - useful if 
you want to put a large bundle of them in a small area. But shifting all that 
heat from a small case really is going to require a lot of airflow, hence the 
noise. In a 2U rackmount case there's not much height for a heatsink & fan, 
therefore small components have to be driven fast. Even then the airflow from 
a rackfull of servers is _warm_ - sufficiently so to be useful as e.g. a 
hairdryer - you're going to need aircon to dump the excess heat to the 
outside world.

Summary - anyone self-building systems to run Prime95/mprime at home is 
_almost certainly_ going to get far more CPU power per dollar (purchase 
price; electricity costs will be similar) from 2 x P4 systems using "Granite 
Bay" chipset than from 1 x dual Xeon system with the same speed CPU.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Overheating!

2003-03-19 Thread Brian J. Beesley
On Wednesday 19 March 2003 00:10, Elias Daher wrote:
> Thanks all, actually I was thinking of opening the case, (but too lazy to
> do it...) and the weird thing now is that the temperature is stable with
> Prime95 running with other applications, and it's not reaching the level it
> was reaching two days ago and I'm still testing the same number... It's

Odd. I wonder if a fan stalled. Possibly even the case fan, if you have only 
one fitted.

> still hot though, the CPU is running at 65°C and the board at 55°C...
> Anyway, the P4 is tough, it can handle it! (Once the CPU temp reached 85°C
> for more than 10 minutes cause it was running without a fan!!!)

AFAIK the Intel spec for thermal throttling is 75C.

However the P4's I'm running are all reasonably cool:

P4 1.8A, retail box HSF, 50C
P4 2.53B, Zalman CuAl HSF, <2000 rpm, 44C
P4 2.66B, Zalman CuAl HSF, <2000 rpm, 45C

Zalman HSF comes with a rheostat which allows you to vary the fan speed from 
(approx) 1500 - 3000 rpm - below 2000 rpm it's highly unlikely that you will 
be able to hear the fan, so this HSF is quiet as well as effective. The only 
change I made to Zalman's installation instructions was to use Arctic Silver 
thermal compound instead of the small tube of gloop included in the Zalman 
kit. Note, I'm not saying Zalman's gloop is useless, but Arctic Silver is 
usually a couple of degrees cooler than other stuff.

A decently ventilated case helps, too ...

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Overheating!

2003-03-18 Thread Brian J. Beesley
On Monday 17 March 2003 21:20, you wrote:
> Hi
> I had let my computer work on factoring for a while, and I switched back to
> LL testing two days ago, and last night my computer was beeping all the
> time because the cpu and the board were overheating... There should be no
> problem with my hardware, I got good fans that I bought 3 months ago, and
> everything was fine until yesterday... I have a P4 1.5 Ghz... There's
> something weird I noticed but I don't know if it's a factor and it is that
> SiSoft Sandra indicates the voltage sensor is detecting -8.7V and -3V for
> the -12 V and -5 V respectively...

Weird. Did you try a different version of Sandra? I suspect the version 
you're using doesn't understand the chipset properly.

> Does anyone have an idea? (sure! ;-) ) Does version 23.2.1 of prime95
> (which I'm using) use the cpu more efficiently (and overheating it as a
> consequence) than the previous versions which weren't causing any problems
> on my current configuration?
>
I doubt Prime95 version has much to do with it. The P4's I have are all 
Northwood processors (1.8A, 2.53 & 2.66); the slower two have been running 
since v21 & I haven't seen any significant jumps in CPU temp with version 
(note, I use mprime rather than Prime95).

It's certainly true that running LL tests (or DC assignments) will drive the 
chip hotter than running factoring, because the parts of the chip involved in 
SSE2 will be fully active instead of more or less idle.

Other things which could be a significant factor:

(a) chipset setup - usually you can mess around with fan speeds and/or 
thermal throttling to change the balance between speed/power consumption and 
noise/battery life - particularly battery life of notebooks. Note, sometimes 
this setup can be affected by power saving parameters settable from within 
the running operating system as well as through the BIOS at system boot time.

(b) systems which have been running for a while often accumulate dust, this 
can get stuck in between heatsink fins & reduce efficiency of cooling.

However, fans do sometimes fail (even at under 3 months of age); my guess is 
that 
your CPU cooler fan has deceased. The thermal control circuitry embedded in 
the P4 does at least keep the system limping along with a failed CPU cooler 
fan. If the heatsink itself is halfway decent, and the case is properly 
ventilated, overheating could well fail to occur unless something heavy like 
LL testing is running.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: p4 xeons...

2003-03-16 Thread Brian J. Beesley
On Saturday 15 March 2003 01:07, John R Pierce wrote:
>
> another minor question...  Is there any way to force CPU affinity, or does
> mprime do that automatically?

Unlike Windows, linux has a smart CPU/task allocation algorithm that tries 
hard (but not too hard) to run a thread on the same CPU it used last. So 
there's no need to force CPU affinity, in fact this might well damage 
throughput.

As for hyperthreading - I believe the development kernel (2.5.x) has support 
for hyperthreading. You will almost certainly need to build your own custom 
kernel to obtain this support.

The best sources of information on task allocation on a multiprocessor linux 
system are, not surprisingly, the kernel documentation and the source code 
itself.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: P-1 on PIII or P4?

2003-03-10 Thread Brian J. Beesley
On Monday 10 March 2003 07:49, Daran wrote:
>
> B1 and B2 are supposed to be chosen by the client so that the cost/benefit
> ratio is optimal.  Does this mean that P4s choose B2 values which are
> too high?  Or does everything else choose values too low?  Or is there some
> reason I can't think of, why higher values might be appropriate for a P4?

George?
>
> In fact, I'm not even sure it does get a higher B2 - the apparent
> difference could be, as Brian suggested, due to differences between
> versions.  I don't have access to a P4, so I can't do any testing, but I'd
> appreciate it if you or someone else could try starting a P-1 on the same
> exponent (not in one of the ranges where it would get a different FFT
> length) on two different machines, with the same memory allowed.  You would
> not need to complete the runs.  You could abort the tests as soon as
> they've reported their chosen limits.

I just tried Test=8907359,64,0 on two systems - an Athlon XP 1700+ and a 
P4-2533, both running mprime v23.2 with 384 MB memory configured (out of 512 
MB total in the system). These were fresh installations, I did nothing apart 
from adding SelfTest448Passed=1 to local.ini to save running the selftest.

The Athlon system picked B1=105000, B2=1995000 whilst the P4 picked 
B1=105000, B2=2126250. So it seems that P4 is picking a significantly but not 
grossly higher B2 value.

Yes, I checked, both systems are using 448K run length for this exponent 
(though it's only just under the P4 crossover).

Regards
Brian Beesley
>
> > Would I unreserve all the exponents that are already P-1 complete?
> > If I don't change the DoubleCheck into Pfactor then couldn't I just let
> > the exponent run and then sometime after P-1 is done move the entry and
> > the 2 tmp files over to another machine to finish it off?
>
> If you're going to feed your other machines from this one, then obviously
> you won't need to unreserve the exponents they need.  But there's an easier
> way to do this.  Put SequentialWorkToDo=0 in prime.ini, then, so long as it
> never runs out of P-1 work to do, it will never start a first-time or
> doublecheck LL, and there will be no temporary files to move.  I also
> suggest putting SkipTrialFactoring=1 in prime.ini.
>
> > That sounds like more work than I care to do...
>
> I agree that with 20 boxes, the work would be onerous.
>
> > ...I can see having 1 machine
> > do P-1 on lots of double-checks.
>
> That would be well worth it.  Since one box will *easily* feed the other
> twenty or so, you will have to decide whether to unreserve the exponents
> you P-1 beyond your needs, or occasionally let that box test (or start
> testing) one.
>
> You may find a better match between your rate of production of P-1 complete
> exponents, and your rate of consumption, if you do first-time testing.
>
> [...]
>
> > As an mprime user I edit the local.ini file all the time.  Per your notes
> > I upped *Memory to 466.
>
> That will certainly help exponents below 9071000 on a P3, or 8908000 on a
> P4. The current DC level is now over 917, so I doubt this will help
> much, (though of course, it won't harm, either).  I haven't tried.  I'm
> still getting enough sub 9071000 expiries.
>
> > --
> >   [EMAIL PROTECTED] - HMC UNIX Systems Manager
>
> Daran G.
> _
> Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
> Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Optimal choice of E in P-1 computations

2003-03-09 Thread Brian J. Beesley
On Sunday 09 March 2003 12:24, Daran wrote:

> In the hope of more quickly collecting data, I have also redone, to 'first
> time test' limits, every entry in pminus1.txt which had previously done to
> B1=B2=1000, 2000, and 3000.  For these exponents, all in the 1M-3M ranges,
> the client was able to choose a plan with E=12.  Unfortunately, I found far
> fewer factors in either stage 1 or stage 2 than I would expect, which
> suggests to me that exponents in this range have had additional factoring
> work (possibly ECM) not recorded in the file.

1) What about factors which would be found with your P-1 limits but happened 
to fall out in trial factoring? (In fact a lot of the smaller exponents - 
completed before P-1 was incorporated in the client - seem to have been trial 
factored beyond the "economic" depth.) In any case, if you're using very 
small values of B1 & B2, I would _expect_ that a very high percentage of the 
accessible factors will be found during "normal" trial factoring.
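The overlap can be made concrete with a hypothetical smoothness check (my own 
sketch, not GIMPS code): P-1 finds a factor q of 2^p-1 when q-1 is B1-smooth 
(stage 1), or B1-smooth times one extra prime up to B2 (stage 2). The factor 
2351 of 2^47-1 is used purely as a worked example; small q like this are 
exactly the ones trial factoring sweeps up first.

```python
def leftover_after_b1(n, b1):
    """Divide out every prime factor <= b1; return what remains (1 if smooth)."""
    for f in range(2, b1 + 1):
        while n % f == 0:
            n //= f
    return n

def pminus1_would_find(q, b1, b2):
    r = leftover_after_b1(q - 1, b1)
    return r == 1 or r <= b2       # r == 1: stage 1; single prime <= B2: stage 2

# q = 2351 divides 2^47 - 1, and q - 1 = 2350 = 2 * 5^2 * 47:
print(pminus1_would_find(2351, b1=50, b2=500))   # → True  (stage 1: 47 <= B1)
print(pminus1_would_find(2351, b1=40, b2=500))   # → True  (stage 2: 47 <= B2)
print(pminus1_would_find(2351, b1=40, b2=45))    # → False
```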

2) It would not surprise me at all to find that there is a substantial amount 
of P-1 work being done which is not recorded in the database file. I've also 
had "very bad luck" when extending P-1 beyond limits recorded in the database 
file for exponents under 1 million. Eventually I gave up.

3) ECM stage 2 for exponents over 1 million takes a serious amount of memory 
(many times what P-1 can usefully employ), whilst running ECM stage 1 only is 
not very efficient at finding factors - lots of the power of ECM comes from 
the fact that stage 2 is very efficient (assuming you can find memory!)

> Of particular concern is the
> possibility that in addition to reducing the number of factors available
> for me to find, it may have upset the balance between 'normal' and
> 'extended' P-1 factors - the very ratio I am trying to measure. 

One way to deal with this would be to deliberately forget previously reported 
work, i.e. take _all_ the prime exponents in the range you're interested in, 
trial factor to taste then run P-1. This way you can be sure that, though the 
vast majority of the factors you will find are rediscoveries, the 
distribution of the factors you find is not distorted by unreported negative 
results.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: please recommend a machine

2003-03-08 Thread Brian J. Beesley
On Saturday 08 March 2003 03:35, spike66 wrote:
> Some of you hardware jockeys please give me a
> clue.  I have two machines at home running GIMPS 24-7.
> One is a P4-2Ghz.  The other is a 5 yr old 350 Mhz
> PII, which is in need of a tech refresh.  Clearly
> there is more to computer performance than clock
> speed, but for GIMPS I suppose clock speed is
> everything.  Is it?  My other machine already has
> a DVD writer, networked etc, so I need not rebuy
> that.  What should I buy?  I have no hard spending
> limit, but I am looking for value and suppose that
> a thousand dollar machine would be more than
> adequate.  Does AMD vs Intel matter?  Does bus
> speed matter?

1) In this position you are _far_ better off building your own system. It's 
interesting in itself, offers a cash saving when you can delete unwanted 
parts or recycle old peripherals and is the only way you can guarantee to get 
optimised performance. Medium or large systems builders catering for the 
retail or direct sales market will almost always not be able tell you the 
specification of important parts of the system - in fact just about all they 
will be able to tell you is processor type, speed & HDD size. This is like 
choosing an automobile on the basis of number of cylinders, top speed & 
number of seats - usually there are other factors you might want to consider.

2) If you want to use the system for "general purpose" tasks then there is 
something to be said for AMD systems. But, because of the efficiency of SSE2 
code in Prime95/mprime, a P4 system is _much_ better value for money if 
that's what you're intending to use the system for.

3) Whether you go for AMD or Intel, avoid the top two or three CPU speeds. 
You pay increasingly large amounts of money for relatively small increases in 
speed. In any case, if you're building a system which is otherwise "state of 
the art", you will be able to upgrade the processor in one year to a 
processor chip faster than today's "top of the range", keep the old one and 
still have money in your pocket. (The old processor chip can of course be 
sold on eBay).

4) The chipset for P4 systems is in a state of flux at the moment. There are 
several available but only two worth considering: i850e and e7205. The 
differences here are substantial e.g. i850e supports 533 MHz RDRAM (PC1066) 
whereas the e7205 supports dual-channel DDRAM. Actually the theoretical 
memory bandwidths are the same - but DDR is much cheaper & easier to obtain. 
Also the e7205 chipset, and only the e7205 chipset, supports hyperthreading.

Systems using single-channel DDRAM memory will be considerably slower with 
the same clock speed - probably 10-15%. The way I look at it, the ~$200 
required to buy a 10% faster processor is better spent on a more efficient 
memory subsystem. Same applies with RDRAM - using "cheap" PC800 RDRAM in a 
system which supports PC1066 is a very bad compromise.

The last two systems I've built have been as follows:

P4-2533 / Asus P4T533-C / 4 x 128MB PC1066 RDRAM
P4-2666 / Asus P4G8X / 2 x 256MB PC2100 DDRAM

Neither of these mobos are cheap, but bear in mind that the P4G8X has just 
about everything you might need on board, except the graphics adapter. (6 
USBv2, 2 Firewire, 6 channel audio, gigabit LAN as well as the still-standard 
PS/2, serial & parallel ports). It also supports RAID but only on the two 
serial ATA ports it boasts in addition to the two standard IDE ports.

The combination I used for the P4-2666 system should come in _well_ under 
$1000. You could probably recycle the old peripherals (monitor, kb, mouse, 
floppy, CD, hard disk) from your old P2-350 unless you really feel like 
shelling out. 

You _might_ be able to recycle the old case as well - however you will 
probably need to replace the PSU with a new one in order to supply the power 
requirements of a P4 system. Look for PSUs rated over 300W with dual fans - I 
particularly recommend the Enermax PSU with rheostat fan speed control 
because it's quiet & effective, though certainly not cheap.

In any event it would be worthwhile considering replacing the case with a 
new, top-end version in order to get decent cooling without having to have 
noisy fans. The Coolermaster ATC-200/201 cases are very well built, elegant 
and have 4 quiet fans - cooler and much quieter than one "standard" one. 
Thermaltake Xaser cases are cool and also feature multiple fans with a speed 
controller; they're significantly cheaper and undoubtedly adequate but much 
"tinnier" in build quality.

You should also consider the Zalman CPU cooler instead of the retail Intel 
unit. It's very effective and very quiet (except when turned up to maximum, 
which shouldn't be necessary!) Adding a Coolermaster case & Zalman fan to 
the suggested P4-2666/P4G8X system will get you close to the $1000 mark.

The last wrinkle here is that you will probably _not_ be able to recycle the 
graphics card from your old system. All decent P4 mobos _require_ an AGP 
graphics card.

Re: Mersenne: P-1 on PIII or P4?

2003-03-06 Thread Brian J. Beesley
On Thursday 06 March 2003 13:03, Daran wrote:
>
> Based upon what I know of the algorithms involved, it *ought* to be the
> case that you should do any P-1 work on the machine which can give it the
> most memory, irrespective of processor type.

... assuming the OS allows a single process to grab the amount of memory 
configured in mprime/Prime95 (this may not always be the case, at any rate 
under linux, even if adequate physical memory is installed.)
>
> However, some time ago, I was given some information on the actual P-1
> bounds chosen for exponents of various sizes, running on systems of various
> processor/memory configurations.  It turns out that P4s choose *much
> deeper* P-1 bounds than do other processors.  For example:
>
> 8233409,63,0,Robreid,done,,4,45,,Athlon,1.0/1.3,90
> 8234243,63,0,Robreid,done,,4,45,,Celeron,540,80
> 8234257,63,0,Robreid,done,,45000,742500,,P4,1.4,100
>
> The last figure is the amount of available memory.  The differences between
> 80MB and 100MB, and between 8233409 and 8234257 are too small to account
> for the near doubling in the B2 bound in the case of a P4.

Yes, that does seem odd. I take it the software version is the same?

The only thing that I can think of is that the stage 2 storage space for 
temporaries is critical for exponents around this size such that having 90 
MBytes instead of 100 MBytes results in a reduced number of temporaries, 
therefore a slower stage 2 "iteration time", therefore a significantly lower 
B2 limit.

I note also that the limits being used are typical of DC assignments. For 
exponents a bit smaller than this, using a P3 with memory configured at 320 
MBytes (also no OS restriction & plenty of physical memory to support it) but 
requesting "first test" limits (Pfactor=,,0) I'm getting B2 
~ 20 B1 e.g.

[Thu Mar 06 12:07:46 2003]
UID: beejaybee/Simon1, M7479491 completed P-1, B1=9, B2=1732500, E=4, 
WY1: C198EE63

The balance between stage 1 and stage 2 should not really depend on the 
limits chosen since the number of temporaries required is going to be 
independent of the limit, at any rate above an unrealistically small value.

Why am I bothering about this exponent? Well, both LL & DC are attributed to 
the same user... not really a problem, but somehow it feels better to either 
find a factor or have an independent triple-check when this happens!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: exponent reported not prime in 'results.txt', but not communicated to primenet.

2003-02-25 Thread Brian J. Beesley
Very strange ...

(1) this could be a problem with the alternate directory logic, which I've 
never tested - on my dual cpu systems I've just installed twice in seperate 
directories. On linux this wastes no space since you only need one copy of 
the executable - by default the program uses workfiles in whichever directory 
is current when the program is started. Since this includes local.ini - with 
the Pid line - you can start as many copies of mprime as you like, provided 
they're in seperate directories - whilst this is useful for testing, there is 
clearly an efficiency issue if you have more mprimes running than you have 
CPUs!

(2) there might possibly be an issue if there is a problem writing prime.spl 
for some reason e.g. the filesystem is temporarily full / user quota exceeded 
or some similar problem. Try digging in the logfiles, something interesting 
might turn up.

I always run with "InterimFiles=100" in prime.ini. If I got a problem 
like that I would put the assignment back in worktodo.ini & use the savefile 
from the last million iteration checkpoint (unless I had a later Pnnn 
file from a daily backup).

Regards
Brian Beesley

On Tuesday 25 February 2003 21:48, george de fockert wrote:
> Hi,
> something strange on one of my (dual xeon) machines.
>
> In the 'resu0001.txt' :
>
>  [Mon Feb 24 14:53:45 2003]
>  M16519367 is not prime. etc.
>
> So, work on this exponent is done.
>
> in the prim0001.log :
>
>  [Wed Feb 05 09:37:01 2003 - ver 22.7]
>  Updating computer information on the server
>  Sending expected completion date for M16519367: Feb 17 2003
>  [Fri Feb 07 16:54:36 2003 - ver 22.7]
>  Getting exponents from server
>  Sending expected completion date for M18103663: Mar 05 2003
>  [Tue Feb 25 09:54:24 2003 - ver 22.7]
>  Sending text message to server:
>  UID: S21786/C904710A1, M18103663 completed P-1, B1=22, B2=5225000,
> WY1: D437A9BA
>  Sending result to server for exponent 18103663
>
> So, its not reported to primenet.
> There is also no 'prime.spl' file.
>
> In the Primenet individial accounts :
>
>  16519367 66   197312032.5  -8.5  51.5  05-Feb-03 08:35  24-Jan-03
> 08:51  C904710A11680 v19/v20
>
> Indeed, primenet knows nothing, and still assumes my machine is busy with
> that exponent.
>
> What to do now, enter it in 'worktodo.ini' manually ?
>
> George
>
>
> _
> Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
> Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Looking for small factors in exponent range 61000-62000

2003-02-13 Thread Brian J. Beesley
On Friday 14 February 2003 01:33, G W Reynolds wrote:
> Just in case anyone is interested, I am concentrating mainly on ECM, but
> will also do some P-1 factoring with B1=10M, B2=1000M and trial factoring
> to 2^60. I am reporting the ECM by email to George, and the rest to the
> primenet server. (Is this OK, or should I report it all via email?)

Well I think that's right ... if you report factoring results to PrimeNet, 
the results get into the database files, so everyone else can see what's 
being done. Unfortunately PrimeNet doesn't understand ECM results.

I do have reservations about running P-1 with B2=100B1. The conventional 
argument says that P-1 is optimal when you spend about as much time in stage 
1 as in stage 2. This implies B2~=30B1 for most exponents.
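That rule of thumb can be sanity-checked with a back-of-envelope cost model 
(my own sketch, not the client's actual optimizer): stage 1 costs about 
1.44*B1 modular squarings, since lcm(1..B1) is roughly e^B1, while stage 2 
costs roughly one multiplication per prime in (B1, B2]. Equating the two 
gives a B2 of a few tens of B1:

```python
from math import isqrt

def balanced_b2(b1, hi=4_000_000):
    # Sieve of Eratosthenes up to hi, then walk primes above B1 until the
    # stage 2 work (one mult per prime) matches the stage 1 work (~1.44*B1).
    sieve = bytearray([1]) * (hi + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, isqrt(hi) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    stage1_cost = int(1.44 * b1)
    count = 0
    for q in range(b1 + 1, hi + 1):
        if sieve[q]:
            count += 1
            if count >= stage1_cost:
                return q              # smallest B2 balancing the two stages
    return hi

# For B1 = 105000 this lands near 2.2M - a few tens of B1, the same order
# as the B2 values the client actually picks at that B1.
print(balanced_b2(105_000))
```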

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Why is trial factoring of small exponents slower than large ones?

2003-02-07 Thread Brian J. Beesley
On Friday 07 February 2003 04:00, G W Reynolds wrote:
> I am using mprime 22.12 on a pentium 166 MMX to do trial factoring. For the
> exponents currently being assigned from primenet it takes this machine
> about 12 minutes to factor from 2^57 to 2^58.
>
> I thought I would try factoring some small exponents (under 1,000,000) from
> the nofactors.zip file. I put FactorOverride=64 into prime.ini and started
> mprime as usual but progress is _much_ slower, it will take about 8 hours
> to factor from 2^57 to 2^58.
>
> Can someone tell me why the time difference is so great?

Factors (if any) of 2^p-1 are all of the form 2kp+1, so there are fewer 
candidate factors to check in any particular range as the exponent increases.
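The slowdown follows directly from the 2kp+1 form. A sketch counting 
candidates in the 2^57 to 2^58 window for a representative small and large 
exponent (example values chosen by me; the real client also sieves candidates 
by small primes and q mod 8, so absolute times differ):

```python
# Count k such that lo <= 2*k*p + 1 <= hi: the candidate density in a fixed
# bit range scales as 1/p, so small exponents mean far more candidates.
def candidate_count(p, lo_bits=57, hi_bits=58):
    lo, hi = 2**lo_bits, 2**hi_bits
    return (hi - 1) // (2 * p) - (lo - 1) // (2 * p)

print(candidate_count(750_000))      # small exponent: roughly 9.6e10 candidates
print(candidate_count(18_600_000))   # large exponent: roughly 3.9e9
```

The small exponent has well over an order of magnitude more candidates to 
test in the same bit range, which is the bulk of the time difference observed.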

FYI the exponents under 1 million have already had a lot of 
"extra-curricular" factoring work done on them - _all_ have had P-1 run to 
limits much higher than the "economic" values suggested by mprime/Prime95, 
many have already had extra trial factoring done, and some of the smaller 
ones have had substantial amounts of ECM work. I'd expect you to find _some_ 
factors by extending trial factoring even further, but not very many.

If you have a reasonable amount of memory on your system I'd recommend 
running P-1 on selected exponents just above 1 million - use the pminus1 
database file to direct your work. Otherwise ECM on small exponents, or trial 
factoring on those exponents which have not been done "deep enough" - there 
are a considerable number of these in the 6M - 8M exponent range.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: ECM

2003-02-02 Thread Brian J. Beesley
On Saturday 01 February 2003 07:53, Eric Hahn wrote:
>
> Let's say you've done 700 curves with B1=25,000 to
> find a factor up to 30-digits... and you've been
> unsuccessful... :-(
>
> Now you've decided to try 1800 curves with
> B1=1,000,000 to try and find a factor up to
> 35-digits.
>
> Do you have to start from scratch... or can you
> somehow use the information from attempting to
> find a factor up to 30-digits... to save some
> time and energy... and speed up the search
> process at the same time???

Effectively every curve is "starting from scratch". The only way I can think 
of using information from previous unsuccessful curves is that it's probably 
a Bad Idea to use the same s-value (well, it's a complete waste of time if 
the limits are not increased).

The point here is that the chance of picking a previously-used s-value is 
infinitesimal, providing the "random" number generator on the system is 
reasonably behaved.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: poaching

2003-01-29 Thread Brian J. Beesley
On Wednesday 29 January 2003 01:07, Paul Missman wrote:
>
> You bring up an interesting point about the software, I suppose.  I never
> thought that George or Scott considered the software proprietary. 

This whole area is a legal minefield ... Even open source software can be 
proprietized, e.g. the mess Microsoft have made of TCP/IP starting from the 
published BSD source.

> I'd
> think that a basic Lucas-Lehmer type software could be written without too
> much headache, though I've never tried my hand at it.

If you try, let us know how you get on.

Making the thing _work_ is easy enough. Making it tolerably efficient, say 
better than 50% the speed of Prime95/mprime (or Glucas or Mlucas on other 
hardware) is a different matter altogether. Assuming you're starting from 
scratch, not just filching other people's DWT code.
>
> I do wonder at your assertion that, were I to discover a large prime by a
> self written program, I would have to publish the program along with the
> discovered prime.  I'd imagine that, as long as the number could be
> verified by independent means, it would be an publishable fact.

Not being authorized to speak for EFF, my guess is that you may be right. 
However, what "independent means" are you going to use for the demonstration, 
if not established LL testing software? The numbers we're talking about are 
so large that other methods e.g. ECPP are _at least_ several orders of 
magnitude too slow to be viable, even if you had a large grid of 
supercomputers at your disposal. In any case, $100K is a lot more than you'd 
expect to be able to sell a LL testing program for - so disclosure of the 
source (at least the key working parts of it) would seem to be sensible.
>
> I'll admit that I didn't follow this poaching thread from the beginning.  I
> just noticed much more than the normal volume of Mersenne email, and
> decided to see what was up.  The Idea that someone can "poach" a number
> still strikes me as humorous.  It is a bit like me trying to copyright the
> number 1234567890.  I doubt that the claim would hold much water if tested
> in a court of law.

This is a very interesting issue. The software, music and film industries are 
trying very hard to do just this sort of thing - and having considerable 
success in the courts. (Patenting software, and extending the life of 
copyright for absurd times). Don't forget that a digital recording, like 
anything else stored in a computer, is nothing more than a string 
of 0 and 1 bits, i.e. it can be thought of as simply a very large number. 
Don't forget also the hoo-ha over the publication of a prime number specially 
crafted to contain the source code of the DeCSS algorithm.
>
> Is the negative impact here that large groups of numbers are being tied up
> for unreasonable lengths of time?  Or is it that some lucky person might
> just happen to stumble on a large prime, and publish it, while someone in
> GIMPS/Primenet had it checked out for testing?

I thought the negative impact was that a user assigned a job of work may be 
discouraged by finding that job of work made irrelevant by someone else doing 
the same job _and reporting the result to the same authority_. Especially 
when the "poacher" is using the statistics tables published by the project to 
select the jobs (s)he is going to "poach".

The fact that the lowest outstanding exponent in some (arbitrary) class may 
remain fixed for a long time doesn't bother me, but I can understand how it 
might irritate some people.

As for the chance of a "poacher" "poaching" a prime discovery - well, at best 
these are much the same as anyone else's; in practice they're probably a lot 
worse, as many of the exponents being "poached" will have been recycled due 
to possible errors in a previous run, possibly more than once by the time the 
exponent is close enough to the "trailing edge" to be a "tempting target".
>
> If the latter, I'd have to say, in my mind, finding a large prime is pretty
> much a crap shoot.  However, I'd support some sort of reasonable timeout on
> the "ownership" of numbers checked out from the database.  

Yes. And there is at present a sort of "timeout", it's just that it's 
possible to work around this by checking in every few months without actually 
doing any work.

> Also, if I had
> some magic insight into the probability of a particular number turning up
> prime, I'd probably want to test it, even if I didn't "own" it.

Good luck to you. My guess is that, if your "magic insight" was based on 
mathematics rather than "gambler's intuition", the "insight" would be worth 
considerably more than the actual discovery of the prime.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #1039

2003-01-28 Thread Brian J. Beesley
On Tuesday 28 January 2003 06:08, Mary K. Conner wrote:
>
> I'm speaking of triple or higher checks where all residues
> agree.  The only reason to do those other than the exponents that have only
> 16 bit residues is to check for cheating.  If those kinds of checks need to
> be done, they ought to be done with intelligence, not by random poaching.
>
As someone who has done a fair number of these, and is continuing to run, I 
think I am operating responsibly. I am selecting for triple-checking those 
exponents where both (or all) the entries in lucas_v.txt were contributed by 
the same user id.

So the only way I would be "poaching" is if someone else has already 
"poached". 

Naturally the situation will sometimes arise by chance that LL test and DC on 
a particular exponent happen to be assigned to the same user, particularly 
those who process a lot of exponents. I'm also aware there are other ways of 
cheating, but this method seems likely if the idea is to boost league table 
rankings.

As it happens, I have not found any evidence of cheating, but I have exposed 
a problem which resulted in a few (very few, well, two to be exact) exponents 
being accepted as "double checked" resulting (I think) from a single run 
being reported twice.

This work is complete up to the mid-5 million range; the leading edge is just 
below 6 million. The total number of exponents involved is not enormous.

As I proceed, I'm also completing any trial factoring which might have been 
missed, and running P-1 to "high memory LL test limits". A number of 
exponents have been eliminated from lucas_v by finding a factor.

Incidentally, one of the factors I found (but only one, so far) has appeared 
in my PrimeNet personal status report. I don't know why this should be.

I have a couple of people working with me on this. If anyone else would like 
to get involved, please e-mail me. But don't expect exponents significantly 
smaller than those you might get for normal DC assignments.

I'm also working, at low priority & again with a couple of helpers, at 
completing triple-checking for all small exponents (under 1 million). Why? So 
far as I'm concerned, it's something useful for a couple of slow systems to 
do whilst they're acting as room heaters! I certainly don't regard this 
sub-project as "important", it's just that the systems I'm employing are too 
slow to be of much use to PrimeNet, even for factoring assignments.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Poaching -- Discouragement thereof

2003-01-26 Thread Brian J. Beesley
On Sunday 26 January 2003 19:55, Mary K. Conner wrote:
>
> [ big snip - lots of _very_ sensible ideas!!! ]
> 
> Primenet, and Primenet should preferentially give work over 64 bits to SSE2
> clients, and perhaps direct others to factor only up to 64 bits unless
> there aren't enough SSE2 clients to handle the over 64 bit work (or if the
> owner of a machine asks for over 64 bit work).

Umm. Last time I checked, it seemed to be a waste of an SSE2 system to be 
running trial factoring ... the LL testing performance is so good that they 
really should be doing that.

If you calculate 
(P90 cpu years/week factoring)/(P90 cpu years/week LL testing)
then I think you'll find PII/66 MHz FSB & PPGA Celeron systems are the 
"best" trial factoring systems.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: GIMPS Forum

2003-01-26 Thread Brian J. Beesley
On Sunday 26 January 2003 06:11, Rick Pali wrote:
> [... snip ...]
> that *everything* on the site is copyright by the owner. No exception is
> made for the forums. They even go so far as do reject liability for what
> people write, but seem to claim ownership non-the-less.

IANAL but I don't think the combination of ownership & disclaimer would 
convince a court. If you claim you own the content then you become liable for 
legal action in the event that someone posts defamatory or illegal content, 
or breaches someone else's copyright by posting copyrighted material without 
proper consent.

IASNAL but I think the correct thing to do on a forum - unless the contents 
are _strictly_ moderated _before_ being posted in public - is to have each 
individual author retain copyright of his/her contributions. It's then up to 
each contributor to take action for any breach of copyright if and when they 
see fit. Obviously the act of posting to a forum gives the forum operator the 
right to make the content available to the public.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Poaching -- Discouragement thereof

2003-01-25 Thread Brian J. Beesley
On Saturday 25 January 2003 02:07, John R Pierce wrote:
> > But, no, you won't be able to complete a 10M on a P100 ;-)
>
> my slowest machine still on primenet is a p150 that has 60 days to finish
> 14581247, its been working on it for about 300 days now, 24/7, with nearly
> zero downtime.  2.22 seconds per iteration, yikes.
>
> I probably should retire this box after it completes this one, its still
> running v16 :D

Obviously if such a change were made one would expect a "period of grace" to 
accommodate the completion of assignments already started.

On Saturday 25 January 2003 00:42, Nathan Russell wrote:
>
> Does this apply to 10M assignments?

I don't see why not.
>
> The machine I used until earlier this month, a P3-600, couldn't do those in
> much under 6 months, and some machines which were sold new around 2000 are
> unable to do them in a year.
>
Yes. But given that there is plenty of work left which can usefully be run on 
systems a lot slower than P3-600, and that the fastest PC systems currently 
available can run a 10M digit range LL test in about 4 weeks, I'm not sure it 
is sensible to be running 10M digit assignments on P3-600s any more.

On Saturday 25 January 2003 00:39, Mikus Grinbergs wrote:
> [... snip ...]
> What I am saying is that having an assignment expire after a year
> does not get at the root of the problem.  Even if an assignee could
> perform the work in 15 days start-to-finish, a poacher with a Cray
> might decide to intervene anyway.

But in my experience the majority of poaching is connected with running tests 
on the lowest outstanding exponents irrespective of the fact they're assigned 
to someone else.
>
> My suggestion is that in order to receive "credit" for their work,
> everybody MUST "register" what they are doing.

Sure. But does this address the problem?

> And the registration
> process must refuse to give out duplicate assignments.

I wasn't aware that it did. But what is the objection to having both LL test 
and double check for a particular exponent assigned simultaneously? If we're 
done looking for factors, we need the results of both runs eventually.

BTW what about another problem I have come across on several occasions, 
namely "reverse poaching"? This is when I have properly got an assignment 
which someone else has let expire, but the original assignee reports a result 
whilst I'm working on it?

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Communication between GIMPS Forum and Mersenne mailing list

2003-01-25 Thread Brian J. Beesley
On Saturday 25 January 2003 05:38, Michael Vang wrote:
>
> Well, to be honest, not much more can be done... As it is now, we have
> several mechanisms in place to enable people with dialup access the
> ability to log on and get done right quick...

What about posting (a digest of) forum messages on the list, a la SourceForge?
>
> 1) There are no heavy graphics usage... (If I were paying for access I'd
> have graphics turned off anyways!)
> 2) Everything is GZIP compressed...
> 3) You can have email notifications of new posts...
>
> I was stuck on dialup for a week recently and was able to keep up with
> the forum with just 5 minutes of reading a day... And I don't read all
> that fast... And it was a 33.6 connection...
>
I find the major problem is the awkwardness of going on & offline when 
composing contributions, especially replies.

Having been an Internet user for > 20 years, I think "store & forward" rather 
than "instant messaging". That's my problem, not yours.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Poaching -- Discouragement thereof

2003-01-24 Thread Brian J. Beesley
On Friday 24 January 2003 02:27, Richard Woods wrote:
>
> Let's put it this way:  Maybe you don't give a fig for fame, but
> some of the rest of us do.  A chance at real, honest-to-gosh
> mathematical fame has a value not measurable in CPU years, but
> poaching steals that.
>
So what we want is a declaration that in the event of a prime being found, 
kudos etc. goes to the official owner of the assignment, even if a poacher 
finishes first.

The only problem here is that I could make it almost certain that I would get 
the kudos for a prime discovery by grabbing a _very_ large batch of 
assignments, checking them in regularly but not actually doing any work on 
any of them, until a poacher finds the prime for me ... but that's cheating, 
too.

I think perhaps what may be needed is a new "rule" that users who don't 
complete assignments in a reasonable period of time (say 1 year?) should lose 
the right to the assignment, even if they do check in regularly. This should 
discourage poaching by removing the motive, and also improve the quality of 
the work submitted - "random glitches" do occur; with run times much over one 
year, I would think it fairly likely that a "glitch" would lose you the 
chance of a prime discovery even if you had the right assignment. (What a 
sickener that would be!)
>
> > I might also point out that there is sufficient information in the
> > other reports (particularly the hrf3 & lucas_v database files) to
> > enable the poachers to be identified,
>
> Better read my proposal again.  Its intent was to _PREVENT_ (not
> perfectly, but to some extent) poaching, not identify it after-the-fact.

Police forces can't _PREVENT_ crime, but they can discourage it. If you can 
identify the poachers, then you could ask them (politely or otherwise) to 
desist, or "name & shame" them.
>
> BTW, exactly which data fields in either the HRF3.TXT or LUCAS_V.TXT
> file provide information about currently-assigned, in-progress,
> incomplete assignments (which are the poachable ones)? 

The hrf3 & lucas_v database files identify those who have submitted results 
for each exponent (except when a factor has been found). So if you have had 
an assignment poached ...

As you correctly point out, they don't contain direct pointers to poachable 
assignments - though, with a list of primes, a list of exponents in the 
factors database, a list of exponents in hrf3 & a list of exponents in 
lucas_v, you can fairly easily derive a list of the lowest exponents which 
are short of a LL test or double-check - these are in all probability 
assigned & therefore poachable.
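That derivation is essentially a few set differences. The toy sketch below is purely illustrative: the exponent sets are made up, and the real parsing of the hrf3/lucas_v database files is not shown:

```python
def candidate_exponents(all_primes, factored, ll_tested, double_checked, n=10):
    """Hypothetical sketch: find the lowest exponents still short of an LL
    test or a double-check, purely by set difference over snapshots of the
    published databases. These are the exponents most likely to be
    currently assigned, hence "poachable"."""
    needs_work = set(all_primes) - set(factored) - set(double_checked)
    needs_first_test = needs_work - set(ll_tested)       # no LL result yet
    needs_dc = needs_work & set(ll_tested)               # one result, no DC
    return sorted(needs_first_test)[:n], sorted(needs_dc)[:n]

first, dc = candidate_exponents(
    all_primes={11, 13, 17, 19, 23},
    factored={11, 23},
    ll_tested={13, 17},
    double_checked={17},
)
print(first, dc)  # prints [19] [13]
```

The point stands that nothing in the public files names the current assignee; the lists above only say which exponents are probably checked out to someone.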

> I asked this
> in the GIMPS forum, but haven't seen any answer there yet.  So will
> you please point out what I overlooked?

Sorry, I don't read the forum. It's inconvenient & expensive for those of us 
that have pay-as-you-go dialup access; whilst I'm at work I simply don't have 
the time to mess with such things. There is no cable service within 40 miles 
of me, the nearest ADSL-enabled exchange is about the same distance away, 
satellite "broadband" is ferociously expensive & still depends on a dial-up 
line for outbound data; the density of population round here is such that 
there's almost no chance of a wireless mesh working, either. 

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Poaching -- Discouragement thereof

2003-01-23 Thread Brian J. Beesley
On Wednesday 22 January 2003 22:50, Richard Woods wrote:
> Here's what I've just posted in the GIMPS Forum.
>
> - - -
>
> _IF_ PrimeNet has automatic time limits on assignments, ordinarily
> requiring no manual intervention to expire assignments or re-assign
> them, then why would any GIMPS participant, other than a system
> administrator or a would-be poacher, need to know someone else's:
>
> (a) current iteration,
>
> (b) days-to-go,
>
> (c) days-to-expire, or
>
> (d) last date-updated?
>
> If there's no non-poaching non-administrating user's need-to-know for
> those items, then just stop including them in public reports. Include
> them only in administrative reports and private password-requiring
> individual reports.
>
> That would deny target-selecting information to would-be poachers,
> right?

Sure. So would eliminating the report altogether. 

I detect righteous indignation here. Might I respectfully point out that, if 
you stick to "LL test" assignments, it makes practically no difference 
whether you get "poached", since there will eventually have to be a 
double-check anyway.

I might also point out that there is sufficient information in the other 
reports (particularly the hrf3 & lucas_v database files) to enable the 
poachers to be identified.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Top 1000 & connection to server

2003-01-23 Thread Brian J. Beesley
On Wednesday 22 January 2003 07:57, Denis Cazor wrote:
> Hello,
>
> for my part, I looked for my place in the top 1000 list
> - on www.mersenne.org /top.html my place is 388 this week with
> 100 LL tests.
> - on mersenne.org/ips/topproduccers.shtml (updated hourly)
> I found to be at the 26444 place, credited with only one LL test.
> (difficult to find my place :-).
>
> What are the different strategies ?

top.html is George's list, which includes manual submissions but does not 
include any results for exponents where factors have been found.

topproducers.shtml is based on PrimeNet automatic notification for results 
relating to assignments issued by PrimeNet.
>
> Another problem is when I connect to the server. My computer is sending
> (at each connection) all the results I found in more than one year,
> instead of the last one.
>
> How can I reset the list after a connection ?

This _sounds_ like you keep turning on "Use PrimeNet" in the Test/PrimeNet 
menu, then turn it off again.

What I would suggest is:

(1) rename results.txt to something else (results.old seems reasonable)

(2) turn on "Use PrimeNet" (if it is off at the moment) & leave it turned on

(3) if you have a dial-up connection & don't want automatic dial-out whenever 
the client wants to talk to the server, set "Do not contact PrimeNet server 
automatically" in Advanced/Manual Communication. Set "Contact now" to 
manually connect, send whatever results are available & collect more 
assignments if necessary.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Problem of the year(s)

2003-01-05 Thread Brian J. Beesley
On Saturday 04 January 2003 04:08, [EMAIL PROTECTED] wrote:
>  A Mersenne number M_p = 2^p - 1, where p is prime and p < 1000,
> has a prime divisor q with q == 1 (mod 2002) and q == -1 (mod 2001)
> [== denotes congruent].  Find q mod 2003.

This problem is ill-defined. There is a considerable number of primes p < 1000 
for which not all factors of 2^p-1 are known; indeed there seems to be a set 
of three such p for which _no_ factors are (currently) known (809, 971, 997).

I don't see any reason why any of the missing factors should not have the 
form stated in the problem.

Therefore, even if there is a unique solution based on the currently known 
factors, this may not remain so as new factors are discovered.

For those of you who want to persevere with the problem as stated on the 
basis of currently known factors: hint, don't forget the cofactor when the 
published prime factors are divided out; sometimes this is known to be prime.
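For anyone attacking the problem, the congruence filter itself is trivial to code. Note, in line with the objection above, that the two congruences pin q down only modulo 2001*2002 = 4006002, and 2003 does not divide that modulus, so q mod 2003 cannot be read off from the congruences alone: the actual factor must still be located in the published factor tables. A sketch:

```python
def satisfies(q):
    """Does a candidate factor q meet both congruence conditions?"""
    return q % 2002 == 1 and q % 2001 == 2000  # 2000 == -1 (mod 2001)

# Smallest positive solution of the combined congruences, by brute force
# over the arithmetic progression q == 1 (mod 2002):
residue = next(r for r in range(1, 2001 * 2002, 2002) if r % 2001 == 2000)
print(residue)  # 4001999
```

Any qualifying factor q is congruent to 4001999 modulo 4006002; scanning the known-factor tables with `satisfies` narrows the search to those candidates.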

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Has something gone wrong with the mersenne.org server? (yes, again)

2002-12-08 Thread Brian J. Beesley
On Saturday 07 December 2002 23:45, Barry Stokes wrote:
>
> Tried to get to my individual account report again, and this time was
> greeted with this:
>
> "Insufficient system resources exist to complete the requested service. "
>
> Anyone else getting the same?

Yes. Around 0700 GMT yesterday (7th) I found I couldn't check in results, 
then when I tried to check my personal account I was getting a "Couldn't load 
DLL" problem. Later in the day this changed into "Insufficent resources".

The general status reports still work but seem to be frozen at 0500 GMT 
yesterday.

My guess would be that some clown has successfully executed a DoS attack but 
an underlying problem, like a memory leak or disk space exhaustion, could 
possibly lead to similar problems.

Guess we have to live with it until Brad gets in to work tomorrow.

Regards
Brian Beesley

> Jacquin's Postulate on Democratic Government:
>   No man's life, liberty, or property are safe while the
>   legislature is in session.

Or afterwards, if they legislate a mechanism for enforcement.
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-12-05 Thread Brian J. Beesley
On Wednesday 04 December 2002 21:46, Daran wrote:
> [... snip ...]
> > ...though I think there needs to be a
> > careful analysis as to what the extra computation time for actual E
> > values might be...
>
> I agree.  My tests have been limited to exponents in the 8.1M range, for no
> particular reason than those are the ones I am doing.

Well, you seem to have more experimental evidence than anyone else.

As for the theory - whilst there is adequate theoretical modelling of the 
expected distributions of the largest and second-largest factors of arbitrary 
numbers, I couldn't find much in the literature which would help predict how 
many extra factors you would expect to find with different values of E.

There is obviously a tradeoff here: one can increase B2 while keeping E simple, 
or increase E and compensate for the increased run time by lowering B2. However 
it does seem to be obvious that increasing E always has to be paid for in 
increased memory requirements.

For exponents around 8M, this is not a particular issue. However there is a 
real, practical constraint so far as Prime95/mprime is concerned - the entire 
_virtual_ address space is limited to 4 GBytes by 32-bit addressing, and 
the OS kernel claims some (usually half) of this, so that the total memory 
usable by a single process is limited to 2 GBytes. (There is a "big memory" 
variant of the linux kernel which expands this to 3 GBytes, but the point 
still stands).

Since, in my practical experience, a 17M exponent will quite happily use ~ 
800 MBytes in P-1 stage 2, 32-bit addressing may well be a limiting 
factor within the exponent range covered by current versions of 
Prime95/mprime.
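As a rough feel for the numbers: if each stage 2 temporary is one full FFT-sized residue, the count that fits in a given memory allowance is easy to estimate. The storage constant below (one byte per exponent bit, to allow for FFT padding) is an assumption for illustration, not Prime95's actual internal figure:

```python
def stage2_temporaries(exponent, mem_mbytes, bytes_per_bit=1.0):
    """Rough count of P-1 stage 2 temporaries fitting in mem_mbytes.
    Assumes each temporary is one full residue stored at roughly
    bytes_per_bit bytes per exponent bit (an assumed padding factor)."""
    residue_bytes = exponent * bytes_per_bit
    return int(mem_mbytes * 2**20 // residue_bytes)

# The 17M-exponent, ~800 MByte case mentioned above:
print(stage2_temporaries(17_000_000, 800))  # 49
```

Under this model the temporary count, and hence the practical Suyama power E, shrinks linearly as exponents grow, which is why the 2-3 GByte per-process ceiling starts to bite well before physical RAM does.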

George - is there a "sanity check" on the memory constraints?
>
> > If we _don't_ have to worry about memory, at some point it becomes
> > cost-effective to run ECM with small limits instead of P-1 with much
> > larger limits. And ECM can "easily" dig out some factors which are more
> > or less inaccessible with P-1.
>
> I was under the impression the ECM was only practical for small exponents
> well below the current DC range.

ECM stage 2 quickly becomes impractical with larger exponents because of the 
memory requirement. ECM stage 1 is not particularly heavy on memory. Running 
stage 1 only with small limits on DC sized exponents is feasible ... it's 
just a question of whether the extra computation costs would be justified by 
the discovery of factors which were inaccessible to trial factoring or P-1.
>
> > [... snip ... I don't disagree but the basic argument is the same as
> > above]
> >
> > > In 2 out of the 29 stage 2 factors I have found so far using E=4, k has
> > > not been smooth to B2.  This suggests that increasing E from 4 to 12
> > > could yield about 20% more factors.  I've done a few tests with a
> > > modified and recompiled client, which suggests that it would worth it
> > > even if E=12 yielded as few as 10% more factors, though I need to
> > > investigate this further.
> >
> > That's a very small sample.
>
> It's the only sample I have.  I'm trying to increase it by doing some P-1s
> on exponents in the 1.2M range which have only been tested to B1=B2=1000.
>
> How many of these were found during stage 2?  (If half your factors were
> found during P-1 stage 2, and half of those used E=4 or greater, then your
> single 'surprising' factor would not be out of line with my two.)

Well, actually I was doing the test in several steps, with gradually 
increasing B1 then gradually increasing B2 - the cost of the GCDs with small 
exponents is very small so it's worth checking fairly frequently to see if a 
factor is "available".

I don't have the full data to hand but I do have some of it. The distribution 
of 22 factors found at various limits was as follows:

stage 1  B1 = 50                 1
stage 1  B1 = 100                1
stage 2  B1 = 100  B2 = 400      4
stage 2  B1 = 100  B2 = 1000     5
stage 2  B1 = 100  B2 = 2500    11

Some "easier" factors were in all probability "missed" because someone had 
found them by running P-1 with smaller limits before I started.
>
> I have a total of 57 factors, including one found earlier today.  A few
> were by TFs, 30 in P-1 stage 2 (including today's) and the rest in stage 1.

OK. Actually for about the last three weeks I've been running P-1 with 
"standard limits" on some exponents in the range 2M-6M (those exponents where 
all the entries in lucas_v have the same user ID, with the exception of a 
very few where P-1 was already completed to reasonable limits).

The system I'm using is configured with mem=224 MBytes (about as much as I 
dare on a 512 MBytes dual-processor system). I'm getting E=4 logged fairly 
consistently.

The results so far are:

No factor found, 130
Factor found in stage 1, 2
Factor found in stage 2, 6 - all "smooth" to B limits used.

One of the factors found in stage 1 is _very_ interesting:

6807510023694431 is a factor of M(59937

Re: Mersenne: P-1 and non k-smooth factors

2002-12-04 Thread Brian J. Beesley
On Tuesday 03 December 2002 22:31, Daran wrote:
> [... snip ...]
> For clarity, let's write mD as x, so that for a Suyama power E, the
> exponent (x^E - d^E) is thrown into the mix when either x-d or x+d is prime
> in [B1...B2], (and only once if both are prime).  This works because
> (provide E is even) x^E - d^E = (x-d)*(x+d)*C where C is a sum of higher
> order terms. The benefit of prime-pairing arises when E=2.  The cost of
> higher E is AFAICS linear in multiplications.  The benefit of higher E
> comes from any additional factors thrown into the mix by C.  This benefit
> is greatest if C has factors slightly > B2
>
> For E=4, C = (x^2 + d^2)
> For E=6, C = (x^4 + x^2d^2 + d^4) = (x^2 + xd + d^2)*(x^2 - xd + d^2)
> For E=8, C = (x^2 + d^2)*(x^4 + d^4)
>
> I can't think of any reason why either of the two algebraic factors of C
> when E is 6 should be any better or worse than the single irreducible
> factor when E=4.  And there are two of them.  This suggests to me that E=6
> should be about twice as effective as E=4 in providing additional factors,
> at about twice the cost (over and above the 'cost' of E=2).  If this is
> correct, then it will always be worth going to E=6, whenever it is worth
> going to E=4, (provided there is sufficient memory to do so).
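The factorizations quoted above are straightforward to verify. Since a polynomial identity that holds at sufficiently many integer points holds identically, a purely numerical check suffices:

```python
def check_suyama_identities(trials=30):
    """Numerically confirm the algebraic factorizations quoted above:
    x^E - d^E = (x - d)(x + d) * C for E = 4, 6, 8, with C as stated."""
    for x in range(2, 2 + trials):
        for d in range(1, x):
            base = (x - d) * (x + d)
            assert x**4 - d**4 == base * (x**2 + d**2)
            assert x**6 - d**6 == base * (x**2 + x*d + d**2) * (x**2 - x*d + d**2)
            assert x**8 - d**8 == base * (x**2 + d**2) * (x**4 + d**4)
    return True

print(check_suyama_identities())  # True
```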

Let's see if I get this right.

Overwhelmingly, the factors produced by P-1 factoring come out because they 
are smooth to the limits selected. The fraction that comes out because of the 
extension is << 10%. To double that fraction (i.e. to increase the total 
number of factors found by < 10%) we have to double the stage 2 run time?
Doesn't sound that great to me, even without worrying about memory 
considerations.

If we're talking about the _extra_ computation time in stage 2 then obviously 
the suggestion makes a lot more sense - though I think there needs to be a 
careful analysis as to what the extra computation time for actual E values 
might be (as opposed to a rather simplistic linear model, which fails to take 
into account that some of the "temporaries" needed for small E probably drop 
out pretty well "for free").

If we _don't_ have to worry about memory, at some point it becomes 
cost-effective to run ECM with small limits instead of P-1 with much larger 
limits. And ECM can "easily" dig out some factors which are more or less 
inaccessible with P-1.
>
[... snip ... I don't disagree but the basic argument is the same as above]
>
> In 2 out of the 29 stage 2 factors I have found so far using E=4, k has not
> been smooth to B2.  This suggests that increasing E from 4 to 12 could
> yield about 20% more factors.  I've done a few tests with a modified and
> recompiled client, which suggests that it would worth it even if E=12
> yielded as few as 10% more factors, though I need to investigate this
> further.

That's a very small sample. 

Some time ago I found a considerable number of first factors for exponents in 
the range 100,000-150,000 using P-1 with limits up to B1=10^6, B2=25x10^6. 
The results.txt file doesn't record the E value used; though I did have tons 
of memory available (in relation to the exponent size) and seem to remember 
something about wondering what E=12 meant in the console output. Or maybe I'm 
confusing this with recollections about running ECM?

My records show 67 factors found; I mailed George on one occasion because P-1 
found a factor which surprised me, but I don't think it happened twice.

Incidentally I found a factor only yesterday using P-1 on a production system:

[Tue Dec  3 07:54:38 2002]
P-1 found a factor in stage #2, B1=22, B2=5665000.
UID: beejaybee/Procyon, M17359099 has a factor: 312980494172935109497751

Again no mention of E. If it helps, this system was set up to use 384 MBytes 
memory. In any case this should have come out without extensions; B1=65267 
B2=3077953 is sufficient to find the factor with the "standard" stage 2 
algorithm.
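The claim that B1=65267, B2=3077953 would suffice can be checked directly from the factor's structure: every factor q of M(p) has the form 2kp+1, and plain stage 2 finds q exactly when k is B1-smooth apart from one prime up to B2. The sketch below recomputes k and trial-divides it; the smoothness bound passed in is simply taken from the message above:

```python
def verify_p1_factor(q, p, b2_bound):
    """Check a reported P-1 factor: q must divide 2^p - 1, and
    k = (q - 1) / (2p) must have no prime factor above b2_bound for
    stage 2 with that B2 to find it (illustrative check only)."""
    assert pow(2, p, q) == 1, "q does not divide 2^p - 1"
    k, rem = divmod(q - 1, 2 * p)
    assert rem == 0  # every factor of M(p) is of the form 2kp + 1
    factors, n, f = [], k, 2  # trial-divide k to list its prime factors
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)
    return all(f <= b2_bound for f in factors), factors

ok, factors = verify_p1_factor(312980494172935109497751, 17359099, 3077953)
print(ok, factors)
```

The three-argument `pow` makes the divisibility test cheap even for a 17M-bit Mersenne number, since only the residue mod q is ever held.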

Would there be any means of retrieving actual factors found using P-1 and the 
E values used from the server logs? The problem otherwise is that, so far as 
the database is concerned, once a factor is found, nobody cares much how!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: christmas computer system?

2002-11-25 Thread Brian J. Beesley
On Monday 25 November 2002 12:36, you wrote:
>
> One should basically not use a CD-R/CD-RW as a general CD reader, since it
> usually has way lower MTBF than a normal CD/DVD reader, and is more
> expensive. Ie. it breaks a lot earlier if you use it a lot, and it's more
> expensive to replace :-)

Did you check out the manufacturer's claimed MTBF? A sample I checked out 
showed no significant difference between CD, CD-R & CD-RW drives.

I've _never_ had any problems with CD-RW drives on linux systems, except for 
an old model which refuses to write some high-speed CD-R media above 2x, 
though it appears to write medium-speed media at its limit of 4x quite 
happily.

On Windows systems, woe & despondency. Basically I gave up Roxio as a total 
disaster. Nero is better, but every so often after writing a CD it refuses to 
read anything in the same drive until system reboot. Never had any genuine 
hardware problems, though.

Playing a lens cleaning CD occasionally does no harm. Most PCs tend to suck 
air and dust in through gaps in the front panel, this can cause a buildup of 
crud on the RW head.

Regards
Brian Beesley



Re: Mersenne: christmas computer system?

2002-11-24 Thread Brian J. Beesley
On Sunday 24 November 2002 18:47, John R Pierce wrote:
>
> my shopping list for a reasonably priced high quality P4 right now is, with
> prices from my local cloneshop (not the cheapest place, but good
> service)...
>
> $213  Intel Retail P4-2.4"B" (these have the 533MHz bus)

2.53B should be very little more expensive. Last time I checked the first big 
price step was still between 2.5A/2.53B and 2.6A/2.66B.

The other point here is that P4s don't seem to benefit much (if at all) from 
533MHz FSB except with 533 MHz RDRAM. Even at 400 MHz, DDR throttles the 
CPU/memory bus to the point where the CPU/chipset clock rate is unimportant.

Consider spending a bit more on a Zalman Flower Cu/Al S478 heatsink & 
variable speed fan. Even with the fan turned down to minimum, at which speed 
it is truly inaudible, it's at least as effective as the "retail box" HSF.

> $133  Asus P4PE/L (i845pe chip, integrated ethernet and audio)
> $158  Samsung 512MB PC2700 DDR SDRAM
> $163  Asus V8420 GF4 Ti 4200, 128MB

Personally I _love_ the Matrox G400/G450/G550 video card. Cheaper, 
unsurpassed 2D display, drivers superbly stable. But then I'm not a games 
freak.

> $113  Seagate Baracuda ATA IV 80GB disk (super quiet, very fast)
>  $48  Toshiba 16X DVD-ROM

I could easily do without the DVD drive.

>  $83  Teac CDW540E 40/12/48 cd-rw burner

Seems a bit expensive. There's little point in going for superfast 
write/rewrite performance; either you can't get media much faster than 24xW / 
8xRW, or you can't justify the price premium.

>   $8  Mitsumi floppy

If I can boot from a CD, I don't need a floppy drive any more. Spend the 
money on a couple of round IDE cables instead. (Neater, less air flow 
obstruction)

>  $78  Enlight mid-tower

IMHO money spent on the case is money well spent. On my previous experience 
with Enlight cases, if you buy one you should also buy a box of sticking 
plasters. I always seemed to end up with shredded fingers whenever I worked 
inside them. There are cheaper, nastier cases with even sharper edges, but I 
don't recommend them, either.

These days I use Coolermaster ATC200 (comes fitted with 4 quiet 8 cm fans) 
and Enermax 350W PSU (dual fan; thermally controlled inlet, manual variable 
speed exhaust). I don't know the current US price but the case & PSU come to 
around £180 retail here (so probably $180ish?). For that you get an all-metal 
case with superb build quality, excellent cooling with barely audible fan 
roar and really smart styling. It's a normal-size mini-tower case but will 
accommodate _at least_ 6 3.5" drives without touching the 3 5.25" bays.

A good case will survive a few PC generations.

>  $23  Microsoft Intellimouse Optical OEM
>   $9  Mitsumi generic 104 keyboard

I _hate_ membrane keyboards & would not pay even 9 cents for any of them...
If you can, get a genuine late-1980s IBM PS/2 keyboard from a scrapyard / 
car boot sale rather than buy a new one. It will be more reliable as well as 
much more comfortable to use. I think Cherry still make a proper mechanical 
switch keyboard with a decent action, but it's _very_ expensive (well over 
£50). 

Regards
Brian Beesley



Re: Mersenne: christmas computer system?

2002-11-24 Thread Brian J. Beesley
On Sunday 24 November 2002 15:55, you wrote:
> I'm giving my brother's family a new computer for christmas.
> He'll buy it from a local (to him) 'white box' pc store and I'll
> pay for it.  I am a little concerned about performance because
> the pc will probably be running GIMPS and I'd like to get my
> money's worth.  It's easy to request a P4 in the 2.0-2.5 range
> and 256M-512M of memory but I've read of the bottleneck caused
> by slow memory and the bus between memory and cpu and I don't
> know what to specify or how to evaluate components in this area
> or measure performance after the pc is built.
>
> Any comments or suggestions?

Well - one of the key things with P4 systems is that the processor needs to 
be properly cooled. If it isn't, the reliability will probably be OK, but the 
performance will suffer as the processor goes into thermal throttle mode.

Unfortunately many "fashion" systems come in unreasonably small cases with 
grossly inadequate ventilation, and an underspecified heat sink. Most users 
never notice that the system is throttled, because even at quarter speed, a 2 
GHz P4 is more than powerful enough for everyday tasks like word processing, 
even using M$'s mega-bloated packages. Also, shifting sufficient air to cool 
a P4 (or a high-end Athlon) properly means either a large, heavy and 
expensive heat sink, or a noisy fan.

Basically my advice would be to go for build quality rather than raw 
performance. If the thing is properly built, you will at least be able to do 
something about any deficiencies. 

I would reject out of hand any package containing peripherals which are 
designed to work only with Windows, or which has Windows pre-installed with 
only a "system recovery" CD rather than proper installation media. I'd far 
prefer to take a system without Windows bundled at all, but M$ tries to make 
life miserable for system builders who try to avoid the "Microsoft tax".
>
> I want them to have a fast machine but not on the bleeding
> (expensive) edge.

The problem here is that most "big name" systems builders keep changing the 
important components (motherboard, graphics card etc) to suit their profit 
margins and stock availability. You will find apparently identical systems, 
with the same "top line" specification, are quite different inside and may 
have very different performance.

So ... look for something that at least tells you what the mainboard chipset 
is. This will give you a good indication of the performance _potential_ and 
upgradeability of the system. 

Personally - although this is distinctly unfashionable - I far prefer Rambus 
memory - so far as GIMPS is concerned, a 2.53 GHz P4 using PC1066 Rambus 
memory (implying the i850E chipset) will outperform _any_ system up to 2.8 
GHz using DDR memory. The point here is that the total memory bandwidth of 
the Rambus system is 4200 Mbytes/sec, whereas even using 400 MHz DDR memory 
(just as rare & expensive as PC1066 Rambus memory) you are only going to get 
3200 MBytes/sec.
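For reference, these peak-bandwidth figures are just bus width times transfer rate times channel count; a throwaway sketch (the "4200" quoted for dual-channel PC1066 is the nominal rating, the exact product being 4264):

```python
def bandwidth_mb_s(bus_bytes: int, mt_per_s: int, channels: int = 1) -> int:
    """Peak memory bandwidth = bus width (bytes) x transfers/sec x channels."""
    return bus_bytes * mt_per_s * channels

# Dual-channel PC1066 RDRAM: 2-byte (16-bit) channels at 1066 MT/s.
assert bandwidth_mb_s(2, 1066, channels=2) == 4264   # the "4200 MB/s" figure
# Single-channel DDR400 (PC3200): 8-byte (64-bit) bus at 400 MT/s.
assert bandwidth_mb_s(8, 400) == 3200
```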
(B
(BIf you're forced to a DDR based system, then look for the Intel 845PE 
(Bchipset. (This also supports the new P4 processors up to at least 3.06 GHz 
(Bwith hyperthreading support enabled). With chipsets using DDR, there is 
(Bpotential for inefficiency by using slow memory - e.g. KT400 chipset should 
(Balways be used with 400 MHz memory (PC3200), KT333 with 333 MHz memory 
(B(PC2700) etc. This is because the "gearchange" caused by using slower memory 
(Bthan the chipset supports kills performance. Installing faster memory than 
(Bthe chipset supports is OK (except that Rambus systems supporting only PC800 
(BRDRAM do _not_ work with PC1066 memory) but not often done for reasons of 
(Bcost.
(B
(BThe problem here is that few "big name" manufacturers will tell you the 
(Bspecification of the components have gone into a system, and even fewer sales 
(Bassistants in the retail stores where they're sold will understand 
(Bintelligent questioning. (They _may_ understand raw MHz, disk capacity, 
(Bmonitor size etc., but that's about the limit of what you should expect.)
(B
(BIf you really want performance, and are prepared to cut back on things you 
(Bdon't need (superfluous peripherals, bundled software etc), then it is 
(Bpractically essential to "roll your own", or (at a premium) go to a small, 
(Bspecialist systems builder armed with a specification. Building the thing 
(Byourself is satisfying, too.
(B>
(B> Other than GIMPS I'm sure my nephew will be the heaviest load
(B> when he plays games on the new machine.
(B
(BSo you want a darned good graphics card ... a weak graphics card will kill 
(Bgames performance much more than a poor mainboard/processor/memory 
(Bcombination. Many games players also insist on a good sound system.
(B>
(B> I think I should

Re: SV: SV: Mersenne: Drifting UP(!) in Top Producers ranking?

2002-11-23 Thread Brian J. Beesley
On Saturday 23 November 2002 02:41, Torben Schlüntz wrote:
> [... snip ...]
> Sorry Nathan. It is my fault you read  the IMHO paragraph in a wrong
> way. I meant I had that point of view UNTIL I discussed it.. As
> George argue:  Nobody would do LL if a succesful TF was rewarded the
> same - he is truly right.

From the point of view of the project, the objective is to find Mersenne 
primes. Finding factors, like completing LL tests returning a non-zero 
residual, only eliminates candidates.

However, from the point of view of league tables, it seems to make sense to 
reward effort expended (in good faith); otherwise there would be only two ranks 
in the table: those who have found a prime, and those who haven't!

> My goal is to get the succesful TF rewarded a bit higher. As it is now
> someone might skip the 57-65 range and only do the 66-bit part, thus
> missing factors and get fully rewarded for only doing half the work.

This is not a particularly effective cheat; you still end up having to do 
significantly more than half of the computational work. Is there any evidence 
that this may be happening? 

Does it make sense to impose a "penalty clause" i.e. if someone subsequently 
finds a factor in a range you claim to have sieved, you lose 10 times the 
credit you got for the assignment? N.B. There will be _occasional_ instances 
where an "honest" user misses a factor, possibly due to a program bug, 
possibly due to a hardware glitch.

> [... snip ...]
> Composite exponents was removed long before the project. Lucas must have
> known the exponent needed to be prime. I believe a Mersenne number has
> to have an exponent which is a positive integer?! The exponents above
> 79.300.000 are still candidates, though George has chosen to limit his
> program to this size and I think with very good reason.

Hmm. As it happens, one of my systems has just completed a double-check on 
exponent 67108763. This took just over a year on an Athlon XP1700 (well, 
actually it was started on a T'bird 1200). The fastest P4 system available 
today could have completed the run in ~3 months. The point is that running LL 
tests on exponents up to ~80 million is easily within the range of current 
hardware.

Personally I feel it is not sensible to expend much effort on extremely large 
exponents whilst there is so much work remaining to do on smaller ones. I 
justify running the DC on 67108763 as part of the QA effort.
>
> BTW, the list of found factors contains 2.500.000+ but the "top
> producers list" only contains 30.000- of these. GIMPS must be
> responsible for far more than only 30.000 factors. Any explanation for
> that?

Well, there are a lot of factors which can be found by algebraic methods 
rather than by direct computation: e.g. if p+1 is evenly divisible by 4, and  
p and 2p+1 are both prime, then 2^p-1 is divisible by 2p+1. Also, there are 
more efficient methods of finding _small_ factors (up to ~2^48) than 
individually sieving for each exponent.
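The algebraic case mentioned here is easy to verify by modular exponentiation: "p+1 evenly divisible by 4" means p ≡ 3 (mod 4), so q = 2p+1 ≡ 7 (mod 8), which makes 2 a quadratic residue mod q, and by Euler's criterion 2^p = 2^((q-1)/2) ≡ 1 (mod q). A quick sketch (helper names are mine):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def has_algebraic_factor(p: int) -> bool:
    """If p = 3 (mod 4) and both p and 2p+1 are prime,
    then 2p+1 divides the Mersenne number 2^p - 1."""
    return p % 4 == 3 and is_prime(p) and is_prime(2 * p + 1)

# Verify the divisibility for every qualifying exponent below 2000,
# e.g. p=11: 23 divides M11 = 2047 = 23 * 89.
for p in range(3, 2000):
    if has_algebraic_factor(p):
        assert pow(2, p, 2 * p + 1) == 1  # 2p+1 | 2^p - 1
```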

Regards
Brian Beesley



Re: Mersenne: Poach?

2002-11-19 Thread Brian J. Beesley
On Tuesday 19 November 2002 16:21, George Woltman wrote:
> At 01:30 PM 11/19/2002 +0100, [EMAIL PROTECTED] wrote:
> >Last week this was a 1st test assignment, now it's a double check?
> >Unfortunately there was a server sync in the meantime, so I can't check
> > the cleared.txt. But I find in hrf3.txt:
> >
> >11976787,berra,WV1
>
> The berra test had errors and the exponent was re-released for first-time
> testing.  I did this by manually setting the exponent's state.  My guess is
> the database sync caused the server to once again notice it has a result
> for the exponent and now flags it as a double-check.

Surely this really doesn't matter. If the first test had errors, there is 
still a reasonable chance of discovering a prime - at any rate, as good as 
running a LL test on a 17M range exponent. Also, PrimeNet will still credit 
you; there is no distinction between credit for a first LL test and credit 
for a double-check.

Regards
Brian Beesley



Re: Mersenne: CPU type wrong on account report

2002-11-15 Thread Brian J. Beesley
On Friday 15 November 2002 00:23, Ryan Malayter wrote:
> Does anyone else have their P4 and newer Xeon machines show up as
> "Unspecified Type" on the Individual Account Report page? Is this a
> common issue, or do I have something flaky in my local.ini?

Yes, there's something odd -

I have two P4s contributing to PrimeNet. One shows up as a PIII, the other as 
"unspecified".

Another "unspecified" system I have is actually a Sun Sparc running Glucas, 
using PrimeNet's manual testing pages. I've no idea at all as to where the 
speed "1401" which shows up in my personal account report against this system 
came from.

Regards
Brian Beesley




Re: Mersenne: Request for help (especially ECM)

2002-11-12 Thread Brian J. Beesley
One more, this one is a much larger exponent.

The factor 17304916353938823097 of M111409 is found with 
sigma=8866098559252914, in stage 2 with B1 >= 4861 & B2 >= 343351.

I didn't bother finding the critical limit for finding the factor in stage 1 
as it would have taken a considerable amount of computation time.

Regards
Brian Beesley



Re: Mersenne: Update on my stability issues

2002-11-11 Thread Brian J. Beesley
On Monday 11 November 2002 22:28,  Gareth Randall <[EMAIL PROTECTED]> 
wrote:
>
> The front air intake vents on almost every PC case I have ever seen have
> been virtually *useless*. For some reason manufacturers continue to drill a
> few pathetically small holes in the steel sheet and call that an air duct.
> People then put case fans against these and try to suck against what is 90%
> metal sheet and wonder why not much happens.

Hmm. I know exactly what you mean, but they're not _all_ like that. Look at 
e.g. the Coolermaster range.
>
> What you should do is to take a saw or a drill and cut the whole circular
> section out! You may end up with sharp edges, and you need to take
> precautions against metal shavings. You may also need to drill some neat
> holes (e.g. 10mm diameter) in the plastic front bezel in order to provide
> an unimpeded air path.

Do the job _before_ you install components (or remove everything before 
commencing surgery). File down the inevitable sharp edges, finishing with 
fine emery cloth, or wet-and-dry paper. Thoroughly remove swarf right down to 
metal powder - personally I think it's best to wash out with a high-pressure 
hose, then allow to dry. 

At least think about installing a grille to protect fingers & the inevitable 
stray bits of cable from rotating fan blades.

It also helps - long term - to place a piece of open-cell foam, or a layer of 
filter material intended for use in a vacuum cleaner, in the inlet(s). The 
idea is that much of the dust drawn in will stick in this - where it can be 
got at easily - rather than end up getting stuck in heat sink fins etc. where 
it will have a serious effect. Suitable material can be cut with scissors & 
held in place by a grille.
>
> Nevertheless, when the procedure is done you should be able to hold a match
> in front of a machine with nothing other than the PSU fan and see the flame
> visibly sucked into the case.
>
> If your airflow can't do that, then any internal fans you deploy are going
> to be pulling on a vacuum (or rather, reduced pressure). If it can do that,
> then you are in a *much* better position to keep your number cruncher cool
> and reliable!

Some other points here:

(1) fans working in a badly restricted air flow are probably going to be 
noisier than they should be, as they will be running with stalled airflow 
over the blades, causing excessive turbulence. When the airflow is reasonably 
free, the air flow over the blades will be much less turbulent.

(2) It seems to make sense that the fans in the case (including those in the 
PSU) should be arranged so that roughly equal volumes are blown in and sucked 
out. Otherwise one or other of the fans will very likely be running with 
stalled fan blades. Also, obviously, it helps if the path between inlet and 
outlet passes things you want to cool down - there's not much point in having 
inlet air blown straight back out before it has a chance to warm up.

(3) Other things being equal, large fans are quieter than small ones with the 
same air flow, since the rotation speed is lower and there tends to be less 
turbulence. Two low-output fans are preferable to one high-output fan for the 
same reason. 

Regards
Brian Beesley




Re: Mersenne: Request for help (especially ECM)

2002-11-10 Thread Brian J. Beesley
On Sunday 10 November 2002 20:03, [EMAIL PROTECTED] wrote:
> "Brian J. Beesley" <[EMAIL PROTECTED]> wrote
> - Here's one example:
> -
> - With sigma=1459848859275459, Prime95 v22.12 finds the factor
> - 777288435261989969 of M1123:
> -
> - in stage 1 with B1 >= 535489
> - in stage 2 with B1 >= 38917 & B2 >= 534241

And here are some more

With sigma=7324432520873427, the factor 649412561933038085071 of M1621 is 
found
- in stage 1 with B1 >= 2546119
- in stage 2 with B1 >= 94727, B2 >= 2543311

With sigma=5643809308022499, the factor 838124596866091911697 of M1787 is 
found
- in stage 1 with B1 >= 378041
- in stage 2 with B1 >= 35543, B2 >= 378001

With sigma=6305161669623833, the factor 597702354293770769 of M1867 is found
- in stage 1 with B1 >= 258983
- in stage 2 with B1 >= 3061, B2 >= 258301

With sigma=5956836392428930, the factor 8142767081771726171 of P721 is found
- in stage 1 with B1 >= 54779
- in stage 2 with B1 >= 33487, B2 >= 54390
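Any claimed factor in a list like this can be verified in microseconds, since f divides 2^p - 1 exactly when 2^p ≡ 1 (mod f), and (reading "P721" as 2^721+1) f divides 2^p + 1 exactly when 2^p ≡ -1 (mod f). A sketch using small known cases; the large factors above can be checked with the same one-liners:

```python
def divides_mersenne(f: int, p: int) -> bool:
    """True iff f divides 2^p - 1; O(log p) via 3-argument pow."""
    return pow(2, p, f) == 1

def divides_plus(f: int, p: int) -> bool:
    """True iff f divides 2^p + 1 (the 'P' numbers such as P721)."""
    return pow(2, p, f) == f - 1

# Known small cases: 23 | M11 = 2047, and 11 | 2^5 + 1 = 33.
assert divides_mersenne(23, 11)
assert divides_plus(11, 5)

# The reported factors can be checked the same way, e.g.:
# divides_mersenne(649412561933038085071, 1621)
```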
>
>How does one map sigma to a curve (and initial point)?
> What is the range of sigma (it seems to go beyond 2^32)?

At worst, RTFS. My reason for finding the critical B values was in the hope 
that someone with a better understanding of the algorithms and/or an 
independent ECM factoring program would be able to confirm that these values 
make sense.
>
>The ECM tests should include a case where two primes are found at
> the same time during step 2, because the largest primes dividing the
> two group orders are equal.  [That is, the GCD will be composite.]
> This test may be hard to construct, however.

Actually, it's very easy.

The way I constructed the results submitted so far was to remove the known 
"target" factor from the low[mp].txt file & run a few curves with B1=10^5, B2 
automatic to find a sigma that "works". Not neccessarily the sigma that gives 
the lowest limits.

After receiving this message, I removed _all_ the known factors for P721. 
This was interesting, and indicates a bug, though it appears to be not very 
important:

with the same sigma & minimum B1 & B2 noted above, the composite factor 129 
(= 3 * 43) was found. I would have expected (at least) 
3*43*8142767081771726171

Placing the known factors 3 & 43 back into lowp.txt and repeating the same 
curve yielded the expected factor 8142767081771726171.
>
>   Either the ECM tests or the p-1 tests should include a case where
> the group order (or p-1) is divisible by a power of a moderate prime,
> such as 61^3 or 757^2 .

There are lots of known examples for P-1.

Regards
Brian Beesley



Re: Mersenne: Request for help (especially ECM)

2002-11-10 Thread Brian J. Beesley
On Saturday 09 November 2002 04:45, George Woltman wrote:
> A harder problem is finding some smooth ECM curves to test.  I do not
> have tools to compute group orders.  If someone can help by finding a
> couple of dozen smooth ECM test cases for exponents between 1000
> and 50, I would be most grateful.

Here's one example:

With sigma=1459848859275459, Prime95 v22.12 finds the factor 
777288435261989969 of M1123:

in stage 1 with B1 >= 535489
in stage 2 with B1 >= 38917 & B2 >= 534241

I'm not entirely sure why the B2 required to find the factor at the end of 
stage 2 is smaller than the B1 required to find it in stage 1. One of the 
improvements I guess.

Regards
Brian Beesley



Re: Mersenne: Request for help (especially ECM)

2002-11-10 Thread Brian J. Beesley
On Saturday 09 November 2002 04:45, you wrote:
>
> A harder problem is finding some smooth ECM curves to test.  I do not
> have tools to compute group orders.

Nor do I.

> If someone can help by finding a
> couple of dozen smooth ECM test cases for exponents between 1000
> and 50, I would be most grateful.
>
If you take some examples of known factors around 2^64 in size (say 19 or 20 
digits), you would _expect_ to be able to find some of these factors by 
running a few score "random" ECM curves with B1=100,000 & automatic B2.
With luck some might even drop out in stage 1.

These should yield your test cases - obviously what you need is a specific 
"sigma" which should yield a factor with given limits, and a specific "sigma" 
which shouldn't - and the second is all too easy to find!

Once we have some examples, a bit of experimenting with limits should furnish 
the critical limits for these particular examples. I believe that calculating 
the group order is rather time-consuming, even with the appropriate tools, so 
with reasonably small factors this experimental approach might not be too 
wasteful.

I wouldn't worry too much about covering the whole range of exponent sizes, 
or B limits - though those of us who have been running ECM can furnish 
working sigma values for specific factors we have found with large limits. 
The point with exponent sizes is that the code for the various FFT run 
lengths should already be "tested" through the "short LL residual" data set.
The problems that we want to test out are those specific to the ECM stage 1 & 
stage 2 algorithms (and the GCD, though that should be covered with P-1).

We should also be finding working examples for ECM on 2^n+1.

Does this sound reasonable? If so I could pick a few suitable factors & start 
trying to find sigma & B limits for test curves using ECM in v22.12.

Regards
Brian Beesley



Re: Mersenne: Bug in version 22.10

2002-11-06 Thread Brian J. Beesley
On Tuesday 05 November 2002 21:40, George Woltman wrote:
>
> I'd actually recommend not doing the P-1 again.  If you are using enough
> memory to run both P-1 stages, then the bug did not affect stage 1 but did
> affect stage 2.
>
> If you run only stage 1 of P-1, then the bug would cause no factors to be
> found.
> In this case, you might consider re-running P-1 again as Brian suggests
> (but only if you used 22.10 dated after Sept 28).
>
> >  I did this on my own system & have been rewarded with _two_ factors
> > found -
>
> This is way above expectations!   Nevertheless, I'm always happy to get the
> factors.

Sure, I was surprised too. Anyway this is what I got:

{system 1}
[Wed Oct 30 21:00:54 2002]
UID: beejaybee/slug1, M8589491 completed P-1, B1=4, B2=42, WY2: 
DE3C48D7
...
[Tue Nov  5 10:15:49 2002]
P-1 found a factor in stage #1, B1=4.
UID: beejaybee/slug1, M8589491 has a factor: 42333443925749970809

{system 2}
[Thu Oct 31 23:12:50 2002]
UID: beejaybee/caterpillar, M8564431 completed P-1, B1=45000, B2=618750, WY2: 
DDB16BFB
[...]
[Tue Nov  5 12:44:02 2002]
P-1 found a factor in stage #2, B1=45000, B2=618750.
UID: beejaybee/caterpillar, M8564431 has a factor: 9592239270614293063

There's no particular reason to suspect that a glitch on either system caused 
a factor to be missed on the first run. I did have a couple of problems with 
cabbage in the summer which turned out to be due to a UPS overload, but 
that's fixed now.

Regards
Brian Beesley



Re: Mersenne: Bug in version 22.10

2002-11-05 Thread Brian J. Beesley
Hi,

One thing you might consider - when you change to v22.11, check out your 
results file. If you have a P-1 run logged on an exponent you haven't yet 
started LL/DC testing, make it run the P-1 again (change the ,1 at the end of 
the assignment line in worktodo.ini to ,0). If you are already running the LL 
or DC test, things are not quite as straightforward; I'd recommend:

For DC tests, if less than half done, force P-1 to rerun (as above) else just 
let it run on.

For LL tests, if less than 1/3 done, force P-1 to rerun (as above). If 
between 1/3 and 2/3 done, change the test type from Test to DoubleCheck then 
force P-1 to rerun. (This makes the P-1 limits smaller - with a lower chance 
of finding a factor, but using less time). If more than 2/3 done, just let it 
run on.
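The ",1 to ,0" edit can be scripted if you have many pending assignments; a sketch assuming the v22-era worktodo.ini line format `Test=exponent,bits,flag`, where a trailing 1 means P-1 is considered done (the helper name is mine):

```python
import re

def force_p1_rerun(line: str) -> str:
    """Flip the trailing ',1' (P-1 already done) to ',0' on a
    worktodo.ini assignment line, leaving other lines untouched,
    e.g. 'Test=11976787,64,1' -> 'Test=11976787,64,0'."""
    return re.sub(r'^((?:Test|DoubleCheck)=\d+,\d+,)1\s*$', r'\g<1>0', line)

assert force_p1_rerun("Test=11976787,64,1") == "Test=11976787,64,0"
```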

I did this on my own system & have been rewarded with _two_ factors found - 
one on an unstarted DC assignment, one on a DC assignment which was about 40% 
of the way through - there are still a few repeat P-1s running, too.

Regards
Brian Beesley
...
On Tuesday 05 November 2002 05:19, George Woltman wrote:
> Hi all,
>
> Sigh   If you downloaded a version 22.10 dated September 28 or later,
> then please upgrade to version 22.11 at
> http://www.mersenne.org/freesoft.htm
>
> The bug causes factors to be missed in the final stage of P-1 and ECM
> factoring.  While this isn't the end of the world, it will cause you to run
> an unnecessary LL test.
>
> Sorry for the trouble,
> George



Re: Mersenne: Modularising Prime95/mprime - a path to broader development.

2002-10-31 Thread Brian J. Beesley
On Wednesday 30 October 2002 23:08, Gareth Randall wrote:
>
> Could you please expand upon how this secure certificate concept would
> work, for the benefit of myself and the list? Unless there is more to it
> than I currently comprehend, this only authenticates results as coming from
> specific users, rather than authenticating that the result is correct and
> genuine.

Your analysis is correct.
>
> For instance, how can a new user who has had no previous contact with GIMPS
> prove that they have completed a Lucas-Lehmer test correctly?

If we were able to do this with 100% certainty, then we would not need to run 
double-checks!

Don't forget that a small fraction of the results submitted in perfect good 
faith by people who are making no attempt whatsoever to cheat will be 
incorrect by reason of a hardware or software glitch.

In the final analysis, the best deterrent to anyone who is deliberately 
submitting "concocted" results is the knowledge that they will (eventually) 
be caught out through the double-checking mechanism.

One way of tightening the procedure would be for interim residues to be 
logged as well as the final residue. As I've stated in the past, this would 
also enable a saving in effort by allowing investigation when a double-check 
run disagrees rather than having to continue the run through to the end.

Regards
Brian Beesley



Re: Mersenne: Torture test allocating hundreds of MB?

2002-10-30 Thread Brian J. Beesley
On Wednesday 30 October 2002 02:34,  Nathan Russell wrote:
>
> Thanks to everyone who responded.  In this case, it's a bug in my
> thinking.  I had the memory usage set to the max allowable, because I
> wanted P-1 to succeed whenever possible, even if it inconvenienced me
> - I do most of my academic work via VNC into timeshares, so it isn't a
> big deal if the system thrashes.
>
> Is that the Wrong Thing (tm) to do in terms of improving Prime95's
> efficiency?

No, in principle it's the Right Thing (tm). In practice you want to set the 
memory parameters big enough that all available memory is in use, but 
just small enough that swap thrashing doesn't occur. Because swap thrashing 
is _so_ limiting to memory access speed, it's best to be on the cautious side 
- especially since you may occasionally want to do some "real work" on the 
system.

My current practice is as follows:

On systems with 128 MB memory, or less, set max memory to half the memory 
size. (Obviously this is not sane on systems with very small memory ...)
On systems with more than 128 MB memory, set max memory to memory size less 
80 MB. Except for one system with 512 MB which regularly processes very large 
files, so I limit mprime to 128 MB to avoid any possibility of the system 
having to sort files hundreds of megabytes in size in only ~ 40 MBytes real 
memory. Remember the OS kernel consumes memory too!
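That rule of thumb, written out as a trivial helper (the name and the special-case cap are mine; adjust the headroom for your own workload):

```python
def suggested_max_memory_mb(total_mb: int) -> int:
    """Rule of thumb described above: half of RAM on systems with
    128 MB or less, everything but ~80 MB headroom (for the OS
    kernel and file caches) on larger systems."""
    if total_mb <= 128:
        return total_mb // 2
    return total_mb - 80

assert suggested_max_memory_mb(128) == 64
assert suggested_max_memory_mb(256) == 176
```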

From the point of view of "torture testing" a system, again the sane thing 
is to test as much memory as possible. Causing swap thrashing by setting the 
memory allocation set a bit too high may be a good way of testing the disk 
I/O subsystem; however the fact that the processor will be idle waiting for 
data for a lot of the time may (depending on the CPU, power economy settings 
in the BIOS etc) allow the CPU to run cooler than it normally would. So, if 
you're testing out CPU cooling rather than memory problems, setting memory 
allocation very small (i.e. 8 MB) for the duration of the torture test is 
probably wise.

If you really suspect you have a memory subsystem problem, ideally it's best 
to use a specialist program like memtest86 which runs without an operating 
system. You simply can't guarantee to test all memory properly with any 
program running on a multitasking operating system (unless the basic 
capability is built into the kernel itself!)

Regards
Brian Beesley




Re: Mersenne: Modularising Prime95/mprime - a path to broader development.

2002-10-29 Thread Brian J. Beesley
On Tuesday 29 October 2002 17:28, Gareth Randall wrote:
>
> I'd like to suggest that prime95/mprime be "modularised", and that only the
> core calculation component be kept closed source.

Umm - actually the core calculation component is "open source" (but subject 
to restrictive licence). See source.zip.

> I realise that the code
> for generating verification codes must remain restricted, 

No - there is an alternative, which is for results submitted to be 
accompanied by a secure certificate generated by the server. 

> because that is
> the only authentication that work has really been done and done correctly.

There are a couple of points here: (1) the verification code may be 
crackable; (2) there may be ways of persuading the program to submit results 
without actually executing all the iterations required. If every user had a 
(free) secure certificate, all results submitted would be traceable to the 
individual user. This scheme would also make it possible for other clients to 
use the automatic server interface, instead of having to rely on the manual 
forms (& not getting PrimeNet credit for work completed).
>
> However, I do not see any reason why any of the building blocks other than
> the core calculation component actually need to be restricted. I also see
> many benefits of them being made open to contribution.

Sure... and this will become increasingly important if (when!) the mass PC 
market starts to diversify from the current universal IA32 architecture.
>
> [non-contentious material snipped]
>
> The computation module would be simplified, 

I doubt it!
>
>
> The key benefits are:
>
> 1. Removal of many bottlenecks caused by the understandibly limited time of
> core developers.
>
> 2. Substantially easier bug-fixing. (What's error 2250 again? Quick search
> of the source for the server comms module. Oh yes, that means...)
>
> 3. Vastly increased potential for user participation and development.
>
4. (Important) Ease of implementation on other platforms. Only the core 
computation stuff really needs to be optimized to all hell. The server comms 
& general control stuff would probably be more than efficient enough if it 
was implemented in perl, java or something else inefficient but portable.

5. (Important, for those who like eye candy) Ability to add customized 
"skins".
>
>
> Prime95/mprime could be regenerated as a collection of programs, such as:
>
> On Windows: One executable and multiple DLLs
> or: Multiple executables, and one calculation component in Win32
> command-line mode.

The benefit here is a (small) saving in memory by not having to load unused 
code. Multiple DLLs are probably the best way to achieve this saving. The 
downside of this approach, from the developer's point of view, is that you 
get shafted if/when M$ change an API without telling you.

> On UNIX: Separate binaries and scripts. One script to
> start the collection running.

Better still: build a version with what you want compiled in / loaded as 
modules at run time / omitted (like the linux kernel - though obviously much 
simpler!), monolithic or as a suite of smaller "simple" programs, depending 
on your personal taste.
>
> Wouldn't it be great if from some point in the near future, "feature
> request" posts to this newsgroup became more like "I've written a fancy
> improved frontend to GIMPS with some cool graphics, see this link for more
> info", 

Actually that happened Sunday!
>
> Further, the ability for users to run their own personalised frontends
> might give GIMPS the tangible advantage over other distributed projects
> that many readers would so like to find.

Agreed.

Regards
Brian Beesley



Re: Mersenne: Dissed again

2002-10-23 Thread Brian J. Beesley
On Tuesday 22 October 2002 16:31, you wrote:
> Yeah, well, we don't have a super cool Trojan horse program that can
> update itself (and crash machines) like these other ones, and we're not
> out there looking for ET or saving cancer boy or anything... just a
> bunch of geeks looking for big numbers. :)  (tongue planted firmly in
> cheek here).

And we tend to run in the background, all the time, instead of wasting cycles 
waiting for a screen saver to kick in, then wasting even more cycles drawing 
"pretty" graphics :-P

Probably we would get more participants if we had a screen saver version. 
This has been mentioned many times before.

And, _are_ we just looking for "big numbers"? There are software applications 
for improved algorithms & implementations of algorithms developed for this 
project; there are engineering spinoffs - a couple of years ago, the problem 
was how to keep GHz+ CPUs cool enough to be reliable, now the problem is how 
to make systems quiet enough to live with as well; there are cryptological 
spinoffs, notwithstanding the obvious point that knowledge of a few very 
large primes is not in itself useful ... for instance, has anyone considered 
using the sequence of residuals from a L-L test as a practical one-time pad? 
The problem with one-time pads is distributing the data - but you can 
effectively transmit a long sequence of residuals by specifying only the 
exponent and the start iteration, which can be transmitted securely using 
only a tiny fraction of your old one-time pad data ... 
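
As a toy illustration of the idea (a sketch only: a real pad would come from 
residues saved during an actual LL test run, whereas here a small Lucas-Lehmer 
recurrence stands in, and `ll_residues`, `residue_pad` and `xor_bytes` are 
invented helper names):

```python
import struct

def ll_residues(p, start, count, s0=4):
    """Low 64 bits of Lucas-Lehmer iterates s -> s^2 - 2 mod 2^p - 1.
    Both parties can regenerate the same stream from just (p, start),
    which is the point of the scheme described above."""
    m = (1 << p) - 1
    s = s0
    for _ in range(start):          # advance to the agreed start iteration
        s = (s * s - 2) % m
    out = []
    for _ in range(count):
        out.append(s & 0xFFFFFFFFFFFFFFFF)
        s = (s * s - 2) % m
    return out

def residue_pad(residues, nbytes):
    """Concatenate 64-bit residues into a pad of nbytes."""
    pad = b"".join(struct.pack("<Q", r) for r in residues)
    return pad[:nbytes]

def xor_bytes(msg, pad):
    """One-time-pad encryption/decryption (XOR is its own inverse)."""
    return bytes(m ^ p for m, p in zip(msg, pad))
```

Sender and receiver agree on (exponent, start iteration) over the secure 
channel, then each regenerates the pad locally.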

OK, this is pretty geekish stuff, but so what?

Regards
Brian Beesley



Re: Mersenne: On v18 factoring

2002-10-23 Thread Brian J. Beesley
On Wednesday 23 October 2002 07:26, Nathan Russell wrote:

> >Other people have mentioned the possibility of "automatically" disengaging
> > or updating the client.
>
> I am aware of several linux distributions which do the exact same
> thing (in fact I am not aware of any widely popular one which
> doesn't).

Eh? I'm not aware of any major OS which even attempts to automatically 
install upgrades, with the exception of Win 2000 (if you applied SP3 and 
forgot to disable automatic updating) and Win XP (if you applied SP1 and 
forgot to disable automatic updating). 

The problem here is one of _control_. If you allow someone else - whatever 
their intentions are - to install & run software on your system without your 
explicit permission on a case-by-case basis, you are effectively handing over 
full control of your system & all the data on it to someone else. 
>
> However, they require the user to initiate the update.

Ah, I see what you mean.

> Would you be
> more comfortable if that was done, as well as some sort of signature
> on the update files?

Here's the difference: when I'm updating my (Red Hat) linux systems, _I_ 
wrote the script that downloads the update files (from a local mirror of my 
choice) & checks the certificates. Only then do I trust someone else's 
software to unpack and apply the updates.

I'd far rather run an unpatched, insecure service than depend on something 
that is in principle uncheckable to download & install software 
automatically. The problem is that, if the connection can be hacked in to, an 
attacker can supply anything they like  

Better still if I just download the source code & compile it myself. That way 
I am absolutely sure that what I use is what I think I'm using. Obviously 
this principle can apply only to programs which are 100% open source.

But here's the crunch: this discussion is related to the current problems 
with seriously obsolete clients. By definition these do not contain 
auto-update code, so the discussion is (+/-) pointless. 

To fix the problems, we really need to take a "belt and braces" approach:

(1) the server needs to protect itself from "machine gun" requests. I reckon 
the best way to do this is for the server to detect continuous repeat 
requests & automatically command its firewall to block data from that source 
address for a limited time (say one hour). This would protect the server from 
excess load, yet is not exploitable by remote attackers - all they can do is 
temporarily block themselves out! 
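
A minimal sketch of that blocking logic (names and thresholds are invented for 
illustration; a real server would push the block into its firewall rules rather 
than track it in memory):

```python
import time
from collections import defaultdict, deque

class RequestThrottle:
    """Block a source address for block_secs after more than max_hits
    requests arrive within window_secs - a sketch of the self-protection
    idea above, not PrimeNet's actual server code."""
    def __init__(self, max_hits=10, window_secs=60, block_secs=3600):
        self.max_hits, self.window, self.block = max_hits, window_secs, block_secs
        self.hits = defaultdict(deque)
        self.blocked_until = {}

    def allow(self, addr, now=None):
        now = time.time() if now is None else now
        if self.blocked_until.get(addr, 0) > now:
            return False                        # still inside the block period
        q = self.hits[addr]
        q.append(now)
        while q and q[0] <= now - self.window:  # drop hits outside the window
            q.popleft()
        if len(q) > self.max_hits:              # "machine gun" client detected
            self.blocked_until[addr] = now + self.block
            q.clear()
            return False
        return True
```

Note the property claimed in the text: a remote attacker who floods the server 
only blocks their own source address.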

Although not necessary to the project, I'd recommend that the blocking 
action be logged so that it can be followed up (manually or automatically) by 
contacting the user concerned. Actual contact may sometimes not be possible 
because the registered user no longer controls the system.

(2) future clients should be modified so that, if PrimeNet has no suitable 
work to allocate, they back off for a few hours before trying again. Even if 
this means running out of work altogether - though, given the "days of work" 
parameter, they should run "in need of more work" for some time before 
finishing the current assignment.

In addition the server probably would benefit from addition of "intelligence" 
so that it does not attempt to assign work which specific versions of the 
client cannot accept. However, the action I suggest under (1) alone is 
sufficient; no automatic or forced upgrade is _required_.

Regards
Brian Beesley



Re: Mersenne: On v18 factoring

2002-10-22 Thread Brian J. Beesley
On Tuesday 22 October 2002 19:09, Gordon Bower wrote:
> [... snip ...]
> Does anyone have any suggestions for how to stop a runaway copy of
> v18? Perhaps in a few weeks the server can be updated to return an "out of
> exponents" error to v18 instead of offering it an assignment it can't
> handle?

This is not trivial - if you do this then a "broken" client will probably 
request another assignment immediately - thus trapping the client and the 
server into a vicious circle. Whilst the client can go hang for all anyone 
else cares, the effects on the server would probably be much the same as 
handing out an "unacceptable" assignment.

Other people have mentioned the possibility of "automatically" disengaging or 
updating the client. I have very serious reservations about this; the problem 
is that it leaves the system hosting the client wide open to use of the 
mechanism for malicious purposes, e.g. "updating" to a client containing a 
trojan or switching it to a different project, or attacking a user by 
disengaging his systems so that you can leapfrog him in the league tables. 

I'm afraid that I would have to withdraw my systems from the project, and 
recommend that other people did the same, if any such capability was added to 
the client.

Given that the server can tell the difference between a v18 client and a 
later one, would it not make most sense to have the server assign an LL test 
on the _highest_ unallocated exponent which v18 can handle if a v18 client 
asks for a factoring assignment and none suitable are available? This action 
would effectively remove the client from the loop for a while (probably a few 
months, given that most v18 clients will be running on slowish systems), 
thereby alleviating the load on the server, and buying time to contact the 
system administrator - when this is still relevant, of course. And some 
useful work may still be completed, eventually!

Regards
Brian Beesley



Re: Mersenne: On v18 factoring

2002-10-22 Thread Brian J. Beesley
On Tuesday 22 October 2002 21:00, you wrote:

> Suffice to say that the machine I used to use when working at a *totally
> different* telecom (not US WEST, oddly) had Prime95 running happily on
> it.  When I left, I didn't get a chance to wipe the machine, so every
> once in a blue moon I see it check in a result.  My mistake, for
> assuming this company wiped and reloaded machines that were reassigned
> to someone. 

IMHO (and I do have some clout on this, as I work in the computer security 
field) this is NOT your fault. If your previous employer reassigns the system 
you used to someone else, it's either the employer's or the recipient's 
responsibility to wipe & reload the system. (Depending on corporate policy).
This is a SERIOUS CONCERN; otherwise a disgruntled employee could get either 
the company or his replacement into serious trouble by deliberately leaving 
"illegal data" (child porn, pirated software or whatever) on the system, 
waiting a while then informing the authorities. 

> It's a lowly Pentium 180, but I had checked it to do LL
> tests regardless of server preference.  Meaning that nowadays, it's
> taking nearly a year to complete one.

Big deal, it was still contributing useful results! But perhaps you should 
have changed the work type to "whatever makes more sense" 0.1 microseconds 
before you left/were ejected from the building.
>
> I haven't actually seen it in a while, maybe 6 months or more, so maybe
> they finally retired it (a P180 running NT4 with about 128MB of RAM).
> It was just odd... 2-3 years after I last saw that machine, and then to
> see it report in every 6 months or so.
>
> The odd part was, the machine must not get used all that much because I
> thought I had it set to check in every week or so, but it was months
> between check-ins.  In that time, the exponent would expire, but then
> the machine would come up and start working on it again... meaning
> someone else had probably got the assignment and may have even finished
> it for all I know.

I have a number of old, slow systems which are used intermittently for 
testing purposes. I can't leave them all on all the time because the room 
they're in lacks adequate cooling. Perhaps something similar was going on. 
Unfortunately this activity pattern does tend to break the PrimeNet server 
checkin protocol, resulting in work getting reassigned. Still, if you're 
running LL tests, this probably doesn't matter - if the assignment ever 
completes, you'd get PrimeNet credit for a double check, and save someone 
else the effort of running the DC assignment.

Regards
Brian Beesley



Re: Mersenne: Composite factors.

2002-09-24 Thread Brian J. Beesley

On Tuesday 24 September 2002 06:05, Daran wrote:
> P-1, like any other GCD-based factorisation method, will yield a composite
> result in the event that there are two (or more) prime factors within its
> search space.  It seems unlikely that this would happen in practice because
> unless both were > ~ 64 bits, one of them would most likely have been found
> earlier during TF.  However given that some factors found have been > 130
> bits, 

TTBOMK only using ECM - and those events are rare enough to be newsworthy. 
I don't think P-1 has found a "proper" factor exceeding 110 bits, yet.

> then the possibility is there.
>
> I was wondering if returned factors are checked for primality.

I've found a few composite factors whilst running P-1 on small exponents. 
They've all factorised _very_ easily (well within one minute) using "standard 
tools". Basically I'm interested enough to do this myself whenever what 
appears to be an abnormally large factor is found, but it wouldn't be hard to 
automate. Also very large factors are found at a low enough rate that there's 
simply no need to distribute the checking.

Regards
Brian Beesley



Re: Mersenne: Hyper-threading

2002-09-21 Thread Brian J. Beesley

On Saturday 21 September 2002 21:20, Daran wrote:
> Could this feature of forthcoming Intel processors be used to do trial
> factorisation without adversely impacting upon a simultaneous LL?  Could
> this be easily implemented?

1) _Existing_ Pentium 4 Xeons have hyperthreading capability.

2) Implementation is easy; just run two processes - one LL & one TF - 
assigning one to each virtual processor. In fact there's no other way to 
implement: you can't have one process running in multiple virtual processors 
simultaneously with hyperthreading technology alone.

3) I reckon there would be a very significant performance hit. Temporary 
registers, instruction decoders etc. are shared so any pressure whatsoever on 
the "critical path" would cause a performance drop - even if the code in the 
two processes could be guaranteed to stay phase locked so that there was no 
simultaneous call on a particular execution unit. (In practice I think 
unregulated phase drifts would result in a phase locked clash, since this 
appears to be the most stable state).

You would probably get 20-30% more _total_ throughput this way than you would 
by running LL & TF assignments in series, i.e. the LL test speed would be at 
best 2/3 of what it would be without TF running in parallel on the same CPU.

One benefit of hyperthreading technology for compute-bound processes in an 
interactive environment - provided you're running only one compute-bound 
process per _physical_ processor - is that the extra capacity helps the 
system to react more quickly to interactive loads, so it's a lot less likely 
that foreground users will notice the background CPU load.

Regards
Brian Beesley



Re: Mersenne: TF - an easy way to cheat

2002-09-21 Thread Brian J. Beesley

On Saturday 21 September 2002 16:15, Daran wrote:
>
> > ... through 64 bits the algorithm runs much faster than it does for 65
>
> bits
>
> > and above. The factor is around 1.6 rather than 2.
>
> Good point, and one which I didn't consider in my reply.  But the ratio
> must be different for the P4, which uses SSE2 code for factorisation over
> 64 bits.

Pass, I haven't let my one & only P4 system do any TF!
>
>
> According to undoc.txt:- "You can limit how far the program tries to factor
> a number.  This feature should not be used with the Primenet server.",
> which implies that something bad will happen if you do.

Umm. I guess there is a possibility that PrimeNet might get confused about 
whether an assignment is completed (as far as the system running it is going 
to go).
>
> > Suggestion: the TF savefile should be modified to contain an internal
> > consistency check (say the MD5 checksum of the decimal expansion of the
> > current factoring position) so that cheating by editing the savefile,
>
> causing
>
> > "jumping" past a large range of possible factors, would be made a great
>
> deal
>
> > more difficult.
>
> Easily cracked.  Why not just encrypt it?

True. But the problem with encryption is that it has to be decrypted - if the 
client knows how to do that (& it has to, if the save file is to be any use 
at all) then a cheat can dig out the code somehow.

One thing that could be done is to write an extra "hidden" save file (or 
store a registry key) containing some value computed from the save file 
contents, so that if the save file was manually changed there would be a 
clash & the client would know something odd had happened.

Another trick (which I actually used in a now-obsolete anti-tamper system) is 
to take the time the file is opened for writing (in unix seconds) & xor that 
into part of the file data. When checking the file, read the file creation 
date from the directory, xor into the data, if the file checksum fails 
increment the creation date & try again - at most 10 times, since the file 
creation date (stored in the directory) shouldn't disagree with the system 
clock read by your program just before it called the file open routine by 
very much. This works pretty well because few people will twig what you're 
doing (even if you document it!) & even fewer will manage to frig the 
checksum properly.
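
A rough reconstruction of that trick, assuming CRC32 as the checksum and a 
4-byte XOR of the open time (the original system's details aren't specified, so 
everything below is illustrative):

```python
import struct
import zlib

def seal(data: bytes, open_time: int) -> bytes:
    """Append a CRC32 checksum, then XOR the file-open time (in unix
    seconds) into the first four bytes of the payload."""
    body = data + struct.pack("<I", zlib.crc32(data))
    key = struct.pack("<I", open_time & 0xFFFFFFFF)
    head = bytes(a ^ b for a, b in zip(body[:4], key))
    return head + body[4:]

def check(sealed: bytes, dir_mtime: int, max_skew: int = 10):
    """Try the directory timestamp and up to max_skew later seconds
    (clock read just before open can't lag the directory date by much);
    return the recovered payload, or None if every candidate fails."""
    for t in range(dir_mtime, dir_mtime + max_skew):
        key = struct.pack("<I", t & 0xFFFFFFFF)
        head = bytes(a ^ b for a, b in zip(sealed[:4], key))
        body = head + sealed[4:]
        data, crc = body[:-4], struct.unpack("<I", body[-4:])[0]
        if zlib.crc32(data) == crc:
            return data
    return None
```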

The idea here is to make "successful cheating" not worth the effort. There's 
no way it's going to be possible to stamp it out altogether.

Personally I'm more concerned about the possibility of cheating your way up 
the PrimeNet LL testing league table. The obvious way to do this is to start 
a test, stop manually after a minute or two, edit the iteration number to a 
few less than the exponent & restart ... lo & behold, you _could_ "complete" 
LL tests on _any_ exponent in about five minutes, even on a P90 system. The 
fact that the residuals would all be wrong is irrelevant since PrimeNet 
doesn't penalise bad results; in any case, you'd probably get away with it 
until DC started to catch up with you. An internal consistency check would 
make this a lot harder to do. MD5 is pretty good (though not perfect) as 
there is no "simple" way to fudge it; the compute effort involved in 
frigging a file to achieve a specified MD5 sum is several orders of magnitude 
greater than that required to LL test a 10 million digit exponent, so it's 
simply not worth trying.

Regards
Brian Beesley



Re: Mersenne: TF - an easy way to cheat

2002-09-21 Thread Brian J. Beesley

On Friday 20 September 2002 22:42, Torben Schlüntz wrote:
> Anyone receiving a TF task could edit the worktodo.ini from
> Factor=20.abc.def,59
> to
> Factor=20.abc.def,65
> He would receive approx. twice the credit the effort is worth.

Not quite - even allowing for the 1/2^6 effort involved in TF through 59 bits 
... through 64 bits the algorithm runs much faster than it does for 65 bits 
and above. The factor is around 1.6 rather than 2.

> Ofcourse nobody would do this, as we are all volunteers! Or could
> somebody some day be tempted to raise his rank using this method?

Never underestimate what some people may do to rig league tables!

> Does GIMPS hold some log for TF's done by which account? If so could
> this log please be open?

I think the log exists but, since intermediate checkpoints are not logged, it 
might not show anything.

> Would this cheat be trapped later by P-1 or does P-1 trust earlier work
> so factors below say 67-bits are not considered?

P-1 doesn't care what (if any) TF has been done previously. _Some_ but by no 
means all "missed" factors would be picked up by P-1.

Note also that some factors may be missed due to genuine "glitches" as 
opposed to deliberate skipping.

> The above questions are _not_ asked because I intend to use the method.
>
> :-/ I think it would miscredit GIMPS as we trust the results of GIMPS.
>
> And I would be disappointed if I learned that an LL I did could have
> been solved far earlier - and using less effort.

Yes - but as TF is primarily designed to reduce LL testing effort, missed 
factors are an inefficiency rather than a serious problem.

Suggestion: TF should report completion of each "bit" to PrimeNet, not just 
once on completion of the target depth. I don't see how this would require 
changes to the server, though there would be a (relatively small) increase in 
load.

Suggestion: the TF savefile should be modified to contain an internal 
consistency check (say the MD5 checksum of the decimal expansion of the 
current factoring position) so that cheating by editing the savefile, causing 
"jumping" past a large range of possible factors, would be made a great deal 
more difficult.
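
The suggestion might look roughly like this (the savefile layout is 
hypothetical; only the idea of an MD5 over the decimal expansion of the 
position comes from the text):

```python
import hashlib

def write_savefile(exponent: int, bits_done: int, position: int) -> str:
    """Serialise a (hypothetical) TF savefile record with an MD5
    consistency check over the decimal factoring position."""
    digest = hashlib.md5(str(position).encode()).hexdigest()
    return f"{exponent},{bits_done},{position},{digest}"

def read_savefile(line: str):
    """Parse a record, refusing it if the position field was edited."""
    exponent, bits_done, position, digest = line.split(",")
    if hashlib.md5(position.encode()).hexdigest() != digest:
        raise ValueError("savefile checksum mismatch - possible tampering")
    return int(exponent), int(bits_done), int(position)
```

An editor who changes the position without also recomputing the digest produces 
a file the client rejects, which is exactly the "not worth the effort" bar 
aimed at here.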

Regards
Brian Beesley



Re: Mersenne: New First Time Tests!

2002-09-18 Thread Brian J. Beesley

On Monday 16 September 2002 22:18, George Woltman wrote:
> I'm releasing about 3000 exponents from 10,000,000 to 15,000,000 for
> first-time testing!  These have been tested once already, but the first run
> had one or more error.
>
> As we saw in another thread, this means the first test has less than a 50%
> chance of being correct.  Retesting these exponents now rather than waiting
> for double-checking to get this high makes sense to me.

Yes ... in fact it makes sense for LL tests with errors to be automatically 
recycled (but only the first time), since the "worst" that will happen is 
that there will be an early verification.

If there are two results with the same residual this is very, very probably 
correct, even if both runs had errors - this may well happen, at least with 
some versions of Prime95/mprime, when the exponent is very close to a FFT run 
length crossover.

Regards
Brian Beesley



Re: Mersenne: Order of TF and P-1

2002-09-12 Thread Brian J. Beesley

On Wednesday 11 September 2002 13:43, Steve Harris wrote:
> I don't think the TF limits were ever lowered; 

I haven't checked the source from the latest version but the TF limits should 
surely be linked in some way to the LL/DC FFT run length crossovers. Many of 
these _have_ been lowered - slightly, and especially for P4 systems.

> it seems they may have been
> raised, as I have gotten several 8.3M DC exponents which first had to be
> factored from 63 to 64 and THEN the P-1. 

Yeah, I've had a number of these, too.

Ignoring runs with errors (which is a reasonable first approximation), 
factoring before a DC saves only one LL test, whereas factoring before the 
first LL test saves two. So the trial factoring (TF) depth for DC assignments 
should be one bit less than for LL assignments - since, ignoring efficiency 
changes due to hardware (word length) constraints, the last bit takes half 
the total TF time.
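
That arithmetic can be sketched numerically. Everything here is illustrative: 
the 1/b factor-density heuristic and the 2^b cost model are rough assumptions, 
not GIMPS's actual tuning:

```python
def p_factor(b):
    """Assumed heuristic: the chance of a factor between 2^(b-1) and
    2^b is roughly 1/b (a common rule of thumb, not exact)."""
    return 1.0 / b

def best_tf_depth(tests_saved, ll_cost, depths=range(50, 70)):
    """Deepest bit whose expected saving (chance of a factor times the
    LL tests it avoids) still exceeds its cost. Factoring one more bit
    costs about as much as all previous bits together, so the cost of
    bit b is modelled as 2^b."""
    return max(b for b in depths
               if p_factor(b) * tests_saved * ll_cost > 2.0 ** b)
```

With the cost of bit b doubling each time, halving the number of LL tests saved 
(two for a first test, one for a DC) moves the break-even depth down by about 
one bit, which is the conclusion drawn above.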

> It occurred to me that it might be
> more efficient to do it the other way around, but factoring from 63 to 64
> goes relative quickly. If it were a question of factoring from 65 to 66
> versus P-1 first, then I think the P-1 wins easily.

Again I haven't checked against the new code - TF on P4s is supposed to have 
been speeded up considerably - but it used to be the case that, assuming 
trial factoring runs at the same speed irrespective of depth (which is again 
a reasonable assumption since most TF through 2^64 is completed) it was most 
effective on a PIII or Athlon to run TF through to N-1, then P-1, then the 
last "bit" of TF. On P4s the best policy was to run P-1 after TF to N-2 bits. 
I guess the new P4 TF code will have brought it all into line.

Regards
Brian Beesley



Re: Mersenne: WinXP SP1 slows prime95

2002-09-10 Thread Brian J. Beesley

On Tuesday 10 September 2002 19:09, Jud McCranie wrote:
> Yesterday I went from Windows XP home to service pack 1.  The speed of
> prime95 went down by over 2%.  Has anyone else seen this?  Any ideas on
> what caused it or how it can be fixed?

No, I haven't seen this. I don't even have a copy of Win XP.

2% is the sort of change which can occur when a program is stopped & 
restarted without changing anything else. Probably the cause is a change in 
the page table mapping (of physical to virtual memory addresses). It's also 
common to find Prime95/mprime speeding up a little when an assignment 
finishes and the next one starts compared with the speed measured when the 
program is freshly started. This seems to happen on (at least) Win 95, Win 
98, Win NT4, Win 2000 and linux with both 2.2 and 2.4 kernels, with multiple 
versions of Prime95 & mprime.

From what I've heard & read about XP SP1, I don't think there's anything in 
it which should affect Prime95 running speed to any significant degree. 
Personally I would not agree to the modified EULA which comes with SP1, as it 
appears to allow M$ to take complete administrative control of your system. 
However, that's irrelevant to the speed problem; in any case, not applying 
the critical patches contained in SP1 is in itself a security risk.

Regards
Brian Beesley




Mersenne: Database - truncated lucas_v file available

2002-09-03 Thread Brian J. Beesley

Hi,

Since George added offset & error information to the lucas_v database file, 
it's grown ... now around 7 megabytes, making it painful to download on an 
analogue modem link.

I've therefore created a "truncated" version of the file. This is the same 
file but with the information omitted for all exponents smaller than the 
"double check completed through" value on the status page. The new file is 
only about 40% of the size of the full version, but contains all the 
information of interest to people working in active PrimeNet ranges.

The file is available as follows:

http://lettuce.edsc.ulst.ac.uk/gimps/lucas_va.zip

or by anonymous ftp from the same host, look in directory 
/mirrors/www.mersenne.org/gimps

It's generated by a cron job run at 0630 GMT daily, if the lucas_v.zip file 
on the master database has been updated during the previous day.
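
The truncation step could be sketched as follows (the entry format is an 
assumption - one entry per line with the exponent as the first comma-separated 
field - and may not match the real lucas_v layout):

```python
def truncate_db(lines, dc_complete_through):
    """Keep only entries whose exponent exceeds the 'double check
    completed through' threshold from the status page. Header or
    malformed lines (no leading integer field) are dropped."""
    kept = []
    for line in lines:
        field = line.split(",", 1)[0].strip()
        if field.isdigit() and int(field) > dc_complete_through:
            kept.append(line)
    return kept
```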

BTW this file has been generated by an open-source compression program which 
is supposed to produce files 100% compatible with Windows Zip format. Please 
let me know if there are any problems with decompressing it.

The original file remains available; omit the last "a" in the URL given above.

Regards
Brian Beesley



Re: Mersenne: Still unable to communicate with server.

2002-09-03 Thread Brian J. Beesley

On Friday 30 August 2002 20:59, I wrote:

> Are we losing users? Well, if users can't connect to the server, they're
> going to be discouraged. Ditto anyone still using Windows 95 - Prime95
> v22.3+ has problems since George apparently upgraded his development kit.

I'm pleased to report that Prime95 v22.8 runs without problems on my reference 
Win 95 system (original version + SP1 + assorted patches).

Regards
Brian Beesley



Re: Mersenne: Still unable to communicate with server.

2002-09-01 Thread Brian J. Beesley

On Friday 30 August 2002 21:29, you wrote:
>
> Well,  Win95 is getting increasingly uncommon (and for good reasons,
> stability and support for USB come to mind).

Well - there are still a lot of older systems around which run Win 95 quite 
happily (some of them are even reasonably stable!) but which it isn't worth 
paying M$ "upgrade tax" on. Some simply don't have the resources to run Win 
XP, and some of the owners can't be persuaded to switch to linux (perhaps 
because they're corporately owned).

Suggesting to people that their contribution isn't welcome because they're 
operating somewhere close to the trailing edge is another good way of 
discouraging participants.

As for USB - I'm responsible for twenty-odd systems with hardware USB ports; 
only two of them have ever had USB peripherals connected. I may be unusual in 
this respect, but I don't think USB support is absolutely essential. In any 
case, later OEM versions of Win 95 contain the same USB support as Win 98.

Yes, it is neat to be able to connect a printer to two systems simultaneously 
using both the parallel and USB ports!

Regards
Brian Beesley




Re: Mersenne: More error info

2002-09-01 Thread Brian J. Beesley

On Sunday 01 September 2002 03:35, George Woltman wrote:
>
> Our intrepid researcher broke down the non-clean run stats below.  So if
> you get a single error, you've got a 2/3 chance of being OK.  Two or more
> errors and your chances are not good. 

There will be a major change in this area - since new versions run 
per-iteration roundoff error checking when close to FFT run length 
crossovers, there will be a fair number of _reliable_ results with multiple 
(automatically checked) roundoff errors.

Perhaps the analysis should distinguish between runs where all roundoff 
errors were found to be "false alarms", and runs where a roundoff error seems 
to have been a glitch.

Analysis of the seven results in the "bad" database submitted on my account:

3 due to a failed CPU fan (on an old P100 system, running in an 
air-conditioned computer lab. One with detected roundoff errors, two without)
2 due to software bug in a recent alpha release
1 due to hard disk problem - almost certainly bad swap file I/O caused memory 
corruption on a busy system (without detected error)
1 (also without detected error) cause unknown.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Still unable to communicate with server.

2002-08-30 Thread Brian J. Beesley

On Friday 30 August 2002 04:22, Sisyphus wrote:
> Hi,
> Recently started getting error 29 with Windows, PrimeNet version 21, so
> I've upgraded to version 22.8.
> Now I get error 2250 - so we're definitely making progress

2250 is a problem with the server being offline for some reason.
>
> :-)
>
> 'prime.ini' contains 'UseHTTP=1'. And I've tried the stuff relating to
> proxies/firewalls mentioned in the faq (though this was not an issue with
> version 21). Still can't get a connection.
>
> Where to, now ?

Hmm. Surely "UseHTTP" is now obsolete since there hasn't been RPC support for 
some time?

Are we losing users? Well, if users can't connect to the server, they're 
going to be discouraged. Ditto anyone still using Windows 95 - Prime95 v22.3+ 
has problems since George apparently upgraded his development kit.

Another good reason for changing to linux?

Anyone with this problem could also try Prime95 v22.1, which _does_ appear to 
work on Win 95 _and_ hasn't had the connection problems others have been 
reporting (though there have been occasions this week - at least 12 hours on 
one occasion - when the server was broken).

I find all this a bit hard to understand (and harder to wrestle with when I'm 
hopelessly overloaded with "work" - as usual at this time of year) since web 
comms on TCP port 80 is pretty standard these days; even firewall tunnelling 
is pretty much a "given". When it stopped working, which end was broken so 
far as standards compliance is concerned?

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 22.8.1 has increased iteration time

2002-08-30 Thread Brian J. Beesley

On Thursday 29 August 2002 13:30, Gary Edstrom wrote:
> I have noticed a small but definite increase in the iteration time of
> version 22.8.1 as opposed to 21.4.
>
> During the night, when my 2.2GHz Pentium IV system was free of all other
> processing activities, the iteration times were as follows:
>
> 21.4  47 msec
> 22.8.150 msec

Is the exponent very close to the run length threshold? If so you're now 
running with per-iteration roundoff checking. On my P4 1.8A (with PC800 
RDRAM) I found this made very little difference, but systems with more 
limited memory bandwidth may be more affected by this.
>
> I am continuing processing on the very same exponent using the new
> version.  Is this allowed?

It works, reliably, though sometimes the FFT run length may change at the 
time of the upgrade.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 266 vs 333 ddr on Athlon

2002-08-27 Thread Brian J. Beesley

On Tuesday 27 August 2002 02:08, Marc Honey wrote:
> Anyone else notice that a kt333 Athlon board using an Athlon XP gets better
> performance at 266 than at 333?  I was amazed at the difference, and yes I
> tweaked out the bios under both memory speeds.  AMD really needs a fsb
> speed update!

Weird. Possibly your 266 MHz DDRAM is CL2 but your 333 MHz DDRAM is CL3.

Also, I have two near-identical systems using 1.2GHz T'bird Athlons in Abit 
KT7A mobos, with CL2 PC133 memory. The only difference is that one of the 
CPUs is 200 MHz FSB, the other is 266 MHz. Both are running the memory at 133 
MHz (the BIOS on the KT7A lets you do this). The system speeds are within 1% 
of each other.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Benchmarks / Reference Machine / Calculations

2002-08-21 Thread Brian J. Beesley

On Tuesday 20 August 2002 22:39, you wrote:
> Michael Vang highlights the fact that there are two different things that
> we can measure: 1) work accomplished, e.g. Mnumbers evaluated, iterations
> run, etc. 2) work effort expended, which requires evaluation of
> processor/system power.
>
> The P4 versions (more efficient) accomplish more with less effort. This can
> make evaluation of effort expended complex, even when work accomplished is
> trivial to calculate.

Also, PIII and Athlon are more efficient than PII because of the impact of 
the prefetch instruction...
>
> The only thing I have concluded so far is that any re-indexing or
> re-calculation should be concerned strictly with LL computation. No
> consideration should be given to factoring or P-1 effort in determining
> machine efficiency. After all, any factors found are a nice side effect.
> The _real_ objective is to find numbers _without_ factors.

Umm. I think the point here is that factoring & LL testing are _different_. 
You can't add apples & oranges directly; you really need separate tables.
>
> The rankings should ideally be based on work effort expended, in my
> opinion. I have no idea how this can be done "fairly". If accomplishments
> are to be the basis of rankings, the individuals who have found MPrimes
> should always be at the top of the list for LL testing.

This is all old stuff & fairly uncontroversial. To recap: at present we have 
two sets of tables:

the PrimeNet tables are "effort expended" and separated into LL testing & 
trial factoring components (P-1 is ignored)

George's tables count LL testing effort only; results subsequently found to 
be incorrect are discredited; so are results for exponents when a factor is 
subsequently found.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Benchmarks / Reference Machine / Calculations

2002-08-20 Thread Brian J. Beesley

On Tuesday 20 August 2002 08:57, Paul Leyland wrote:
> Anyone else here old enough to remember Meaningless Indicators of Processor
> Speeds?

Oh yes. My first boss used to rate CPUs in "Atlas power".
>
> All gigaflops are not created equal, unfortunately.  Wordlength alone can
> make a big difference.

Really we need only consider IEEE single (24+8) & double (53+11) precision 
types... the x87 80-bit format is not much different to double precision the 
way we use it, and I'm not aware of any common hardware implementations of 
other floating-point formats.
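The gap between those two bit budgets is easy to demonstrate. A minimal sketch (my own illustration, not from the post; it round-trips a value through hardware single precision via the `struct` module):

```python
# Demonstrate the 24-bit vs 53-bit significands mentioned above.
import struct

def to_single(x):
    """Round-trip a Python float through IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 1 + 2^-24 survives in double precision (53-bit significand)...
assert 1.0 + 2**-24 > 1.0
# ...but is rounded away in single precision (24-bit significand).
assert to_single(1.0 + 2**-24) == 1.0
# The analogous rounding boundary for double precision sits at 2^-53.
assert 1.0 + 2**-53 == 1.0
print("precision boundaries as expected")
```

The same boundary is what makes the roundoff-error thresholds in the LL FFT code a double-precision question rather than an 80-bit one.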
>
> > Or use bogomips... :)
>
> It's no worse than many suggestions, and better than some we've seen.

Surely bogomips is measured only in the integer arithmetic unit?
>
> Personally I vote for the status quo.  It's a well understood arbitrary
> unit and there are enough P90's around to be able to re-calibrate new
> algorithms as they come along.  If need be, I can un-overclock my P120 to
> convert it back into a P90 for benchmarking purposes.   I doubt very much
> that there aren't other P90 owners who could also provide a similar
> service.

There's a great deal to be said for that proposal.

But don't forget that the "official P90 benchmark" refers to a specific 
system, no longer in existence, operated by George Woltman; it seems to have 
been a rather good P90 system. Apart from the CPU, factors such as the L2 
cache size, memory timings, chipset type, BIOS parameters etc. etc. can make 
a significant difference to the speed of a system.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: The first 1000+ digit prime

2002-08-20 Thread Brian J. Beesley

On Tuesday 20 August 2002 16:32, Tony Forbes wrote:
> We all know that A. Hurwitz discovered the Mersenne primes 2^4253 - 1
> and 2^4423 - 1 in 1961.
>
> (i) Were these the first two 1000+ digit primes discovered?

Yes. See http://www.utm.edu/research/primes/notes/by_year.html#table2
>
> (ii) If that is true, then is it generally accepted that the larger one
> (4423) was discovered first? (The story I heard was that left the
> computer running overnight and when he came to look at the results he
> read the printer output backwards, thus seeing 4423 before 4253.)

Interesting. Is the "discovery" the point at which the computer finishes with 
zero residual (in which case 4253 was discovered first) or the point at which 
a human being becomes aware of the result of the computer run (in which case 
4423 was discovered first)? (Unless the operator (remember those?) was 
reading the printout as it was being output?)

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Benchmarks / Reference Machine / Calculations

2002-08-19 Thread Brian J. Beesley

On Sunday 18 August 2002 17:59, Jeff Woods wrote:
> 21000 of the 31000 participating machines are P-III or better.
>
> Less than 2,000 true Pentium-class machines remain in the mix.
>
> George et. al.:  Could it be time to change the baseline reference machine
> away from the Pentium-90, and wipe the P-90 off of all pages, from rankings
> to status to years of computing time left to complete the task?
>
> A couple years back, George changed the Status page reference to be a
> P-II/400, equivalent to 5.5 P90's.   Now even that PII/400 is far less than
> the 'average participating machine", which given the above numbers, I'd
> guess is now about one gigahertz, perhaps slightly better.
>
> I believe that a one-time re-indexing of ranks, stats, and "time left to
> compute" that re-indexes on either a P-III/1000 or an Athlon-1000, would
> make the "CPU years left" numbers on the status page a bit more realistic,
> as well as the number of "CPU years" I complete each day.

If we're going to re-index at all then we should be jumping to the top of the 
range since this will be relevant for longer. How's about referencing the 
Pentium 4 2.67B which is about the top of the range at the moment (if it's 
even available yet).

I think we should also publish conversion factors for "common" processors 
including obsolete ones at least as far back as 386. There _is_ historical 
interest in this, even if working examples of these processors are now only 
to be found in space hardware. (Incidentally the first successful 
microprocessor-controlled space probes - the Voyagers, controlled by Intel 
8008 CPUs - are just coming up to the 25th anniversary of their launch!)

>
> 
>
> Side note:   Also of interest in both the benchmarks table and on the
> individual / top producers tables, would be a RECENT CPU hours/day
> comparison, as well as a machine reference back to the baseline machine,
> whatever it may be.
>
> i.e. I've been with this thing from the beginning, in 1996.   Obviously, my
> average machine has gotten better and better.   My top listing says I'm
> doing about 1090 CPU hours a day but that's averaged over ALL of my
> submissions, dating back to when I was using 486's in 1996!
>
> I did some arithmetic to try to figure out what I'm cranking out NOW
> (anyone want to check my logic here)?
>
> i.e. how many CPU-hours a day is, say, an Athlon 1600+ worth?
>
> According to the benchmarks page, the P-II/400 does a 15-17MM exponent
> iteration in 0.536 seconds.And we know that this machine is 5.5
> P-90's.  Thus, a P-90 would be expected to take 5.5 x 0.536, or 2.948
> seconds.
>
> My Athlon 1600+ takes .130 seconds per iteration.
>
> 2.948 / 0.130 = 22.677 times as fast at the P-90, so 22.677 x 24 hours
> means that this machine ought to be doing ABOUT 544.24 P-90 CPU hours per
> day.
>
> If I add up what all my machines are doing NOW, I get 3503 P-90 CPU Hours a
> day, not the 1090 shown on my account and report.
>
> --
>
> What I'd like to see is:
>
> 1) On the individual account report, the above calculation (i.e. the
> 544.24) shown next to the exponent/machine.  This should not be ESTIMATED,
> but reverse engineered from actual reported iterations per second for the
> exponent, compared to 2.948 seconds for the P90 (or whatever a new baseline
> might be).
>
> 2) A SUM of all of the above, to let one know how much they TRULY are
> cranking out, as opposed to that slow creeping average that, after so
> many years means nothing.
>
> 3) A "rolling average" for the last 6 months, for the Top XXX pages, so
> that I can compare RECENT work to other recent work.  i.e. I see that
> I am surrounded by many others in the 1100 CPU Hours/day rangebut
> if my historical data is skewed so much by those old slow machines from
> six years ago, how much are others skewed?  Who do I have a chance to
> pass?  Who's gaining on me?   I can't tell!   A rolling average, or
> perhaps the calculations from #2 above in a column instead of a rolling
> average, would make comparisons in the Top XXX listings easier, and
> much more meaningful.
>
This suggestion makes a lot of sense. The "hours per day" figure is pretty 
meaningless, for the reasons stated.
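For what it's worth, the quoted arithmetic reduces to a one-liner. A sketch using Jeff's own figures (the 2.948 s P90 baseline is his derived value, not an official benchmark):

```python
# Convert a per-iteration time into P90-equivalent CPU hours per day,
# following the arithmetic in the quoted post.
P90_SEC_PER_ITER = 5.5 * 0.536   # P-II/400 timing scaled by its 5.5x P90 rating

def p90_hours_per_day(sec_per_iter):
    """P90-equivalent CPU hours produced per 24-hour day at this speed."""
    return (P90_SEC_PER_ITER / sec_per_iter) * 24

print(round(p90_hours_per_day(0.130), 1))  # Athlon 1600+: ~544.2
```

A rolling average could apply the same conversion per machine and sum over a recent window, which is essentially suggestion #2 above.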

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Benchmark Timings: XP1800+

2002-08-19 Thread Brian J. Beesley

On Sunday 18 August 2002 17:18, you wrote:
> What a difference RAM makes.
>
> I didn't find my specific configuration on the benchmarks page, so I am
> sending in this.
>
> CURRENTLY on the benchmarks page:
>
> Athlon XP1800+, 1533 Mhz, 133 DDR, L2=256-Full, 15-17M timing:  0.091 sec

The memory access speed matters a lot, too. Most PC2100 (133/266 MHz DDRAM) 
is CL2.5 but premium PC2100 memory can work at CL2. PC2700 (166/333 MHz 
DDRAM) usually works at CL2 when run at PC2100 speeds. The difference between 
DDRAM @ CL2 compared with CL2.5 is about 5%.

Your benchmark speed sounds about right for CL2 DDRAM; I'm getting 0.097 sec 
on an Athlon XP1700+ using CL2.5 PC2100 DDRAM with a very mild overclock 
(nominal voltages, FSB wound up to 136 MHz, i.e. CPU clock 1496 MHz).

Similarly PC133 (SDRAM) can be either CL2 or CL3, with CL3 around 8% slower.
>
> I have a virtually identical machine, except it is using mere 133 Mhz
> SDRAM.  I'm CERTAIN the CPU bus speed is correct, and that the L2 is fully
> enabled, yet for an exponent smack in the middle (in the 16 millions) I'm
> getting only 0.116 sec out of it, on a barren (i.e fresh install, no other
> apps/services running) W2K machine (v22.7 beta).
>
> That's a TWENTY TWO PERCENT performance hit, just for not using DDR!

Sounds a lot, I'd have expected around 15% ... did you check the other 
performance settings in the BIOS? These can make a significant difference...

(If you turn off the L2 cache the speed will drop DRAMATICALLY. Factors of 20 
or more are typical!)

One thing that _doesn't_ seem to make a whole lot of difference is the FSB 
speed. I have two Athlon T'bird 1.2GHz systems, both running in Abit KT7A 
mobos with the same RAM (CL2 SDRAM @ 133 MHz) and the same BIOS tuning. The 
difference is that one of the CPUs is 12x100 and the other is 9x133. There is 
less than 1% difference in speed between the two systems.

This may be relevant with the new Pentium 4 "A" (quad-pumped 100 MHz) & "B" 
(quad-pumped 133 MHz) variants - my guess is that a "A" variant coupled with 
533 MHz memory is going to outperform a "B" variant coupled with 400 MHz 
memory by some margin. Obviously a "B" with 533 MHz memory is going to be 
best. Note also that RDRAM outperforms DDRAM by a considerable margin. The 
problem is that 533 MHz (PC1066) RDRAM is hard to obtain and therefore 
expensive at present. However spending $100 extra on better memory is going 
to be a lot more effective in performance terms than spending $100 extra on 
the processor, once you're at the top end of the range.
>
> A word to the performance wonks out there -- use the BEST RAM that your
> motherboard can take advantage of.

And it gets more and more important as the clock speeds go up.
>
> One of these days I'll drop some DDR in there (sad to say, I don't have any
> spare right now), and see if it truly does drop down to 0.091 or
> thereabouts.

Note that it is not often possible to use SDRAM or DDRAM in the same board. 
SDRAM DIMMs are 168 pin, DDRAM DIMMs are 184 pin, so the modules are not 
physically interchangeable. Unless you have two sets of RAM slots, you aren't 
going to be able to convert without swapping the mobo. Even then I doubt you 
will be able to use both DDR and SDR at the same time, without crippling the 
DDR performance to SDR levels.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Two bugs in Prime95 v21.4.1

2002-07-23 Thread Brian J. Beesley

On Tuesday 23 July 2002 10:25, Paul Leyland wrote:
> George,
>
> I think I've found two bugs in Prime95 or, at least, serious
> misfeatures.  I don't know whether they've been fixed in more recent
> releases but as I'm using the program in a rather creative manner I
> suspect not.  The Mersenne list is Cc:ed so that if anyone else wishes
> to use your program in this way they will be aware of the problems and
> take appropriate countermeasures.
>
> Bug 1:  A factor specified in lowm.txt or lowp.txt which is more than a
> hundred or so decimal digits is not read correctly and is incorrectly
> reported as not dividing the M or P number being factored.  The exact
> length at which it fails wasn't determined directly but it's around that
> size.

Umm.

My guess would be that whatever buffer space is set aside for known factors 
(or the product of known factors) is insufficiently large, so something gets 
trampled on.
>
> Bug 2:  If worktodo.ini contains two lines specifying ECM work to be
> done, and a factor is found of the first number, the worktodo.ini file
> is truncated to zero size and Prime95 halts.  In my opinion it's a
> misfeature that the program doesn't append the new factor to
> low{m,p}.txt and continue with the remaining curves on that integer but
> I accept that may not be what everyone wants.  Wiping out all remaining
> lines *is* a bug in my opinion.

Sure. But it hasn't happened to me when I've found factors (up to 44 digits). 

I thought the "continue after finding factor" feature was set by default - 
doesn't one have to set ContinueECM=0 in prime.ini if one wants to abort 
after finding a factor? The exception being if the cofactor is determined to 
be a probable prime, in which case there is no point in continuing...?

Maybe fixing any "short buffer" problem would fix this problem too.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Roundoff errors

2002-07-22 Thread Brian J. Beesley

On Monday 22 July 2002 16:55, you wrote:
>
> Thank you and everyone else, both on- and off-list, for your helpful
> suggestions.  I took the cover off and had a look.  The HSF looked like the
> inside of an old vacuum cleaner, so I used a new one on it.  :-)  The fan
> speed is now back up to 4600, and the processor temperature has dropped by
> 10 degrees.
>
> While this is probably what tipped the system into instability, I'm not
> convinced it is the sole cause of the problem, if, as you say, 50 degrees
> is not excessive.

This does depend on whether the sensor is correctly installed in the mobo - 
socketed AMD Athlon/Duron processors have the sensor located in the "well" in 
the middle of the socket, rather than in the processor package. It should 
sort of stick up a bit so that the springiness of the mounting wires holds it 
in contact with the installed processor package base. If it is pushed down to 
the bottom of the well, out of contact with the processor package, it will 
underread badly - even if the reporting software maps the 
current/voltage/resistance measured by the sensor to temperature correctly. 
The temperatures & voltages reported in the BIOS Setup "health status" 
display should be correctly mapped but are, of course, impossibly 
inconvenient to use on a running system!

A small amount of dust buildup on the leading edges of fan blades is 
inevitable. The problems with buildup of excessive dust/fluff are (a) it gets 
stuck between the heatsink fins, reducing the effective area of the heatsink 
and restricting airflow - possibly to the point where fan rpm reduces, though 
this usually involves fan blade stalling with a marked increase in 
aerodynamic noise; (b) lumps of compacted fluff can be thrown from the fan 
blade, whereupon they stick to the inside of the duct and are clouted by each 
passing fan blade; this makes a terrible noise, and failure of the bearing 
due to repeated shock loading often follows shortly thereafter.

I prefer to mount cooling fans onto heatsinks so that they suck hot air from 
the heatsink rather than blow cool air onto it. This does result in a small 
decrease in efficiency of the HSF combination when clean, but helps prevent 
the heatsink from getting clogged with debris. The "reversed airflow" trick 
is particularly effective when combined with an adapter allowing a 80 mm fan 
to be fitted to a heatsink designed for a 60mm fan; the advantage here is 
that the larger fan can be much quieter, due to being able to shift air at 
the same rate whilst rotating at a much lower speed.

> > to test your processor, i recommend www.memtest86.com
> > it is a memtest in first place, of course, but it also tests the
> > processor
> >
> > You can test your system with madonions 3DMark 2001 SE. This  program
> > will heat up your ram, cpu and grafik card.

If you are running Windows, you could also check out Sisoft Sandra. The 
"free" limited edition will do enough.

http://www.sisoftware.co.uk/sandra

One final point - I heard of a system built by a well-known overclocking 
expert who had a problem with the system "blue screening" at intervals for no 
apparent reason. (Sounds typical of Windows systems to me; however...) 
The problem persisted even when everything was returned to rated speed. 
Eventually it was traced to the chipset fan; occasionally this would seize 
(probably due to blade fluff shedding) for a few minutes, but then restart 
itself, so that a quick check of the operation of the fan showed no obvious 
fault. Operating the system with the cover off, it was noticed next time that 
the system crashed, the chipset fan was not running. 

If you _do_ have to deal with a suspect/failed chipset fan, it is now 
possible to obtain large area passive heatsinks which can be used to replace 
the chipset HSF; these are obviously going to be more reliable & less noisy 
than a replacement fan, though they may not be suitable for (or indeed 
compatible with) all motherboards.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Damage done by overclocked machines

2002-07-11 Thread Brian J. Beesley

On Thursday 11 July 2002 03:43, George Pantazopoulos wrote:
> Hey all,
>   If an overclocked machine is producing erroneous results, how much harm
> does it to the project as a whole? Can it miss the next Mersenne prime?
> Will the rest of the group assume that there is officially no Mersenne
> prime at the missed location and not double-check?
>
(Apart from the fact that there are lots of reasons other than overclocking 
why a result might be in error!)

That's the whole point of double-checking.

There IS a VERY SMALL chance that a double-checked result would be wrong. 
This will happen if BOTH runs go wrong and the final residual is the same. 
The chance of this happening with independent random errors is obviously 
less than 1 in 2^64; this is about the same chance that the same balls will 
be drawn four weeks running in a 6/49 lottery game, so we don't worry too 
much about it.
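The odds can be put into numbers. A back-of-envelope sketch (my own arithmetic, not from the post; it also covers the short residues discussed below):

```python
# Back-of-envelope odds for matching residues vs the lottery analogy.
from math import comb

full_residue = 2.0 ** -64          # two independent wrong runs agreeing on 64 bits
short_residue = 2.0 ** -16         # ditto when only 16 residue bits were reported
draws = comb(49, 6)                # distinct tickets in a 6/49 lottery (13,983,816)
lottery_repeat = (1 / draws) ** 3  # same balls four weeks running = 3 repeats

print(f"64-bit match: {full_residue:.1e}")   # ~5.4e-20
print(f"16-bit match: {short_residue:.1e}")  # ~1.5e-05
print(f"lottery:      {lottery_repeat:.1e}")
```

The 16-bit figure makes it clear why the short-residue exponents are worth a systematic triple-check while the 64-bit ones are not.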

The chance of missing a prime is much smaller than that, because the wrong 
result would have to belong to a number which really is prime.

However, there are a number of exponents where one or other of the runs was 
made with an old client which reported 16 (sometimes even fewer) bits of the 
residual. Clearly there is a much higher chance that one of these might be 
wrong. For some time, some of us have been systematically working our way 
through these running a triple-check, but there are still a few thousand left.
This is an ideal project for systems too slow to be useful otherwise, as the 
remaining exponents are all less than 2 million (in fact about 40% of them 
are less than 800,000).

Anyone interested in contributing, please e-mail me directly.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Prime95 21.4.1 miscomputation of completion date--my setup or bug?

2002-07-11 Thread Brian J. Beesley

On Wednesday 10 July 2002 22:38, Gerry Snyder wrote:
> I am puzzled. I am running two copies of prime95 (one with no options,
> one with -A1) on a dual 1 GHz P3 computer under Windows 2K. One is
> factoring, and the other is doing an LL test of an exponent in the
> 15,xxx,xxx range. The torture test was run for several hours with no
> problems. W2K recognizes the two CPU's and the task manager shows each
> mprime getting 50% of the total CPU time and almost no idle time.
>
> The only problem is a galloping completion date. After about 11 days of
> execution (split among several reboot sessions), with some time spent in
> trial factoring and P-1 factoring, the LL is over 26% complete. The
> per-iteration of .191 seconds indicates a total LL time of around 33
> days. Right after I start it running, the completion date is reasonable.
> But every few hours it contacts the PrimeNet Server to extend the
> completion date by weeks. The "Status" now menu shows completion in
> April 2003 (this execution has been for about 5 days).
>
> Any idea what could be the problem?

Did you change the CPU type/speed - perhaps after importing local.ini from a 
different system? Keep a track of the changing value of RollingAverage in 
local.ini; if it's a long way different from 1000 (say outside the range 
500-1500) try stopping Prime95, editing RollingAverage to 1000 & restarting. 

There have been problems with some versions of Prime95/mprime where the CPU 
type would be detected wrongly. Suggest changing to v22.1 (or v22.5) as v22 
has much better CPU type/speed detection code - automatic, too, so you can't 
mis-set it :-)

BTW my dual 1 GHz PIII system, with a similar work loading to yours, has 
RollingAverage=1128 on the CPU running LL tests.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Extrange assignament

2002-07-11 Thread Brian J. Beesley

On Thursday 11 July 2002 00:00, you wrote:
>
> Yesterday, Primenete did assigned to one of the computers that I manage,
> a exponent in the 8 million rank, for first test, not for doublecheck.
> But in the Status page, this rank is complete for first test ...
>
> How is it possible?
>
> The factorization level is 1, if it help ...

I've had a few of these (as DC assignments).

Factorization level 1 is WRONG. The correct value can be obtained from the 
database files - you need nofactor.zip (and the unpacking tool decomp). If 
you get one as obviously wrong as this, the best thing to do is to manually 
change the factoring depth in worktodo.ini. 64 is about right for 8 million 
range exponents.
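If it helps, the edit is a single field. If I remember the v22-era syntax correctly, a worktodo.ini entry is `Keyword=exponent,bits[,pminus1_done]`, so a corrected line would look something like this (the exponent here is a made-up placeholder):

```ini
DoubleCheck=8123456,64,0
```

Stop the client before editing so the file isn't overwritten from memory.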

Regards
Brian Beesley
>
>
> Saludos,
>
> Ignacio Larrosa Cañestro
> A Coruña (España)
> ¡¡NUEVA DIRECCIÓN!!:
> [EMAIL PROTECTED]
>
> _
> Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
> Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Prime95 as an NT/2000/XP service

2002-06-26 Thread Brian J. Beesley

On Wednesday 26 June 2002 04:46, George Woltman wrote:
> I've spent a few days fighting with Windows and MFC to make Prime95 run as
> a true
> Windows NT Service.  That is, when you check the "Start at Bootup" menu
> choice, prime95 is installed as a service.  At next bootup it starts before
> anyone logs in.
> At first login, the familiar red icon appears in the system tray, and
> prime95 keeps
> running even when you log off.
>
> This question is for the serious NT sysadmins out there:  Given that
> Microsoft strongly discourages NT services having a GUI interface, are
> there any problems or security issues I need to worry about?  A GUI service
> must run under the Local
> System account.  You can still use Hide Icon to make the service virtually
> invisible to all users.

I can't be accused of being a "serious" NT sysadmin. But, with considerable 
experience in general system & network security, I think running _anything_ 
under the local system account is Best Avoided (tm). Unless (a) you trust 
your local users and (b) the process(es) never make or respond to network 
connections. The reason for (b) is pretty obvious; my concerns about (a) are 
based on the fact that some weakness in the application or the libraries it 
calls usually make it possible for a local user to leverage privileges.

A great deal of development work in the *n*x environment is being put into 
making as little as possible run as root, e.g. in OpenSSH v3.3 (released this 
week) the daemon runs all the network code in user space (as the logged-on 
user, or an unprivileged "dummy" user until login is complete) rather than as 
root. That way, even if anyone does penetrate the armour, they don't have 
root privilege, so the damage they can do to the system is limited.
>
> Even if there are problems, I think this will work well for naive home
> users running
> WinXP with multiple user accounts. 

Yes, in a home situation the risk should be acceptable - provided network 
access is strictly controlled through a properly configured personal 
firewall. (This is of course an absolute necessity in any case if you have a 
permanent network connection e.g. cable modem or xDSL connection.)

But, in an office situation (where Prime95/NTPrime is soaking up waste cycles 
on an office server) I'd be somewhat dubious. 

> My hope is to eliminate the NTsetup and
> NTPrime programs with this feature.

Umm - what is the problem with keeping these? I thought the code was pretty 
well integrated & the extra compilation time cannot be crippling?

Really dumb question - why does a service need a GUI interface at all? mprime 
manages without one! The only real problem with "mprime -m" is that, if you 
change something in the .ini files, you have to stop & restart the service to 
persuade mprime to re-read the .ini files.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: mprime crashes but Prime95 doesn't

2002-06-25 Thread Brian J. Beesley

On Monday 24 June 2002 22:20, Pierre Abbat wrote:
> I got a new laptop a few weeks ago. I promptly loaded Linux on it and ran
> into hardware problems. I copied statically linked mprime to it and ran it;
> it crashed in a few minutes. I sent it back.
>
> The technician knows nothing about Linux. He loaded Windows and ran some
> test, which found nothing. I told him about mprime, so he loaded Prime95 on
> it and ran it for at least a day with no errors.
>
> The processor is an Athlon. The kernel I ran on it was 2.4.17 from a
> Sourcemage install. Is there any problem with that version?

Not so far as I know. (I tend to use stock Red Hat images)

However there is a major difference in the way in which Windows & linux 
allocate memory. Windows from the top, linux from the bottom. If you have 
significantly more memory than the minimum required (which is probably only 
around 32 MBytes) then mprime may be crashing because it's hitting bad 
memory, which Windows is not using. Try running memtest86 or MEMT25.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Re: Mersenne: Slow Pentium 4 question - status report

2002-06-15 Thread Brian J. Beesley

On Thursday 13 June 2002 23:56, Bockhorst, Roland P HQISEC wrote:
> Gentlemen;
> Thank you for your help.
> My P4 is successfully working on its second 15,000,000 range number.
> The first number was found to be not prime in about three months full time.
> It should have taken a month, hence this discussion.

Umm. I find my P4 1.8A takes just one week to process a LL test on an 
exponent just under 15 million. Running 24x7 of course!
>
> WCPUID recognizes my P4 and its having SSE and SSE2 instructions.
> Prime95V22.3 doesn't.

This is very odd ...
>
> >Could the CPU be overheating?
>
> This is a good idea to pursue.
>
The P4 thermal slowdown is easy to diagnose - the speed shown by the 
diagnostic output from Prime95 will vary depending on the ambient 
temperature. Also, if you stop Prime95, wait a few minutes and continue again 
(using the Test menu), the CPU will cool down & the diagnostic output will 
show it starts very fast then slows down over a minute or two as the system 
warms up again.

If you do have this problem, there are now available some very good P4 CPU 
coolers, and they aren't necessarily noisier than the standard Intel part. 

A good tip with Intel "retail pack" CPUs is to carefully remove the thermal 
goo which is stuck to the bottom of the supplied heatsink - carefully scrape 
the bulk off with a soft edge e.g. a plastic credit card, NOT a knife which 
will scratch the heatsink mating surface; then remove the residue with white 
spirit, then methylated spirit. Allow to dry then apply a good thermal 
compound like Arctic Silver II in accordance with the instructions on the web 
site. This will, on its own, reduce the CPU die temperature by around 5C.
>
> >Win95 unless another SSE/SSE2 ... timesharing
>
> Good point
>
> >Surely the problem is that a system with extra registers will use more
>
> stack
>
> >when the "save all registers" opcode is executed. If so, the OS need not
> >support SSE/SSE2 directly - but there might be a problem with crashing
> >through the stack base.
>
> hum ... the error was "illegal instruction"

A stack overflow will usually cause system hang or spontaneous reboot.

> . I recently bought a new license for Win98.

That's fine then. But where from? My understanding is that Win 9x licences 
are no longer available ... the official MS line seems to be that you can buy 
a licence for ME or XP Home Edition but install Win 9x provided that the copy 
of ME or XP HE is not installed on another system simultaneously and on the 
understanding that MS will do nothing to support you technically. (I believe 
Windows Update still works with Win 98 but I don't know when the last update 
to any '98-specific component was posted.)

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Slow Pentium 4 question - status report

2002-06-14 Thread Brian J. Beesley

Hi,

Check out http://www.theregister.co.uk/content/archive/25085.html

Microsoft do seem to chop & change as to some of the more ridiculous 
extensions of what their EULA actually says. Some of us are just happier to 
sidestep the issue altogether.

My employer's policy is to permanently remove all software or physically 
remove & destroy the hard disk drive before a system is passed to any third 
party, even if it's being removed directly to a landfill.

Regards
Brian Beesley

On Friday 14 June 2002 05:09, Brian Dessent wrote:
> John R Pierce wrote:
> > I'd like to know the source of this story ... Sounds like urban folklore
> > to me... The OEM Windows license is bundled with and tied to the hardware
> > and automatically transfers with it.   Now, if these recycling projects
> > were taking bulk OEM CD's purchased off the grey market, and bundling
> > them with recycled hardware without having a redistribution agreement,
> > thats another story entirely.   Ditto, if the EULA for the original
> > system was lost and not kept with it when the system was recycled...
>
> I think the problem stems from the fact that most donated PCs with MS
> OSes do not arrive with the full documentation of the original OS
> license.  Organizations who accept and use these PCs without all the
> proper paperwork could technically be found in violation by MS or its
> BSA goons.  Hence they are hesitant to accept any donations without all
> the paperwork.  Since the OS is tied to the machine, the donating
> company cannot reuse the OS license if they donate the machine.  This
> further complicates things since the donating company must prove that
> they have transferred all the licensing paperwork, unless they wipe the
> drives of every machine.  If the donating party does not buy new
> licenses for the machines that replace the donated ones, or they fail to
> transfer/destroy all of the bits relating to the donated machines, then
> they are in violation as well.
>
> By making it hard on both the donating and receiving parties, MS ends up
> selling new licenses to everyone, which is probably a contributing
> factor to why they're stinky filthy rich.
>
> Brian
>
> From 
>
> Q. What does the donor need to do to donate a PC with the operating
> system?
>
> A. PC owners have to transfer their license rights to the operating
> system to your school along with the PC. They may do so as specified in
> their End-User License Agreement (received at the time of purchase) as
> part of a permanent sale or transfer of the PC.
>
> Q. What if the donor can't find the backup CDs, End-Use License
> Agreement, End-User manual and the Certificate of Authenticity? Can they
> still donate the PC and operating system?
>
> A. Microsoft recommends that educational institutions only accept
> computer donations that are accompanied by proper operating system
> documentation. If the donor cannot provide this documentation, it is
> recommended that you decline the donated PC(s).
> _
> Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
> Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Puzzle

2002-06-11 Thread Brian J. Beesley

On Tuesday 11 June 2002 06:13, Daran wrote:

[... snip ... interesting but non-contentious]

> Very noticeable is the proportion of exponents - in all three ranges - which
> are not getting a stage two effort at all.  26 out of the 85 exponents between
> 795 and 796000, 24 out of 54 between 1550 and 15505000, 35 out of
> 57 between 33219000 and 33223000.  I do not believe that large numbers of
> P4 systems are being shipped with just 8MB of RAM!

This is true. However the philosophy of the project, correctly in my view, is 
that the software should not cause noticeable deterioration in the 
performance of a system when it is being run in the background to normal work.

If the software were to allow itself to use a substantial portion of the 
system memory (even temporarily during P-1 stage 2 only), anyone using a 
system monitoring tool to find out why the system performance had dropped 
would no doubt discover a process with a large working set, blame it 
(correctly), and the project would lose a participant. Or possibly many 
participants, depending on how far and by what route the news spreads.

Yes, I know it's possible to tinker with the different day/night memory 
allowances, even to schedule the program not to run at times when it might 
get in the way. But these sorts of things can only be configured locally; 
working times vary, as does the convention as to whether the system clock is 
set to local time or GMT/UTC.

The default has to be "safe"; IMO the current default memory allowance of 8MB 
is entirely reasonable, even though it causes P-1 to run stage 1 only for any 
realistic assignment, and even though _new_ systems are usually delivered 
with at least 256 MB RAM.

Running P-1 on a "10 million digit" exponent requires in excess of 64 MB 
memory to be allocated in order to run stage 2 at all. That's a lot to ask as 
a default!
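
A rough sanity check of that figure (my own back-of-envelope numbers, not 
taken from the Prime95 source - the FFT run length and temporary count here 
are assumptions):

```python
# Back-of-envelope estimate of P-1 stage 2 memory for a "10 million digit"
# exponent.  All figures below are illustrative assumptions.
exponent = 33_219_281            # around the smallest 10-million-digit exponent
fft_length = 2 * 1024 * 1024     # plausible FFT run length at this size
bytes_per_temp = fft_length * 8  # one IEEE double per FFT element
temps_needed = 5                 # stage 2 keeps several residues resident at once
total_mb = temps_needed * bytes_per_temp / 2**20
print(f"{bytes_per_temp // 2**20} MB per temporary, about {total_mb:.0f} MB total")
```

With these assumptions each temporary is 16 MB, and five of them already 
exceed the 64 MB mentioned above.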

BTW you may have noticed that, when a system runs P-1 stage 1 only, it runs 
to a higher limit than the stage 1 limit used for neighbouring exponents 
which have had a P-1 run of both stages. The P-1 factoring run by these 
systems is still useful to the project!

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Puzzle

2002-06-09 Thread Brian J. Beesley

On Sunday 09 June 2002 08:22, Daran wrote:
> I'm currently concentrating exclusively on P-1 work.  The primenet server
> doesn't support this as a dedicated work type, so my procedure is to
> reserve some DC exponents, immediately unreserve any which have the P-1 bit
> already set, P-1 test the rest, then unreserve them without doing any LL
> testing.
>
> One problem I have discovered is that the server doesn't always 'recognise'
> that a P-1 result has been returned.  It can take several days before my
> individual account report removes the * indicating that factoring work is
> necessary.  In these cases I hold on to the exponent until the result is
> recognised in order to stop the subsequent 'owner' from doing a redundant
> P-1 check.  In other cases, the P-1 result is recognised imediately.

Though I'm not looking for P-1 specifically, I have seen something similar on 
a large number of occasions.

My current assignment report - the DC part of which follows - contains a 
number of examples. 

 6493831 D   64   3.3  33.8  93.8  07-Jun-02 07:25  06-Jun-02 06:02  cabbage 0 v18
 6530189 D   64   2.3  27.8  64.8  08-Jun-02 06:02  07-Jun-02 06:02  nessus-b  266 v19/v20
 6672569 D   64  31.3  13.8  73.8  14-May-02 07:43  09-May-02 06:05  cabbage 0 v18
 6881321 D   64   6.3  23.8  63.8  06-Jun-02 06:06  03-Jun-02 06:06  nessus-j  332 v19/v20
 6972001 D*  64   0.3  14.7  60.7   09-Jun-02 04:02  caterpillar   654 v19/v20
 7009609 D   63   3949088  24.3   9.8  64.8  07-Jun-02 06:04  16-May-02 06:06  nessus-m  266 v19/v20
 7068857 D   63   5887578  25.3   0.8  60.8  06-Jun-02 06:06  15-May-02 06:05  nessus-j  332 v19/v20
 7076669 D*  64   5617988  30.3   3.8  63.8  07-Jun-02 06:02  10-May-02 06:04  nessus-b  266 v19/v20
 7099163 D   63   2693359  14.3  11.8  65.8  09-Jun-02 06:26  26-May-02 06:43  T4070 366 v19/v20
 7908091 D   64   3080191  17.7  15.4  60.4  08-Jun-02 21:12  22-May-02 19:17  broccoli  400 v19/v20
 7937717 D   64   2359295  10.5   7.6  60.6  09-Jun-02 02:04  30-May-02 00:30  caterpillar   654 v19/v20
 7938407 D   64   1310720  10.3  12.3  60.3  08-Jun-02 20:29  30-May-02 04:16  vision.artb   495 v19/v20
 7940447 D   64   9.8  16.8  65.8  09-Jun-02 06:24  30-May-02 17:39  Simon1   1002 v19/v20
 7951049 D   64   65536  7.5  10.7  60.7  09-Jun-02 04:31  02-Jun-02 00:40  rhubarb   697 v19/v20

6972001 and 7076669 are "starred" although the "fact bits" column seems to 
indicate that both trial factoring to 2^63 and P-1 have been run. This is 
_definitely_ true for P-1 on 7076669, the fact is recorded on my system in 
both results.txt & prime.log. So far as 6972001 is concerned, the database 
(dated 2nd June) indicates P-1 has been run to a reasonable depth but trial 
factoring has only been done through 2^62. My system definitely won't have 
done any more trial factoring yet, let alone reported anything, since that 
system is set up with v22 defaults i.e. defer factoring on new assignments 
until they reach the head of the queue.

7009609, 7068857 & 7099163 are not "starred" although the "fact bits" column 
is one short. The "nofactor" & "Pminus1" databases (dated 2nd June) give 
these all trial factored through 2^62 & Pminus1 checked to B1=35000, 
B2=297500 (or higher). The P-1 limits seem sensible for DC assignments, but 
shouldn't these have been trial factored through 2^63 like most of the other 
exponents in this range?
>
> Currently, I have nine exponents 'warehoused' whose P-1 results have been
> returned but not recognised, the oldest was done on May 14, which is rather
> longer than I would expect.  There's no question that the server has
> correctly received the result, because it is contained in a recent version
> of the pminus1.zip file downloaded this morning along with another four
> exponents 'warehoused' from May 20.  Three more, whose results were
> returned on June 3 have not yet been recorded in this file.
>
> There is an entry in the file for the last of the nine, returned on June 5,
> but the limits are much smaller than the test I did.  The most likely
> explanation is this is a previous owner's P-1 result which wasn't
> recognised before the exponent was given to me.

I wonder what happens if you're working like Daran and someone returns a P-1 
result "independently" (either working outside PrimeNet assignments, or 
perhaps letting an assigned exponent expire but then reporting results); if 
PrimeNet gets two P-1 results for the same exponent, which does it keep?

This is not trivial; e.g. if you get "no factors, B1=10, B2=100" and 
"no factors, B1=20, B2=20" there might still be a factor which would 
be found if you ran with B1=20, B2=100. Also, if the database says 
that P-1 stage 1 only has been run (probably due to memory constraints on the 
system it ran on), at what point is 

Mersenne: Refinement of Factoring

2002-06-08 Thread Brian J. Beesley

Hi,

I was thinking about how we could improve the productivity of the project by 
reducing the proportion of candidates requiring LL testing, and had the 
following idea.

P-1 factoring is useful when applied to Mersenne numbers because M(p)-1 is 
easily factored: M(p)-1 = (2^p-1) -1 = 2^p-2 = 2.(2^(p-1)-1)

The idea of P-1 factoring (stage 1) is to compute X = 2^(k!) mod N where k is 
the B1 limit and N is the number to be factored. Now compute GCD(X-1,N); with 
luck (if there is a factor F of N for which F-1 is k-smooth) the result will 
be F rather than 1.

However, _for Mersenne numbers_, the principle can be extended as follows:

M(p)-M(q) = 2^q.(2^(p-q)-1)

Having computed X as above, we can now compute successively

GCD(X-1,N)
GCD(X-3,N)
GCD(X-7,N)
...
(until we get bored, or run out of q
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers
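
For what it's worth, here is a toy sketch of the scheme in the (truncated) 
message above - my own illustration, nothing like the real Prime95 code, 
which uses prime-power limits up to B1 rather than a bare factorial:

```python
# Toy sketch of P-1 stage 1 on N = M(p), plus the proposed extra GCDs
# against X - M(q) for small q.  q = 1 (M(1) = 1) is the ordinary P-1 GCD.
import math

def p_minus_1_extended(p, k, max_q=10):
    N = (1 << p) - 1                 # M(p), the number to factor
    X = 2
    for m in range(2, k + 1):        # X = 2^(k!) mod N
        X = pow(X, m, N)
    found = {}
    for q in range(1, max_q + 1):
        g = math.gcd(X - ((1 << q) - 1), N)
        if 1 < g < N:
            found[q] = g
    return found

# M(11) = 2047 = 23 * 89.  With k = 10 the ordinary GCD (q = 1) finds
# nothing, but the extra test against M(7) = 127 catches the factor 23:
print(p_minus_1_extended(11, 10))    # → {7: 23}
```

Here X = 2^(10!) mod 2047 = 1024, and gcd(1024 - 127, 2047) = 23; the toy 
k! exponent only works on such tiny factors, but it shows the mechanism.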



Re: Mersenne: Quicker Multiplying ?

2002-05-28 Thread Brian J. Beesley

On Tuesday 28 May 2002 02:43, you wrote:
> 6 is -2 mod 8
> 6*6 = 36
> 36 = -4 mod 8
> 2^2 = 4
>
> if the mod of the number represented as a negative is much less than the positive,
> could we square the negative and save some time ?

Sure we could.

However we would save 1 bit 25% of the time, 2 bits 12.5% of the time, 3 bits 
6.25% of the time, and so on; summing that series, the average saving is 
about 1 bit.

Out of several million.
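
(A quick Monte Carlo check of that expectation - my own throwaway Python, 
treating residues as random n-bit numbers:)

```python
# Estimate the average number of bits saved by using the "negative"
# representation 2^n - x whenever it is shorter than x itself.
import random

random.seed(1)
n, trials = 64, 100_000
saved = 0
for _ in range(trials):
    x = random.randrange(1, 1 << n)
    neg = (1 << n) - x               # -x mod 2^n
    saved += max(0, x.bit_length() - neg.bit_length())
print(f"average bits saved: {saved / trials:.2f}")
```

The average comes out around one bit, against a residue of several million 
bits.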

In fact the _practical_ saving is zero, as we aren't ever going to save 
enough bits to justify using a shorter FFT run length, even just for one 
iteration. 

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Roundoff Checking

2002-05-25 Thread Brian J. Beesley

On Saturday 25 May 2002 22:19, you wrote:

> I noticed that v22.2 and v22.3 automatically do roundoff checking every
> iteration for any exponent close enough to the FFT limit.  Is there any
> reason to be concerned about the possibility of roundoff error for CPUs
> that aren't P4s?  

I don't think so. We are looking at the x87 (non-SSE2) code and may make some 
minor adjustments to the FFT run length crossover points, but there is a lot 
of "experimental evidence" relating to non-SSE2 code; the adjustments are 
probably as likely to be up as down.

Please remember that the crossover points are a compromise between wasting 
time by using an excessive FFT run length and wasting time due to runs 
failing (or needing extra checking) due to using a FFT run length which is 
really too short. There is no completely safe figure.

> What about if the non-P4s are only doing double checks?

This doesn't really matter. Double checks are independent of the first test. 
Don't assume that the first test was correct... if you make that assumption, 
what's the point in running a double-check at all?

> Since numbers of double checking size have been checked by non-P4s for
> years without any problems that I've heard about.

The point is, if you do get an excess roundoff error that makes the run go 
bad, the double-check (when it is eventually done) will fail, and the 
exponent will have to be tested again. There is essentially no possibility of 
the project missing a prime as a consequence of this. However, if you can 
detect the likelihood of there being excess roundoff errors at the time 
they're occurring, you can save time which would be wasted if you continue a 
run which has already gone wrong. This also virtually eliminates the 
possibility of you, personally, missing a prime due to a crossover being too 
aggressive and therefore falling victim to an undetected excess roundoff 
error.

We simply don't know if there are extra problems occurring very close to the 
existing non-SSE2 crossover points as any "genuine" errors caused by the 
crossover points being too aggressive are overwhelmed by errors caused by 
"random" hardware/software glitches. However it has become apparent that the 
SSE2 crossover points were initially set too aggressively. We do have one 
documented instance of where a roundoff error of 0.59375 occurred (aliased to 
0.40625, therefore causing a run to go bad) without there being any other 
instances of roundoff errors between 0.40625 & 0.5. This is probably a very, 
very rare event, but the fact that it has happened at all has made us more 
wary.

v22.3 has a new error checking method which will _correct_ any run which is 
going wrong by running the iteration where the excess roundoff error occurs 
in a slow but safe mode. This of course depends on the excess roundoff error 
being detected. If you have roundoff error checking disabled then you miss 
the chance 127 times out of 128.

The roundoff error rises very rapidly with the exponent size - somewhere 
round about the 25th power. This is why it's only worthwhile having roundoff 
error checking every iteration in the top 0.5% or so of the exponent range 
for any particular run length - that 0.5% makes a lot more than 10% 
difference to the expected maximum roundoff error.
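
To illustrate what the check actually measures (a toy sketch in Python, my 
own code, nothing like Prime95's optimised assembler): every output 
coefficient of the iteration's FFT multiplication should be an integer, and 
the reported roundoff error is the largest distance of any coefficient from 
the nearest integer - past 0.5, rounding gives the wrong integer and the run 
is ruined.

```python
# Toy FFT squaring: measure the maximum round-off error, the quantity
# monitored by the round-off check.  Illustration only.
import cmath
import random

def fft(a, invert=False):
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for j in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * j / n)
        out[j] = even[j] + w * odd[j]
        out[j + n // 2] = even[j] - w * odd[j]
    return out

def square_digits(digits):
    """Square a number given as base-2^16 digits; return the raw
    (uncarried) product coefficients, which should all be integers."""
    n = 2 * len(digits)
    fa = fft([complex(d) for d in digits] + [0j] * (n - len(digits)))
    back = fft([x * x for x in fa], invert=True)
    return [x.real / n for x in back]        # normalise the inverse FFT

random.seed(2)
digits = [random.randrange(1 << 16) for _ in range(64)]
coeffs = square_digits(digits)
maxerr = max(abs(c - round(c)) for c in coeffs)
print(f"maximum round-off error: {maxerr:.2e}")
```

At this size the error is tiny; pushing the digit size or transform length 
up drives it toward the fatal 0.5.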

Why not just set the crossovers lower? Well, this would work, but running 
with roundoff checking enabled is faster than running with the next bigger 
FFT run length but without roundoff checking.

Another consequence of having roundoff error checking enabled is that random 
hardware glitches (or software glitches due to misbehaviour by device drivers 
etc. unrelated to Prime95) will be detected much more consistently.

> Very specifically, I'm
> wondering if I should be ok if I use the "undocumented" setting in
> prime.ini to turn off roundoff checking every iteration for when my Pentium
> 200 MHz double checks 6502049 ( the next FFT size is at 652 ).  Thanks.

Up to you. My feeling is that the new default behaviour is right. However 
per-iteration roundoff checking probably causes more of a performance hit on 
Pentium architecture than on PPro or P4 due to the relative shortage of 
registers.

Another point here, if people using v22.3+ leave the default behaviour, we 
will get a lot better evidence as to the actual behaviour in the critical 
region just below the run length crossovers; we will be able to feed this 
back in the form of revised crossovers and/or auto roundoff error check range 
limit. 

QA work should prevent gross errors, but the amount of data which QA 
volunteers can process is small compared to the total throughput of the 
project. We should have avoided the problems with the aggressive SSE2 
crossovers, but QA volunteers didn't have P4 systems at the time the code was 
introduced.

Regards
Brian Beesley

_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers

Re: Mersenne: This supercomputer is cool

2002-05-22 Thread Brian J. Beesley

On Tuesday 21 May 2002 16:21, [EMAIL PROTECTED] wrote:
> http://www.cnn.com/2002/TECH/industry/05/21/supercomputing.future.idg/index.html
>
> The theme of reducing transistor count without sacrificing much performance
> is an interesting one. 

This is indeed interesting. The problem seems to be that the sustained 
floating-point performance of Transmeta chips seems to be at best only 
similar to that of PIII or Athlon chips when scaled by the power consumption 
of the chip itself. For our purposes, the transistor-heavy SSE2 unit on the 
P4 gives a much larger performance improvement than the resulting increase in 
power consumption. Add on the power consumed by support devices (chipset, 
memory etc) and the Transmeta design doesn't look too effective.

In a situation where _on average_ only around 10% of the actual peak 
performance is required, the Transmeta design has considerable advantage, due 
to its capability to idle on very low current.

From the ecological point of view, one could easily make gains by using the 
"waste" heat from processors - ability to "pipe" it into building heating 
systems (at least the "hot" side of a heat pump) would be more useful than 
dispersing it locally as hot air.

> Some obvious possibilities I can think of, related
> to the
> way typical CPUs do hardware arithmetic:
>
> 1) For floating-point add, have a dedicated unit to do exponent extraction
> and mantissa shift count generation, then do the actual add in the integer
> adder (there are often multiple integer adders on the chip, so this need
> not stall genuine integer adds that are occurring at the same time).
>
> 2) Similarly, use the integer multiplier to multiply the 2 floating
> mantissas in an FMUL together. For IEEE doubles this would need to generate
> the upper 53 bits of a 106-bit product, but many CPUs already can generate
> a full 128-bit integer product (this is useful for cryptographic
> applications, among other things), so one could use the same hardware for
> both.

Most of the transistors in a typical modern CPU die are associated with cache 
& multiple parallel execution units. Cutting out one execution unit - one of 
the simpler ones at that - probably wouldn't save much power.
>
> 3) Have the compiler look for operations that can be streamlined at the
> hardware level. For example, a very common operation sequence in doing
> Fourier (and other) transforms is the pair
>
> a = x + y
> b = x - y .
>
> If these are floating-point operands, one would need to do the exponent
> extract and mantissa shift of x and y just once, and then do an integer
> add and subtract on the aligned mantissa pair. It might even be possible
> to do a pairwise integer add/sub cheaper than 2 independent operations
> at the hardware level (for instance, the 2 operators need only be loaded
> once, if the hardware permits multiple operations without intervening loads
> and stores (and yes, this does run counter to the typical load/store RISC
> paradigm).

This is a Good Idea and would cost very little extra silicon. There is 
however a synchronization problem resulting from always doing add & subtract 
in parallel (& discarding the extra result when only one is needed). This is 
because (when operands have the same sign) renormalizing after addition 
requires at most one bit shift in the mantissa, whereas after subtraction one 
may require a large number of bit shifts; indeed we may even end up with a 
zero result. Fixing this synchronization problem requires either extra silicon 
in the execution unit, or a more complex pipeline.
>
> 4) Emulate complex functions, rather than adding hardware to support them.
> For instance, square root and divide can both be done using just multiply
> and add by way of a Newtonian-style iterative procedure. The downside is
> that this generally requires one to sacrifice full compliance with the IEEE
> standard, but hardware manufacturers have long played this kind of game,
> anyway - offer full compliance in some form (possibly even via software
> emulation), but relax it for various fast arithmetic operations, as needed.

Yes. That's why FP divide is so slow on Intel chips. 
>
> 5) Use a smallish register set (perhaps even use a single general-purpose
> register set for both integer and floating data) and memory caches, but
> support a
> variety of memory prefetch operations to hide the resulting main memory
> latency
> insofar as possible.

I get suspicious here. Extra "working" registers enable the compiler to 
generate efficient code easily, and (given that memory busses are so much 
slower than internal pipelines) bigger caches always repay handsomely in 
terms of system performance.

(Incidentally there may be an impact on Prime95/mprime here. The new Intel 
P4-based Celerons have 128KB L2 cache; I believe the SSE2 code is optimized 
for 256KB L2 cache, so running SSE2 on P4 Celerons may be considerably less 
efficient than it might be.)
>
> Others can surely add to this list, 

Re: Mersenne: electrical energy needed to run a LL-Test?

2002-04-28 Thread Brian J. Beesley

On Saturday 27 April 2002 21:26, Paul Leyland wrote:
> [... snip ...]
> They are still doing sterling service as fan heaters to keep my study
> warm (it's not easy living at a latitude of 52 degrees north ;-) and
> happen to factor integers by ECM while doing so.  My 21-inch Hitachi
> monitor cost me the grand sum of 4 GBP (approximately 6 USD) and also
> keeps my study warm when it's switched on.

Oh sure. (Actually I'm living on a windy coast at 55N, furthermore my house 
has electric storage heating so "waste" heat from computers during the winter 
months has a very low cost).
>
> My moral is: don't over look the benefits of the "waste" heat if you
> live in a climate where you have to spend energy to keep warm.   If you
> live somewhere which requires the expenditure of energy to keep cool,
> the balance may lie elsewhere.

Indeed. I have to switch off some (less productive) systems during the summer 
just to keep my indoor environment reasonably comfortable. I don't have air 
conditioning - the climate hereabouts doesn't often warrant it - but running 
AC to dump waste heat from computers would approximately double the effective 
cost of the energy consumed.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


