Re: Mersenne: /. article

2002-02-26 Thread bjb

On 26 Feb 2002, at 19:46, Henk Stokhorst wrote:

> http://slashdot.org
> 
> factoring breakthrough?
> 
Doesn't look like a breakthrough, although there may be a very 
significant reduction in the amount of work required to factor 
"awkward" numbers.

The implications in terms of public key cryptography look as 
though they could be significant - those "secure" 128-bit cyphers in 
widespread use for e-commerce are starting to look pretty 
transparent, but doubling the number of bits in the key is more than 
sufficient to "defeat" this latest advance.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Assignment balancing (was: Re: Mersenne: Re: Trial Factoring: Back to the math)

2002-02-17 Thread bjb

On 17 Feb 2002, at 17:54, Russel Brooks wrote:

> Am I interpreting this thread correctly?  That more factoring is
> needed?  My climb up the LL top producers is starting to stall
> so maybe it's time to switch to factoring.

I'm quite sure there's no need to panic!

So far as I'm concerned, I've been underdoing factoring - less than 
5% of my contribution. Theoretically the best balance for the 
project as a whole is about 10% of the CPU effort in trial factoring.

The other point here is that we have had two major improvements in 
the efficiency of the LL testing algorithm since the last time trial 
factoring was seriously looked at, so some theoretical work on trial 
factoring is probably due.

Why not try some factoring assignments, if _you_ think it will be 
fun for a change? You can always change back when you get 
bored...
> 
> I know the LL tests drive the FPU hard, what about factoring?

Factoring is a light FPU and memory load. LL (and DC) 
assignments are heavy on both.

Indications to run factoring irrespective of CPU speed:

1) One CPU on a dual processor system. Both processors running 
LL tests hit the memory bus too hard & ruin performance.
3) Some processors (AMD K6, and especially Cyrix 6x86) have 
inefficient FPUs; it is much more effective to have these processors 
running trial factoring.
3) A PIII or Athlon system running very close to the limit of the 
cooling system. Because the load on the FPU imposed by 
factoring is much lower, the power consumption of the CPU will 
reduce if factoring is run instead of LL testing, and the temperature 
will therefore drop significantly.
4) System very short of main memory. Trial factoring uses very 
little memory for data work space.

Counterindications:

It doesn't seem to be sensible to be running trial factoring on a P4. 
The P4 architecture is _very_ efficient at running LL tests, but the 
performance on the instruction mix used in trial factoring is a great 
deal worse (clock for clock) than PII/PIII/Athlon.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Trial Factoring: Back to the math

2002-02-17 Thread bjb

On 16 Feb 2002, at 12:26, Mary Conner wrote:

> Trial factoring is well ahead of LL testing, but the gap is closing.  
> Yesterday was the first day in a long time where the net number of
> checkouts for factoring exceeded those for first time LL's.  That is due
> to the fact that one team is having a factoring challenge among their
> membership due to begin soon, and large numbers of factoring assignments
> are being checked out in preparation.  It isn't a trend that is going to
> persist.  Ongoing, more factoring assignments have to be done than LL
> assignments because factoring eliminates a substantial number of exponents
> from LL testing.

Ah, but, factoring is actually moving ahead anyway in terms of 
CPU years required for LL to catch up - because larger exponents 
take more time to LL test. My impression is that the gap between 
the head of the factoring assignment queue and the head of the LL 
testing assignment queue has remained fairly steady at 4 to 5 
million for at least three years.

Nevertheless I've just tripled my factoring effort from a single PIII-
500 by switching one CPU on a dual processor PIII-1000 system 
from ECM to factoring. The other CPU is of course running LL tests.
> 
> The situation has improved recently from two earlier surges in LL testing.  
> One accompanied the discovery of M39, the other was a post-Christmas surge
> (GIMPS participants with shiny new fast computers replacing slower ones,
> presumably).  Many of those who joined up after M39 have dropped out, and
> the crest of the wave of their expiries has passed (over 150 assignments
> started on Nov. 14th expired in one day, for instance).  For awhile, the
> balance was around 300 net LL's checked out to 150 factoring assignments.  
> That has narrowed to around 250 to 200, and sometimes comes fairly even,
> although to keep up, GIMPS needs more factoring assignments done than
> LL's.

Another point, surely, is that factoring assignments "suddenly 
slowed down" when we reached the changeover point between 65 
bit and 66 bit trial factoring depth in the low 18 millions.
> 
> The cutoffs for those computers set to "do the work that makes the most
> sense" can be adjusted to put more computers on trial factoring work if LL
> testing starts to overrun trial factoring.  The cutoffs could also move
> some computers doing first time LL onto doing double checks, although too
> much would have the leading edge of DC's running into the trailing edge of
> LL's, and I understand that PrimeNet has difficulty with that situation.  

Eh? The only PrimeNet problem I'm aware of in this respect is 
that it can't cope with having LL and DC assignments for the same 
exponent out at the same time.

> Right now DC checkouts are going very slowly, less than 100 a day most
> days.

Well, some of us try - personally I have several systems which 
would be running LL assignments if left to "makes most sense" 
running DCs instead. 

The difficulty here (it's purely a matter of perception, not a technical 
problem) is that, by pushing up the number of systems that get DC 
assignments by default, we risk alienating new users who find that 
they're not being given "exciting" assignments, and those with 
slower systems who find that even DC assignments are taking well 
over a month.
> 
> Something that would likely help a lot would be to add factoring
> capability and PrimeNet access code to those programs that don't yet have
> it so that more computers can reasonably participate.   

Yes. There are also a lot of non-Intel un*x systems out there, most 
of which would be happier with DC than LL assignments, that we 
could probably attract if we had an automatic interface for obtaining 
assignments from PrimeNet and submitting results of completed 
tests. At least two reasonably efficient and reliable clients (Mlucas 
and Glucas) are already available for a good range of host 
systems; I'm sure the perceived complications of using the manual 
assignment forms are responsible for at least some users not 
participating.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Trial Factoring: Back to the math

2002-02-16 Thread bjb

On 16 Feb 2002, at 15:42, George Woltman wrote:

> To put it in perspective, let's say you are looking for 64 bit factors
> of a 10,000,000 bit number.
> 
> The basic algorithm is:
>  Multiply 156,250 trial factors together to form a 10,000,000 bit 
> number.
>  do {
>  Multiply next 156,250 trial factors making 2nd 10M bit number.
>  Multiply the two 10,000,000 bit numbers together mod 2^P-1
>  } while there are more 64-bit factors
>  GCD (10,000,000 bit product, 2^P-1)

My suggested modification groups things a bit differently: by and 
large we're working in powers of 2, so the first step of the inner loop 
would go something like

Multiply 131072 64-bit numbers together in pairs
Multiply the resulting 65536 128-bit products together in pairs
...
Multiply the resulting four 2097152-bit products together in pairs
Multiply the resulting two 4194304-bit products together, giving 
one 8388608-bit product

Repeat the above for the next block of 131072 64-bit numbers

Multiply the resulting two 10M-bit products together and reduce 
modulo 2^P-1

etc. etc.

So for each block of 131072 numbers we have
65536 64-bit multiplications
32768 128-bit multiplications
16384 256-bit multiplications
8192 512-bit multiplications
4096 1024-bit multiplications
2048 2048-bit multiplications
1024 4096-bit multiplications
512 8192-bit multiplications
256 16384-bit multiplications
128 32768-bit multiplications
64 65536-bit multiplications
32 131072-bit multiplications
16 262144-bit multiplications
8 524288-bit multiplications
4 1048576-bit multiplications
2 2097152-bit multiplications
1 4194304-bit multiplication
and one 8388608 x 10M bit multiplication modulo 2^p-1

There seems to be about the same amount of work in each level; 
nevertheless the lower levels should be faster, since even FFT is 
O(n log n) and we can probably do better than FFT for the very 
smallest products.
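
To make the grouping concrete, here's a rough Python sketch of the 
blocked pairwise scheme (Python bignums stand in for the real FFT 
multiplication, and the exponent, block size and factor list are toy 
values chosen purely for illustration):

from math import gcd

def block_product(factors):
    # Multiply a block of candidates together pairwise, level by
    # level, until a single product remains - the tree described above.
    level = list(factors)
    while len(level) > 1:
        if len(level) % 2:
            level.append(1)      # pad odd-sized levels
        level = [level[i] * level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

def accumulate(p, candidates, block_size=8):
    # Fold successive block products into a running product modulo
    # 2^p-1, then take a single GCD at the end to expose any factor.
    m = (1 << p) - 1
    acc = 1
    for i in range(0, len(candidates), block_size):
        acc = (acc * block_product(candidates[i:i + block_size])) % m
    return gcd(acc, m)

# 2^11-1 = 2047 = 23 * 89, so a block containing 23 should show up:
print(accumulate(11, [13, 23, 29, 47, 59, 67, 71, 83]))    # prints 23
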
> 
> Looking at the inner loop the second step uses the fast LL multiply code.
> On my P4-1400 this takes 0.06 seconds or 84 million clocks or (dividing
> by 156,250) 538 clocks per trial factor.  This is quite promising.  I haven't
> measured it but the current code can test one trial factor in about 4000
> clocks.

My scheme has 17 levels so the upper bound of the _average_ 
time per trial factor should be 17 * 538 clocks. This is too darned 
slow but, as I indicated above, the actual execution time should be 
significantly less than that.

But I thought the time to actually test a 64-bit factor - not counting 
sieving overheads - was less than 2000 clocks ??? 
> 
> The stumbling block is how can we quickly do the first step of the inner
> loop - multiplying 156,250 bit trial factors together.  If someone has a
> clever algorithm that can do this in fewer than 3500 clocks per trial factor
> then we may have an improved factoring algorithm!

The other point I was trying to make is that a similar algorithm 
could be used to compute the product of small primes used by P-1 
stage 1 (and maybe stage 2 as well) with a highly significant 
improvement over the current method. There may also be an 
application in ECM.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Trial Factoring: Back to the math

2002-02-16 Thread bjb

On 15 Feb 2002, at 18:35, [EMAIL PROTECTED] wrote:

> The algorithm under discussion does share some superficial
> features with p-1, in that we do a bunch of a big-integer
> modular multiplies followed by a gcd to extract any factors.
> But in fact we have complete control over the factors that
> will be found, by way of the candidates we throw into the
> accumulated product. So it really is equivalent to sieve-based
> trial factoring, the difference being in the way we test the
> candidates.

Ah, now I see clearly what's going on.
> 
> >The other point here is that the GCD is _expensive_. On a 1GHz 
> >P3 or Athlon the GCD at the end of P-1 on exponent ~10^7 takes ~ 
> >10 minutes.
> 
> Ah, but that's (as Rich Schroeppel pointed out) the beauty of
> modular arithmetic. If we only need to do a single gcd, who cares
> if it takes ten minutes or even a few hours?

All right, let's assume that the cost of the GCD is zero, or as close 
to zero as makes no difference.

The difference between the new algorithm and "traditional" trial 
factoring is that, for each possible factor F that survives being 
sieved out, we calculate P(n+1) = (P(n)*F) (mod 2^p-1) and hope 
that, when we have incorporated sufficient possible factors, we find 
GCD(P(n),2^p-1) > 1. The "traditional" method simply calculates 
2^p (mod F) for each F in turn, hoping to find the result 1, which 
indicates that we have found a factor.

So the viability of the new algorithm depends on whether we can 
compute P*F (mod 2^p-1) faster than we can compute 2^p (mod F).
The efficiency of the sieving is irrelevant, because if we can 
efficiently sieve out factors for one algorithm, we can use exactly 
the same sieving technique with the other algorithm.
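
Here's a toy side-by-side of the two computations, again with Python 
bignums and a made-up candidate list (numbers of the form 2kp+1 for 
p = 23) rather than realistic 64-bit factors; the point is only that 
both routes expose the same factor:

from math import gcd

def traditional(p, candidates):
    # Classic trial factoring: compute 2^p mod F for each candidate;
    # F divides 2^p-1 exactly when the residue is 1.
    return [f for f in candidates if pow(2, p, f) == 1]

def accumulated(p, candidates):
    # Proposed scheme: fold every candidate into a running product
    # modulo 2^p-1, then recover a factor with one GCD at the end.
    m = (1 << p) - 1
    acc = 1
    for f in candidates:
        acc = (acc * f) % m
    return gcd(acc, m)

candidates = [47, 139, 277, 461, 599, 691]   # 46k+1 candidates for 2^23-1
print(traditional(23, candidates))           # [47]
print(accumulated(23, candidates))           # 47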

When the average value of P is 2^(p-1), F ~ 2^64 and p ~ 10^7 this 
looks (to say the least) rather improbable. (Though see later in this 
message ...) The point here is that running P-1 stage 1 with 
B1=10^6 takes a while, but (a) there are in general going to be a lot 
more non-trivially-sieved-out factor candidates between e.g. 2^64 
and 2^65 than there are primes < 10^6, and (b) each multiplication 
certainly won't be faster - in fact we will probably be multiplying by 
a double- or triple-length scalar quantity, whereas 10^6 fits very 
easily into a single word.

Now, if we could find a "weak pseudoprimality" test - needs to be 
_much_ faster than Fermat's Test, since computing 3^(f-1) (mod f) 
is going to take longer than computing 2^p (mod f) - we could use 
this to enhance the screening of factors & so speed up trial 
factoring (_any_ implementation of it). Might there be mileage in 
_this_ idea? The test need not be very accurate, if it's fast enough; 
certainly we can afford to take a lot more "false positives" than 
Fermat's Test lets through.
> 
> If it became clear that we could implement this algorithm and
> have it run within factor of no more than 3-4 times slower than
> one-factor-at-a-time sieving for factors <= 64 bits, it would be
> very promising indeed, since it suffers no large performance
> hit for factor candidates above 64 bits.

I still don't see this clearly. Surely there is going to be a "large 
performance hit" when multiplying a very large integer by a scalar 
when the scalar jumps in size from N to N+1 computer words, and 
N is a small number (2 for 32-bit architectures, 1 for 64-bit 
architectures).

> And it would allow
> machines with poor integer multiply (e.g. the Sparc family)
> to join in the factoring work, since most of the work could
> use floating-point arithmetic, irrespective of the size of
> the factors.

Is this _really_ a problem, or is it likely to become one in the 
foreseeable future? Trial factoring is well ahead of LL testing at 
present, and seems to be still pulling further ahead, despite the 
improvement in the relative efficiency of LL testing vs. trial factoring 
with fairly widespread deployment of P4 systems.

I would have thought that, in the event that LL testing did start to 
catch up to trial factoring, the first thing to do would be to reduce 
the initial trial factoring depth by 1, then run P-1 and only then 
proceed to do the last bit of trial factoring. In fact on P4 systems it 
might be more economical to stop trial factoring sooner than that.

...

Enough scepticism; here's a pointer to how thoughts triggered by 
this discussion could help improve the efficiency of other factoring 
methods:

It occurred to me that the operation (P * F) (mod 2^p-1) implicit in 
this proposal - and P-1 stage 1 - is messily unbalanced and could 
probably be improved. Here's a shot at it.

Note that, if F has an upper bound of 2^k, then F(a) * F(b) has an 
upper bound of 2^(2k).

Suppose we build a layered structure as follows. Each layer 
consists of two temporary storage areas R1(L), R2(L) and one flag 
f(L). The temporary storage areas in the bottom layer need to be 
big enough to hold the biggest value we might b
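
Purely as a speculative sketch of the kind of layered accumulator 
being described - an assumption about the construction, not a 
finished scheme - the layers can behave like a binary counter, so 
that every multiplication pairs operands of roughly equal size:

def layered_accumulate(p, factors, levels=24):
    # Each layer holds at most one pending value; when a second value
    # of the same size arrives, the two are multiplied and the product
    # is carried up a layer, keeping each multiplication balanced.
    # "levels" just needs to exceed log2(len(factors)).
    m = (1 << p) - 1
    pending = [None] * levels
    for f in factors:
        carry, level = f, 0
        while pending[level] is not None:
            carry = (pending[level] * carry) % m
            pending[level] = None
            level += 1
        pending[level] = carry
    acc = 1
    for v in pending:            # sweep up whatever is left in each layer
        if v is not None:
            acc = (acc * v) % m
    return acc                   # gcd(result, 2^p-1) then exposes any factor
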

Re: Mersenne: Re: Are problems more likely in the last 1% of a 10,gigadigit LL?

2002-02-15 Thread bjb

On 14 Feb 2002, at 19:44, Michael Vang wrote:

> > I doubt George would be interested in working in a little simple zip
> > routine when saving/reading save files?  It might slow it down too
> much
> > for some folks (although honestly, it takes a fraction of a second to
> > zip using the lowest compression level), but maybe a nice option for
> > those looking to save a bit of space when testing large exponents.
> > Especially if you save interim files or have 2 saved files... the
> space
> > savings would add up quickly.
> 
> My vote goes towards leaving Prime95 just the way it is... I prefer to
> have a program do one thing well rather than have it try to do
> everything under the sun...

My vote too. It's so easy to "manually" compress files with a script 
if _you_ really think it's worthwhile.

FWIW how effective zip is depends on where the exponent is in its 
FFT run length size range. You get significantly better compression 
for exponents near the low limit than for exponents near the high 
limit. The reason should be fairly obvious.

FWIW bzip2 does a significantly better job of compressing save 
files than zip, but at the expense of using several times as many 
CPU cycles.
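
As an example of the kind of "manual" compression script meant 
above - a minimal sketch; the filename pattern is an assumption, so 
adjust it to your own files, and don't compress the file the client 
is actively writing:

import bz2, glob, os

for name in glob.glob("p*.0??"):      # hypothetical interim-file pattern
    with open(name, "rb") as src, bz2.open(name + ".bz2", "wb") as dst:
        dst.write(src.read())         # bzip2-compress the save file
    os.remove(name)                   # keep only the compressed copy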

I'm doing a QA run on exponent 67108763 keeping interim files at 
million iteration intervals, so, by the time I finish, I will have 67 save 
files. The raw size is 14MB per file, but bzip2 reduces them to 
around 9MB. Whether we should be unduly worried about a job 
which takes ~ 1 year to run producing ~ 1GB of data, in these 
days when it's hard to buy new HDDs smaller than 20GB, is a 
matter of opinion.

FWIW Glucas save files are stored in packed integer format; the 
processing overhead seems to be reasonable & the raw files 
produced by Glucas are almost exactly the same size as a bzip2'd 
Prime95/mprime save file for a similar exponent. However even 
bzip2 will not compress Glucas save files further. The reason 
should be obvious - the information has to be stored somewhere, 
and lossy compression techniques are obviously of no use in this 
particular application!
> 
> I personally would like to see Prime95 go the other direction to a
> simpler CLI interface, like Mlucas, but I'm sure I'm in the minority...

My major criticism would be that, at the moment, there are some 
fairly important tweaks you can do by editing the ini files, but not 
through the menu system. It can be confusing as to what needs to 
happen to get a setting changed on a running system - does it 
happen automagically, do you have to stop & continue, or do you 
have to exit & restart the program for the change to take effect?

mprime is much closer to a traditional CLI interface. 
linux rules:)


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Trial Factoring: Back to the math

2002-02-15 Thread bjb

On 14 Feb 2002, at 20:00, [EMAIL PROTECTED] wrote:

> Rich Schroeppel <[EMAIL PROTECTED]> writes:
> 
> <<<
> The cost analysis of trial factoring by GCDs of 2^P-1 with the
> product of many small candidate divisors ignores an important
> optimization: All the candidates can be multiplied together
> mod 2^P-1, and ONLY ONE GCD NEEDS TO BE DONE. The major cost is
> probably the multiplications. Reduction mod 2^P-1 is particularly
> fast, but the same principle applies to random ultra-large targets.

Umm. Can someone please reassure me that we're not re-inventing 
P-1 stage 1?

The other point here is that the GCD is _expensive_. On a 1GHz 
P3 or Athlon the GCD at the end of P-1 on exponent ~10^7 takes ~ 
10 minutes. In that time you can try ~ 10^9 63-bit factors, even 
ignoring those which fall through the sieve.

Please don't let my scepticism put you off; you're quite right to say 
that advances only come from thinking the unthinkable; but, if there 
is significant mileage in this idea (and it isn't a straight rehash of 
P-1), then (in the words of HHGG) surprise will no longer be 
adequate, and I will be forced to turn to astonishment.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Are problems more likely in the last 1% of a 10,gigadigit LL?

2002-02-14 Thread bjb

On 14 Feb 2002, at 15:36, Alexander Kruppa wrote:

> Maybe we could have another option, "InterimKeep=x" or such, that only
> the last x files are kept?
> 
> Does Prime95 delete the interim files after successfully completing an
> exponent? 

InterimFiles was originally introduced to facilitate parallel testing & 
double checking for QA work, when there is a good chance that 
partial backtracking to the last agreed point will be required. That's 
why the interval is in terms of iterations rather than time. 
InterimFiles also logs the low 64 bits of the residue to results.txt at 
the same time (in fact it also logs the residue for the next two 
iterations, to cope with the fact that other LL testers count 
iterations differently).

For QA work it is absurd to suggest that the interim files are ever 
deleted automatically, so please don't make that the default option!

> If not, that might be enabled with another option, or by some
> logic that cleans up files in excess of InterimKeep even if they are of
> a previous exponent.

Easy to do, with a trivial bit of scripting.

Get a directory listing of the interim files - p???.0?? will match 
interim files written by Prime95 or mprime & is not likely to match 
any other files in the working directory (unless you have _very_ 
sloppy working practices!)

Sort it into ascending date order & cut off the bottom N lines.

Any files left in the listing are the ones which should be deleted.
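
A minimal sketch of that cleanup in Python (the pattern and the 
number of files kept are illustrative; an equivalent shell one-liner 
works just as well):

import glob, os

KEEP = 3    # how many of the most recent interim files to retain

# Oldest first; widen the pattern if your exponents have more digits.
files = sorted(glob.glob("p*.0??"), key=os.path.getmtime)
for name in files[:-KEEP]:
    os.remove(name)      # delete everything but the newest KEEP files
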
> 
> This should keep disk space usage unter control while allowing a good
> chance of backtracking to a good residue in case of an error.

So will a normal daily incremental backup regime.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Are problems more likely in the last 1% of a 10 gigadigit LL?

2002-02-14 Thread bjb

On 13 Feb 2002, at 16:39, George Woltman wrote:

> ***NOTE:  There is an important lesson to be learned here.  All testers of
> 10M digit numbers should backup their save files regularly!!  You don't want
> a hardware glitch, disk crash, etc. cause you to loose months of work.

Same applies to LL tests, or even DC assignments, running on 
slow systems - these can take months too.

Is there _ever_ an excuse for _not_ backing up recently created / 
modified files - which obviously includes Prime95/mprime save files?

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Are problems more likely in the last 1% of a 10,gigadigit LL?

2002-02-14 Thread bjb

On 14 Feb 2002, at 0:47, Russel Brooks wrote:

> George Woltman wrote:
> > ***NOTE:  There is an important lesson to be learned here.  All testers of
> > 10M digit numbers should backup their save files regularly!!  You don't want
> > a hardware glitch, disk crash, etc. cause you to loose months of work.
> 
> How about a Prime95 option where it makes a daily backup for you,
> saved to a datestamp fileid?  It could save them to a subdirectory
> with the exponent name.  That would make it easy for the user to
> do a cleanup occasionally.

Look up "InterimFiles" in undoc.txt

This is a highly convenient method of getting backups.

It's also easy enough to have a scheduled daily job move these 
checkpoint files to an archive directory & throw away the oldest 
ones just leaving the last N. Well it's trivial on a linux system, I 
guess it's easy enough on a windoze system - especially if you 
already have a task scheduler running (e.g. because you installed 
Norton Antivirus).

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Preventing "hacks"

2002-02-12 Thread bjb

On 12 Feb 2002, at 13:21, Aaron Blosser wrote:

> After long and hard thought on this (approximately 30 seconds), I have
> the following suggestion:
> 
> Each team account (could apply to accounts with just one machine as
> well) should have 2 passwords.
> 
> A master password that could be used on the web pages to manage
> exponents on all team machines, and also a per-machine password (could
> be automatically generated when a new machine gets an exponent).

That sort of works - but it's messy, and makes it hard for an 
individual team member to unreserve an exponent for some 
legitimate reason.

A better solution is to have every PrimeNet client identified in three 
ways: system id, user name & team name. Team name blank 
means the user is not a participant in any team. The password is 
associated with the user name, not the team. Now the user can do 
what the hell (s)he likes with his/her own assignments, but cannot 
bugger up assignments belonging to other team members.

A side effect of implementing this is that team members can desert 
(maybe joining a different team) even in the middle of an 
assignment, so team total CPU time could not be computed by 
simply adding the CPU time contributed by current members. 
Instead it would be necessary to keep separate running totals for 
each named team, adding the contribution from each completed 
assignment to whichever team the user is currently attached to 
(instead of, or as well as, to the individual user?) as and when 
results are submitted.

> > In late January, one of the more productive teams was hacked.
> > Prime95/Primenet has some security holes.  One of these holes
> > is that a team must make its password public for new members to join.
> > 
> > Someone exploited this hole.  This loser thought it would be "cute" to
> > unreserve all the team's exponents (a few hundred) via the manual web
> > pages.  Brad & Scott patched the manual forms and embarked on
> > implementing a more permanent solution.  A week ago, they struck again
> > using prime95 itself to again unreserve some of the team's exponents.
> > 
> > Unfortunately, rather than hurting the team, the hacker ended up
> hurting
> > ordinary users.  The server reassigned all the unreserved exponents.
> > Since the team's computers had a head start on these exponents they
> are
> > likely to finish them first.  When they report a result, your
> assignment
> > will
> > "disappear" from the active assignments list.  GIMPS, of course, can
> use
> > your result for double-checking.

So there's no loss at all, for LL assignments.
> > 
> > Brad/Scott have now changed server so that none of this team's
> exponents
> > can be unreserved.  They are still working on making this feature
> > available
> > to all teams to prevent this in the future.

As I pointed out above, there may be legitimate reasons for an 
individual team member to unreserve their own assignments.
> > 
> > Brad & Scott are better able to comment on this, but I think that this
> is
> > the first hacker attack on the reservation system.  There have been
> many
> > denial of service attacks and attempts at defacing the web pages
> (don't
> > people have better things to do with their time?)

I think _every_ web site sees attempts to do such things. Some 
morons apparently consider operational, undefaced web sites in 
the same way as graffiti artists see a blank wall. Expect also to 
see sustained probing to find any of the large number of known 
vulnerabilities in software and/or insecure misconfigurations 
common to various web servers.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: RE: Factoring top 10

2002-02-12 Thread bjb

On 12 Feb 2002, at 12:41, Aaron Blosser wrote:
> 
> Have the predictions on the work eliminated by P-1 factoring been pretty
> much confirmed by the # of large factors found?  In other words, is the
> extra processing time paying off?
> 
> I'd hazard a guess that the time saving is indeed appreciable, but I
> wonder if anyone has done some cold hard stats on it.

Small sample, just my current account report:

15 factors found - 11 trial factoring, 4 P-1 on LL assignments, 0 P-1 
on DC assignments
49 LL assignments & 42 DC assignments completed - almost all of 
the LL assignments and about half of the DC assignments have 
included the P-1 factoring phase.

So it looks as though running P-1 on DC assignments has been 
wasteful, but on the other hand one shouldn't expect to get many 
"successes" with a predicted factoring rate of around 2% and a 
sample size of only about 20.
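
A quick back-of-envelope check of how little that sample can say, 
using the 2% rate and the ~20 runs quoted above:

p_no_factor = 0.98 ** 20
print(round(p_no_factor, 2))   # ~0.67: about a 2-in-3 chance of no factor at all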

Conversely, with a predicted factoring rate of around 5%, I've found 
almost twice as many factors as expected when running P-1 prior 
to LL assignments.

On balance, I'm beating the odds - the factors I've found by running 
P-1 prior to LL & DC assignments have saved about 3.5 times the 
amount of time I've put into running P-1.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Trial Factoring: Back to the math.

2002-02-08 Thread bjb

On 7 Feb 2002, at 15:02, Bruce Leenstra wrote:

> All,
> 
> What this list needs right now is a nice juicy math debate, so here
> goes: I was reading the faq about P-1 factoring, and it talks about
> constructing a 'q' that is the product of all primes less than B1
> (with some multiples?) And then I read the "skip 2, test 2" comment
> about trial factoring and wondered:
> 
> Would it save any time to literally "skip 2, test 2" in trial
> factoring, by multiplying the two factors? (Does trial factoring use
> the FFT to do its squaring? The comment about breaking it into 32 bit
> chunks implies not. But if so, the modulo is free and this question is
> questionable : )

Ummm - there is a _big_ difference here.

The candidate factors for trial factoring are of the order of 2^64. For 
numbers of this sort of size, FFT is actually very expensive in 
comparison with "schoolboy long multiplication". The fact that we're 
always squaring allows us to get away with 3 inner products (the 
expensive operation) on a double-length calculation. Also, the 
modulus operation for trial factoring is modulo the candidate factor - 
not modulo 2^p-1 - this makes the "free modulus operation" totally 
irrelevant.

On a 32-bit system, when candidate factors are < 2^64, testing two 
together would make each squaring cost 10 inner products instead 
of 2 x 3, so it would clearly be expensive. For candidate factors up 
to 2^80, the cost would be 15 inner products instead of 2 x 6, so 
again testing two at once would be more expensive. This doesn't 
even start to take into account pressure on registers, which would 
slow the longer method down even more by forcing intermediate 
results out to memory. The other operations involved are at best 
proportional to the working length, so there's nothing to get back 
from those.

The other point is that it's rare that we'd want to test both 
candidates in a pair. Usually one or the other would be sieved out 
rather than tested; the proper factors we're looking for are 
themselves prime, so we can straight away exclude any 
candidates which are themselves divisible by a small prime.
> 
> Right now Prime95 constructs a list of possible factors, filters it
> (before hand, or on-the-fly) and then uses the powering algorithm to
> calculate 2^p - 1 (mod q) for each factor.  If p is 24 bits, the
> powering algorithm requires 24 squarings, from 1 to 24 doublings, and
> 24 comparisons -- resulting in up to 8 to 10 modulo's (not sure what
> the average is).

Eh?

We can straight away "write down" 2^n mod q when 2^n < q. This 
saves about 6 whole loops through the code.

However we need to do a full modulo operation after each squaring.
(Unless we're prepared to let the numbers double in length, which  
is inefficient, as I've demonstrated above). Doubling doesn't need a 
full modulo operation, compare magnitude & subtract is sufficient. 
Comparison of two multi-precision numbers is _very_ cheap 
compared with multiplying, taking modulus or even adding - in fact, 
just one single length compare is often sufficient.
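
For reference, a rough sketch of that powering loop, with Python 
bignums standing in for the hand-coded multi-precision arithmetic 
(the structure is the point here, not the constant factors):

def two_pow_p_mod_q(p, q):
    bits = bin(p)[2:]
    # "Write down" 2^(leading bits of p) for free while it stays below q.
    head, i = 0, 0
    while i < len(bits) and (1 << ((head << 1) | int(bits[i]))) < q:
        head = (head << 1) | int(bits[i])
        i += 1
    r = 1 << head
    for b in bits[i:]:
        r = (r * r) % q          # square, then a full modulo operation
        if b == "1":
            r += r               # doubling only needs...
            if r >= q:
                r -= q           # ...compare magnitude & subtract
    return r

# q divides 2^p-1 exactly when the result is 1:
print(two_pow_p_mod_q(23, 47))   # 1, since 47 divides 2^23-1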

One thing that did occur to me is that, at 2^64 where we're forced 
from double to triple word operations, with a consequent efficiency 
loss, it might be worth deepening the sieving so that more 
candidates are rejected without having to be explicitly tried as 
factors. The deeper sieving would cost more, but rejecting more 
candidates would regain that expense, up to a certain point.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread bjb

On 7 Feb 2002, at 14:11, Daran wrote:

> > Nah, the candle is being burned from both ends. The point is that
> > the small ones _are_ being poached. If you work in the same order as
> > the poacher, work on lots of exponents will be replicated. If you
> > work in the opposite order, only one exponent will be accidentally
> > triple-checked.
> 
> A solution, then, is not to do small exponants at all.

This suggestion is impractical as well as mildly facetious. If you 
have clients running 24x7 with automatic PrimeNet comms (the 
least intrusive method, from the point of view of the user) then you 
get new assignments at "random" times - the client checks how 
much work it has left every 65536 iterations. So you are liable to 
get small exponents from time to time.

If you use manual client checkin to PrimeNet, or manual testing 
forms, then you _can_ pick a time of day when you are _unlikely_ 
to receive small exponents - but even this is no guarantee.

Mary deliberately got a big bunch of small exponents when they 
became available, presumably because she felt she would be able 
to clear them in a reasonable time on a reasonably reliable system 
(or systems). In this respect she's actually acting with the same 
motivation as the poachers, but attempting to stay within the 
cooperative spirit of the project. It is for this reason that she is 
rightly upset about being poached.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-04 Thread bjb

On 4 Feb 2002, at 11:52, Robin Stevens wrote:

> On Sun, Feb 03, 2002 at 08:06:27PM -, [EMAIL PROTECTED] wrote:
> > As for Mary's current predicament, with that block of small 
> > exponents you have, my best suggestion is to reverse the order of 
> > the lines in worktodo.ini so that the _largest_ ones are done first. 
> > Intermediate checkins to PrimeNet will then cause unstarted 
> > assignments which are no longer neccessary to be automatically 
> > removed.
> 
> Of course if one _keeps_ doing this, then the smallest exponents never get
> processed (and perhaps they become more likely to be poached) :-)

Nah, the candle is being burned from both ends. The point is that 
the small ones _are_ being poached. If you work in the same order 
as the poacher, work on lots of exponents will be replicated. If you 
work in the opposite order, only one exponent will be accidentally 
triple-checked.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



RE: Mersenne: Work being wasted

2002-02-03 Thread bjb

There are two points of view here. They seem to be more or less 
irreconcilable. This argument repeats on a regular basis.

I have to say that:

(a) I agree completely that having an assignment "poached" is 
annoying to extremely offputting, especially for a relatively new 
user;

(b) I don't see any harm in the occasional triple-check. In fact 
completing these assignments does give some information of 
limited statistical value relating to the error rate.

Personally I feel it will never be possible to eliminate the problem 
entirely. After all, we're all volunteers; as such, firm application of 
discipline is not a realistic option. Probably the best way to rectify 
the situation is to make sure that when an unexpired assignment 
completion is notified by someone other than the person/team who 
owned the assignment;

(a) PrimeNet credits the person/team who owned the assignment, 
rather than the person/team reporting completion;

(b) in the case of a double-check assignment, or when a factor is 
reported, PrimeNet notifies the person/team who owned the 
assignment so that unnecessary waste of effort can be avoided. 
This is one way in which the email address associated with each 
user could usefully be employed - waiting for a routine check-in 
leaves effort being wasted in the interim.

If (unlike Mary) you're running a _first_ test which appears to be 
"poached", the probable reason is that the person who had the 
assignment before you has reported their result after allowing it to 
expire. You are _not_ wasting time by completing the assignment 
as a double check run will eventually be required - _unless_ a 
factor has been found!

As for Mary's current predicament, with that block of small 
exponents you have, my best suggestion is to reverse the order of 
the lines in worktodo.ini so that the _largest_ ones are done first. 
Intermediate checkins to PrimeNet will then cause unstarted 
assignments which are no longer neccessary to be automatically 
removed.
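
A minimal sketch of that reversal (stop the client first; the 
filename is the usual worktodo.ini, and keeping a backup copy is 
just common sense):

# Reverse worktodo.ini so the largest exponents (normally the last
# lines) are worked on first.
with open("worktodo.ini") as f:
    lines = f.readlines()
with open("worktodo.ini.bak", "w") as f:
    f.writelines(lines)                   # keep a copy of the original order
with open("worktodo.ini", "w") as f:
    f.writelines(reversed(lines))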


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: further code optimisation using a recompile?

2002-01-26 Thread bjb

On 26 Jan 2002, at 14:45, John R Pierce wrote:

> > http://www.slashdot.org has a link to http://open-mag.com on a new Intel
> > compiler for Linux an M$ Windows. The new compiler makes use of the new
> > instructions in the Pentium III and IV. Of course, the most important
> > part of the Prime95 code does not get compiled at all, since it has
> > already been handcoded. But it would be nice to know if a recompile with
> > this compiler would improve throughput significantly, anyone?
> 
> I believe the Intel C 5.0 compiler is based on Kai C++, which is hardly new.
> Its also $500 per user per system on Linux.  The MS Windows version requires
> you to already have the MS Visual C++ 6.0 package, as this piggybacks on the
> MS C tools.   This isn't going to take the open source world by storm...

In any case it is rather doubtful that the compiler could make much 
difference to Prime95. The pieces coded in C are of the "run once" 
variety rather than being executed in every iteration.

Actually Microsoft C/C++ (Visual Studio) does a pretty good job of 
outputting efficient code - as good as gcc on IA32 hardware. The 
only "extra instructions" in PIII which are of significant value for 
Prime95 are cache prefetch instructions, which are already 
exploited in the existing assembler code. And Prime95's P4 code 
already uses SSE2.

I'd put significant money on George's code beating _any_ compiled 
implementation of the same algorithm for speed on any applicable 
hardware. (Obviously excluding emulated hardware environments - I 
don't know if Prime95 could be run in e.g. a Bochs virtual x86 
environment, but, if it did, I guess it would be _very_ slow!)

Where such a compiler would make a difference is in porting code 
to new or substantially different architectures; Glucas is already 
pretty good on IA32/linux, but a compiler "upgrade" might help get 
it a bit closer to Prime95. The difficulty with porting Prime95 to non-
IA32 architectures is that so much of it is in assembler, which is 
not easy to port between architectures whilst retaining something 
approaching optimum efficiency. In fact it would be pretty much a 
total rewrite job.

As John implies, you've got to be pretty committed to shell 
out ~$500 per system for the privilege of compiling code on your 
own hardware. It would take a _really_ significant speed boost to 
make that sort of expenditure worthwhile.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Slaying Cdplayers for prime95

2002-01-16 Thread bjb

On 16 Jan 2002, at 2:13, Daidalos wrote:

> 
> As the current PFC is still going on, and following the experience of
> the list, I took my beloved computer to a friend's computer store, in
> order to instal a video card, and thus achieve the speed arranged by
> great Apollo for a decent P3 at 850. (PFC: Period of Fat Cows).  And
> despite a warning in the back of my head, I only carried the computer,
> leaving everything else home.
> 
> I wasn't there whem the technician was trying various video cards to
> find the fastest combination. But he told me he didn't see the
> improvenment I told him he should expect in the speed of prime95.

Changing the video card won't change the speed of Prime95 much. 
Prime95 is really dependent mainly on the speed of the CPU, with 
memory bandwidth also a primary consideration. [This is why you 
don't get as much improvement as you'd naively expect from just 
upgrading the processor - the memory bandwidth remains fixed.]

Adding a video card - _any_ video card - to systems with integrated 
graphics may well speed the system up by approximately 10% by 
removing the graphics load from the main memory bus to the 
dedicated AGP bus. Adding an AIMM memory module has the 
same effect - Kingston do a 4MB AIMM for about $5; if you have a 
mobo using integrated graphics, this is probably the best value 
upgrade for your system.

> Maybe he didn't understand well what to look for, but the result was
> that I brought back the computer without a video card.
> 
> But alas!  The stupid box has Win Millenium in it, and if it is
> started without all its devices in place, it re-configures itself
> automatically.  The result is that HE has now the stupid impression
> that there is no CDplayer.  (one day I may tell you how you can tell a
> Microsoft programmer with a flat tire).
> 
> There is no IRQ conflict, but of course there is a change from the
> previous IRQ set.  When the computer starts, Cdplayer light stays on
> continuously.
> 
> Well, if I continue playing around a few days (or, ...weeks? months?),
> I may eventually find a way to convince my box that there is still a
> CDplayer.  But since I got into this trying to speed up prime95, maybe
> I am entitled to _some_ technical help?

I assume that you have a video subsystem of some sort, else you 
probably wouldn't be able to boot Windows at all - linux _will_ run 
on headless systems, but not Windows. In any case, you're going 
to need a display and console input of some sort (keyboard and/or 
mouse) to carry out the setup.

First of all, switch off & unplug the system & check that the CD-
ROM drive is actually connected to the system. It should be 
connected to both an IDE ribbon cable and the power supply, and 
the connections (including the mobo end of the ribbon cable) 
should be firm. Try pushing them in a bit further - if a connection 
tightens at all, it might not have been making proper contact.

Secondly, check that the IDE cable is connected the right way up - 
sometimes they will fit either way round - keyed connectors 
_should_ be, but aren't always, used. The correct way round is with 
the red marked edge of the cable at the pin 1 end of the socket. 
Check both drive and mobo ends of the cable. CDROM drives 
usually have the socket orientation stamped onto the metal plate 
just above the socket; the mobo socket should have a small "1" 
printed adjacent to one end of the socket. A torch and a magnifier 
may help to read the writing on the mobo.

If the CD-ROM shares its IDE cable with another device, check that 
one is set as "master" and the other as "slave". (Or, _both_ are set 
to "cable select" on some systems e.g. Dell).

If all appears OK after inspection, check the BIOS to make sure 
that the IDE devices are all set to "auto detect". If the CD-ROM is 
connected to a disabled controller port, it won't work!

If all that is OK, Windows should see the CD-ROM drive. If it _still_ 
appears to be missing, try running the "Add/Remove Hardware" 
Control Panel applet.
> 
> Related to this, I assure you that I will not use the CD to hear
> music, and thus slow down prime95.

The CPU power required to play audio CDs is very low.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: variable iteration speeds

2002-01-16 Thread bjb

On 15 Jan 2002, at 22:00, Robin Stevens wrote:

> On an otherwise idle Linux system, I've been noticing that the
> per-iteration speed has been varying during the course of a single
> primality test.  Until Saturday afternoon I'd been getting a fairly
> consistent 0.188/0.189s time when the system was idle.   It then increased
> to around 0.206s.  It fell again last night to around 0.199s (see logs
> below - percentages trimmed for 80-character neatness).

With stop/start or system reboots between?

Some systems appear prone to variability of a few percent from 
program start to program start. Starting mprime immediately the 
system boots is usually very good unless the boot included actual 
partition checks (either forced, or because the partition check 
interval has expired). When the assignment finishes, the new 
assignment almost invariably starts at the fastest speed the 
system will ever run at. 

In my experience, the speed doesn't change much however long 
mprime is left running the same assignment, but stopping and 
restarting can make a significant difference. It might be worth trying 
that if you find that it's running slow for some unknown reason. 

Probably the reason for the variability has something to do with the 
way the program code and data get assigned to physical memory 
locations. BTW Prime95 behaves similarly under Windows.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: just some comments to improve our site

2002-01-16 Thread bjb

On 15 Jan 2002, at 23:26, Torben Schlüntz wrote:

> Why is P-1 factoring not a single schedule task? like the LL and the
> Trail Factoring?
>  
> Why is P1 factoring hooked upon the LL test?
> Why does P1 not have it's own life like the TF and the LL?
>  
> I realy hated the P1 until now 21.4 fixed that.

I think George has answered those more than adequately.

> And I hated the
> CPU-clocks of earlier versions to, because I have no idea what so ever
> the clock beat of my computer is, but I can relate to time.

Ah, but the raw clocks give you a lot of information about the 
relative efficiency of different systems. In any case, the "clock 
time" per iteration is calculated from the raw clocks and the CPU 
speed (as configured in local.ini - not the actual CPU speed!) and 
may therefore be misleading - e.g. when a system is running in 
thermal throttled mode, or a Speed Step (tm) processor is running 
slow because the system is operating on battery power.
>  
> Some people might have "plenty" of mem - outdoing my best a 512M - but
> some of the machines I (or maybe Primenet) have selected for P-1 have
> nothing more than 64 or 128M.

Which is absolutely fine, unless you're running P-1 stage 2 on 10 
million digit exponents - in which case 128 MB is tight, though still 
feasible. I actually ran P-1 stage 2 on 40025087 on a system with 
128 MB (for lack of anything better at the time); it was _very_ tight 
but did complete. (Yes, it would have run faster with more memory.)
>  
> We also need a a place for rapid starters. Some "Gazelle" view. 
> Wow! Even though I've only done 0,323 P90 years I'm number 33 in this
> week I will certainly continue, because I will catch up with those
> guys having 79 years
> Hmmm, maybe in percentage of the week before.

I'm not sure what you mean by this.
>  
> Also the newsletter should more often be sent. We make progress every
> day so why don't we tell GIMPS'ers what is happening? Even a small
> progress is a giant leap, like "now all numbers below 4.900.000 has been
> doublechecked" or "All numbers below 16.000.000 has been trial
> factorized".

The web page http://www.mersenne.org/status.htm already carries 
this sort of information, updated approximately weekly.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: slaying cpus with prime95

2002-01-14 Thread bjb

On 14 Jan 2002, at 14:45, Steve Elias wrote:

> 1 - i just got my wife's toshiba laptop back from toshiba warranty
> service.  running prime95 for ~6 months on it caused the fan to die,
> and then the laptop would overheat & shutdown even without prime95
> running.  apparently the heat caused lots of disk badblocks too.

I've had a Tosh Sat 4070 laptop running mprime 24x7 for about 3 
years. The fan runs - constantly. 

A fan certainly shouldn't die within one year, though some will 
undoubtedly do so, due to manufacturing faults.

As for bad blocks on disk - I wouldn't trust a diagnosis made on a 
system with a cooked processor. Even if the disk has developed 
read problems on some blocks written when its temperature was 
way too high, reformatting (or block repairing, using a suitable 
utility) the disk when it's running properly cooled should repair them.
> 
> 2 - my manager at work here had a thinkpad.  he ran prime95 despite my
> worry that it was very "tough" on laptops.  within a few months his
> harddrive failed - possibly due to months of excess heat...  :| this
> could be considered a classic Dilbertian CLM (career limiting move) on
> my part, but no worry since my manager is super-cool.

Blame power management. HDDs constantly starting & stopping 
are going to age rapidly. Prime95 helps accelerate this process by 
writing save files every so often. Changing the disk write time to 1 
hour or so seems reasonable on a laptop since the main reason 
you write save files is to protect against power loss; all laptops 
effectively contain a UPS, and should suspend without data loss 
even if the battery runs flat.

I'd recommend:
(a) configuring power management to run the hard disk constantly 
when the system is on mains power, to extend the disk life;
(b) configuring Prime95 to suspend itself when the system is 
running on battery power. This reduces HD start/stop activity and 
extends the usable run time by allowing the floating-point unit to 
shut down, reducing both power consumption and the cooling 
requirements considerably.
> 
> 3 - i also ran the prime95 app for a year or so on an ancient cyrix
> p120+ which had a cpu-fan that stopped.  after a couple months of
> no-cpu-fan, that cpu died completely...  

No surprise at all. Undercooled processors will die sooner rather 
than later, whatever you run on them. Meanwhile they're probably 
not running reliably, so whatever data you are processing aren't 
reliable.
> 
> 4 - i bought a 2Ghz P4 recently.  despite initial worries that it was
> running too hot (70 C) because fan was too slow (2800 rpm), i got
> adventurous and clocked the cpu at 2.1 Ghz for a day.  weeks later the
> machine started acting very badly (motherboard cpu temp alarm caused
> shutdown @ 90 C even without prime95 running).  so i returned it to
> the vendor.  they claimed that my overclocking it broke the P4, and
> that the top of the cpu was actually burnt/blackened from the heat.
> this is counter to my belief that improper fan/heatsink was the cause,
> but i can't prove it.  also it runs counter to what i've read here &
> elsewhere about the thermal-protection built into P4s 1.7Ghz or
> faster.  they are returning the P4 to intel to see if Intel will
> replace it for free, but in the meantime i have to pay for a new cpu!
> (i'm picking 1.8Ghz this time.)

Again it's no surprise that, if you overclock a CPU and fail to uprate 
the cooling, the processor will run hot. The existing 0.18 micron 2.0 
GHz P4 is pretty close to the limit of what can be done in the way 
of transporting heat out of the die (that's why Intel are shifting to 
0.13 micron technology). 

Experience in the past has always been that the low-end chips in 
any particular family stand overclocking better than the high-end 
chips, though I'd be very surprised to find a x GHz rated chip 
overclocked to 1.2x GHz performing more stably than a 1.2x GHz 
rated chip of the same type.

Intel's warranty seems to state very clearly that the warranty only 
applies to chips operated within the stated limits, so failure to 
honour a warranty claim on an overclocked CPU would be quite 
reasonable.

Intel "retail box" processors use a larger than normal fan, so a 
slower rotational speed is reasonable & will shift the same amount 
of air. The reason this is done is that aerodynamic fan noise 
depends on a large power of the air speed, so moving a larger slug 
of air more slowly is quieter than moving a smaller slug of air more 
quickly.

70C sounds too hot to me. But I don't know the exact way in which 
the sensor is incorporated into the P4 die. There should be a 
document on the Intel site showing the thermal constraints for each 
variant of the P4.

Motherboard at 90C is undoubtedly fatal! Was this really due to a 
processor cooling problem, or is there a runaway voltage regulator? 
(You didn't overload the voltage regulator during your overclocking 
experiments...?)

BTW it's fairly common for thermal tape or cheap therm

Re: Mersenne: slaying cpus with prime95

2002-01-14 Thread bjb

On 14 Jan 2002, at 12:18, Mary Conner wrote:

> Laptop fans don't seem to be very durable.  

Fans in general? I've had very little trouble with CPU fans 
(fortunately - and perhaps I'm lucky) but I find failed PSU fans and 
noisy (vibrating) case fans to be just about the commonest faults 
on desktop systems. I have a lot less to do with laptop systems, 
but so far (touch wood) I've no experience at all of laptop fan failure.

> I have a two year old Dell
> that has apparently had a broken fan for a long time, and I didn't notice
> it because the it (either the MB or the Celeron) throttles down
> automatically but continues to work and nothing I was doing was that CPU
> sensitive that I would notice the lack of speed.  This is actually kind of
> neat because it leaves the computer still usable (and even at top speed
> with an external fan blowing in the intake port), but I do wish it would
> popup a notice that the CPU was being throttled.

My Tosh 4070 makes it quite clear that the fan is running - it's 
LOUD, several decibels more so than the average desktop. Small 
diameter fan running at high speed = fast air flow = aerodynamic 
noise.

Many ancient laptops had a two-colour LED which shone green for 
full CPU speed, red for reduced CPU speed (though you had to 
select the CPU speed manually, using software or by a special key 
combination). The extra manufacturing cost of fitting this to current 
systems would be very small, though the introduction of 
technologies like "Speed Step" would complicate matters to some 
extent. Possibly a tricolour system would be needed. C'mon, 
Toshiba et al...

Anyway I guess that's another reason for running Prime95 - you 
should notice a performance loss if your processor drops into a 
throttled mode for some "mysterious" reason!
> 
> Yes, I do not think most laptops are very durable.  Companies know they
> will usually get a lot less use than a desktop, so they know they only
> need to get a few months of actual runtime to get them out of the warranty
> period.  I think it has more to do with simply using them 24/7 (or more
> than a few hours a day on average) than with prime95.

Intermittent operation costs reliability - if parts are constantly being 
warmed & cooled they _will_ age. Also the average laptop gets 
dropped, rained on, moved from warm to cold environments, etc. 
etc., far more than the average desktop system does.
> 
> Thermal protection for P4's is partly a matter of the chip, and partly a
> matter of the motherboard/chipset.  Intel built motherboards have been
> shown to be very paranoid about throttling the CPU compared to boards
> built by other vendors (and with non-Intel chipsets). 

Intel's BIOS doesn't even give you the choice of overriding the 
factory default throttling mode / thresholds :(

> As far as the CPU's
> own thermal protection, Intel took some heat for the fact that the thermal
> protection was causing performance problems compared to AMD chips.

This should not be a problem - provided it is actually possible to 
get the heat generated out of the chip without tripping thermal 
protection, with a properly engineered and assembled heatsink / 
fan / case ventilation system.

> There
> was a rather famous test done by Tom's Hardware Guide where they removed
> the heatsink from test chips while they were running to simulate a
> catastrophic failure of the heatsink falling off the CPU.

Ever heard of a heatsink _accidentally_ falling off a CPU on a 
running system? Sounds about as probable as an airline 
passenger being killed in flight by a meteorite strike which doesn't 
bring down the whole aircraft.

I thought the point of the THG "test" was to educate self-builders 
using Athlon/Duron not to test a system by applying power - even 
for a few seconds - if the CPU heatsink wasn't fitted.

Anyway, the latest AMD processors have thermal protection 
against this sort of abuse.

> The P4
> throttled perfectly with a huge performance degradation, but it kept
> going.  So it does seem odd that your chip would have "burned", but
> perhaps Intel backed off on the thermal diodes in the chip itself because
> of the performance problems.

I doubt it. If they were really worried they'd just have disconnected 
the sensors. There seems absolutely no point at all in fiddling 
around with thermal protection if you leave yourself with a product 
which is subject to both thermal performance degradation and rapid 
failure due to moderate overheating.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Optimizing P4 usage

2002-01-14 Thread bjb

On 13 Jan 2002, at 21:37, Gerry Snyder wrote:

> At home I have 2 PC's, both of which devote their spare cycles to GIMPS.
> One is a 1.3 GHz P4 running W98, and the other a dual 1 GHz P3 running
> linux. One of the P3's is doing LL testing, and the other is doing
> mostly ECM, with a little trial factoring thrown in.
> 
> The only part of Prime95 that uses the SSE2 P4 instructions is the LL
> testing. Because of the huge speedup this gives, I would like to keep
> the P4 machine doing nothing but LL tests. It is now about a month away
> from finishing its first 10 megadigit candidate.
> 
> My question is whether it is worth the trouble to shift the trial and
> P-1 factoring of the next one to one of the P3 processors (the non-LL
> one). It might lead to one extra LL test in two years.

You can do even better than that:

Suppose the three processors are as follows:

A wants to run trial factoring - if you run LL or P-1 on this, B slows 
down due to memory bus contention
B doesn't mind running P-1
C wants to run LL only.

Firstly you want to make sure that C doesn't have 
SequentialWorkToDo=1 in prime.ini (else TF & P-1 will be run 
before the current LL completes). Also make C's DaysOfWork 
greater than the time estimated to run TF & P-1 on A & B.

Let C get a new assignment Test=<exponent>,N1,P and let N2 be 
the required factoring depth for that exponent. Edit the new 
assignment in C's worktodo.ini, replacing N1 by N2 and P by 1.

If N1 < N2, trial factoring is required. Edit A's worktodo.ini file, 
adding a new line at the top: Factor=<exponent>,N1. Stop & restart 
mprime on A to get the assignment started. If a factor is found, 
remove the assignment from C's worktodo.ini file & let C get a new 
assignment. Whatever happens, let A report the result of the trial 
factoring.

If A fails to find a factor and P=0, P-1 factoring is required. Edit B's 
worktodo.ini file, adding a new line at the top: 
Pfactor=<exponent>,N2,0. Stop & restart mprime on B to get the 
assignment started. Let B report the result of the P-1 factoring. If a 
factor is found, remove the assignment from C's worktodo.ini file & 
let C get a new assignment.
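
To make that concrete, here is a purely illustrative example - the 
exponent is invented, and in the notation above N1=66, N2=67, P=0:

  C's assignment as fetched:    Test=16123457,66,0
  C's entry after editing:      Test=16123457,67,1
  Top of A's worktodo.ini:      Factor=16123457,66
  Top of B's worktodo.ini:      Pfactor=16123457,67,0

(B's entry is only needed if A fails to find a factor.)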


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Preventing new assignments

2002-01-14 Thread bjb

On 12 Jan 2002, at 10:27, Paradox wrote:

> In about 10 days, I've got 5 Pentium4 computers that will be submitting
> completed LL tests (for 10,000,000 digit numbers). I used to have
> dozens of smaller computers working on such LL tests for years, and so
> I have a collection of Prime95/mprime directories which have
> 60% to 80% completed LL tests in them. I'm going to want to
> make sure that the currently running mprime's on the P4s do not get
> new assignments, so that I can simply remove those copies of
> mprime and replace it with a copy from my collection.
> 
> For now, I've set the "Days of Work" to 1. If I were to set
> days of work to 0, what would happen? How can I tell it to simply
> submit the results to primenet, disconnect, and then exit the program
> when it is done?

Well, if that's really what you want to do, why not try the "Quit 
GIMPS" option (bottom of Advanced menu)? In v21 (but NOT 
previous versions!) selecting this option allows you to complete the 
current assignment then quit (which is what you seem to want), or 
quit immediately (which you probably don't want to do since 
PrimeNet will think you have abandoned work before completing 
the assignments. You can still continue, however, & report the 
results as they come in).

If you already have new assignments which you don't want to run, 
they will automatically expire in about 3 months, or you could 
return them to the server using "Unreserve Exponent" in the 
Advanced menu (v21 only) or through the PrimeNet Manual Testing 
form.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: GIMPS on new iMac ??

2002-01-12 Thread bjb

On 11 Jan 2002, at 22:58, Russel Brooks wrote:

> How do you think GIMPS will run on the new iMac?
> (Or will it run at all?)

Is there any significant architectural difference between the new & 
old iMacs? I think not. The reports I've read about the new iMac 
say that it's faster (no surprise) & has radical styling, but these 
probably have no effect on its functionality.

So the answer is, no change, except that (if it runs at all on 
existing iMacs) it will run faster.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: 10 million digit overkill

2001-12-26 Thread bjb

On 25 Dec 2001, at 14:54, Steinar H. Gunderson wrote:

> On Tue, Dec 25, 2001 at 01:41:31AM -0500, Paradox wrote:
> >If the computer above could do each iteration in
> >0.001 seconds,
> >the amount of seconds required to complete the task would still be
> >significantly more than 4,000,000 digits. Thats incomprehensible.
> 
> Unless we find something more efficient than the Lucas-Lehmer test before
> that. :-)

Testing to see if 2k(2^p-1)+1 divides 2^(2^p-1)-1 for small values of 
k _is_ feasible in a reasonable time and _may_ result in a positive 
refutation of the theory.
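
Just to make the test concrete, here is a rough Python sketch - 
nothing to do with Prime95's code, and the function name is mine:

def divides_double_mersenne(p, k):
    # The candidate factor q = 2*k*(2^p-1)+1 divides 2^(2^p-1)-1
    # exactly when 2^(2^p-1) is congruent to 1 modulo q; the
    # three-argument pow keeps every intermediate value below q.
    Mp = 2**p - 1
    q = 2 * k * Mp + 1
    return pow(2, Mp, q) == 1

# Degenerate sanity check: divides_double_mersenne(3, 9) is True,
# because q is then 127 = 2^7-1 itself.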

As I (and others) have pointed out, LL testing on numbers of this 
sort of order is impossible (in this universe) because there simply 
isn't enough matter to build sufficient temporary storage- whatever 
speed we can manage to achieve.

This is an excellent example of the sort of theory which it _may_ 
be possible to refute but may never be possible to prove.

Theories based on samples involving "small numbers" of terms are 
always dangerous ... look, 3 is prime, 5 is prime, 7 is prime, 9 isn't 
prime but it does happen to be a perfect square, 11 is prime, 13 is 
prime ... it looks like all odd numbers which aren't perfect squares 
are prime ... or just look at the Fermat numbers ...

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P4 throttling

2001-12-23 Thread bjb

On 23 Dec 2001, at 12:44, Paradox wrote:

> I don't think I made the "Thermal Monitor" feature
> clear enough. From what I've read at Intel's
> website, I believe it is built onto the P4 chip,

Yes, most current processors have a thermistor on chip & some 
means of signalling the die temperature through the pin array.

> independent of the motherboard except that the
> motherboard has to activate it at boot. 

OK, I looked at the Intel documentation. (Specifically the document 
24988701.pdf which is the datasheet for the 478 pin P4.) The P4 
does have an "auto" thermal management mode _in addition to_ 
the "manual" mode similar to other current processors. The manual 
mode allows BIOS to set the threshold temp & throttle ratio (in 
12.5% steps), the auto mode fixes the threshold temp (at an 
unspecified value - possibly dependent on processor speed and/or 
stepping) and also fixes the throttle ratio ("duty cycle") at 50%.

Intel claim that "auto" mode is required to keep within spec but this 
is obviously garbage. The same paragraph states that thermal 
throttling will never occur in a properly designed system unless 
ventilation fails for some reason.

There is a pin signal to show thermal monitor active, and the info is 
also available in P4-specific processor registers.

One interesting point - the Intel processor technical spec states 
that, for the heatsink/fan provided with the retail boxed product, the 
temperature of the CPU fan inlet air should not exceed 40C (104F). 
If it does not exceed 40C, it is highly unlikely that thermal throttling 
is active. Measure the inlet air temperature 1/3 of an inch above the 
centre of the CPU fan.

> In any case,
> the BIOS menu that I have access to on a
> Intel D850GBC is not exactly verbose...it has nothing
> letting me change timing settings on any piece of
> hardware, and nothing about CPU throttling.

Yes, I've noticed that Intel branded mobos tend to have BIOSes 
with most of the interesting parameters masked out. That's one 
reason I tend to prefer mobos from other manufacturers - I like Abit 
& Asus.
> 
> And I definitely do not want to deactivate
> Thermal Monitor and simply ignore cooling
> problems...that would likely cause damage
> to my CPU.

You will get _severe_ instability (which will go away when the 
processor is properly cooled) _long_ before you get any sort of 
permanent damage. In fact the Intel P4 CPU (like most others, with 
the exception of Athlon & Duron except the new Athlon XPs) also 
has a temperature cutoff to protect the CPU from a failed cooling 
fan by shutting down completely if its temperature becomes 
dangerously high. If the trip is activated, the processor will stay 
down until reset. I've never seen a BIOS which allows this safety 
feature to be disabled.
> 
> For cooling, in addition to the power supply,
> I have a case fan in the bottom front that
> is pulling air in, a fan on the CPU heatsink
> (the one provided by Intel), and a case fan
> next to the cpu pushing air out. There is
> also a weak 5.25" HD fan pulling air in
> from the front.

That's very similar to the Athlon system with the reversed cooling 
fan I was having trouble with last week. The problem was that the 
PSU & rear case fans were conspiring to draw hot air into the CPU 
fan inlet :(

Adding an extra case fan dropped the case temp by ~10C without 
affecting the CPU temp by more than 1C. Presumably the part of 
the mobo where its thermistor is located was getting the benefit of 
the extra cooling. Taking that extra case fan out but reversing the 
PSU fan - so that all the case & PSU fans were blowing through 
the case from front to back - dropped the CPU temp by ~12C and 
kept the lower case temp.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P4 throttling

2001-12-23 Thread bjb

On 22 Dec 2001, at 18:55, Paradox wrote:

> As I email'd previously, I have an Intel Pentium 4  1.9 GHz with PC800 
> RAM in an Intel D850GBC motherboard running Windows 2000. Even with no 
> other programs running, and a minimal of services, (100% of cpu 
> dedicated to prime95), I get approximately 0.180 seconds/iteration. 
> According to the benchmark on mersenne.org, I should be getting closer
> to 0.143 seconds/iteration.

Interesting, that's almost exactly 80%.

> I suspect that my Pentium 4 may be engaging
> its built in "Thermal Monitor" feature, which is mentioned on Intel's
> website, without extensive documentation (as far as I can find). The
> Thermal Monitor is a temperature sensor inside the Pentium 4 which 
> detects when the processor is getting too hot and reduces system 
> performance in order to compensate.

Most BIOSes have a setup which allows you to change the throttle 
ratio (the relative clock speed when throttled) and the threshold 
temperature at which throttling is applied. If you find a throttle ratio 
of 80% then you've got it. If you find the throttle ratio is set to 50% 
then the throttling theory is wrong, you'll have to find the cause 
elsewhere - try using the Task Manager to see what else is running.

> I have noticed
> using my motherboard's temperature sensor (different from the internal 
> P4 sensor), that Prime95 raises my CPU temperature from approximately
> 90 to 95 F to exactly 124 F, which is immediately below an indicated
> threshold. This makes me suspect that the P4's Thermal Monitor
> is engaged and could be causing the performance decrease. I have looked
> for, but have not found, a way to detect whether or not the Thermal 
> Monitor is infact being activated. I want to make sure this is the case
> before taking drastic actions on my CPU's cooling system. Any help would
> be appreciated.

124F = 51C ... sounds too hot for the system board sensor ... 51C 
may be OK for the CPU sensor, though. Mind you I'd expect the 
threshold to be at 50C or 55C rather than 52C. 

If your system board sensor really is reading 124F then you almost 
certainly have a problem with case ventilation. Is the case fan(s) (if 
fitted) running? Is the fan in the PSU running? You don't have a 
strange problem like the one I found last week, where the PSU fan 
was reversed, thereby blowing pre-heated air into the vicinity of the 
CPU cooler inlet?

If you can't spot a problem like that, try adding another case fan. 
Most good cases have fittings for relatively cheap (think $5) 8cm 
cooling fans. Normally you fit these so that they blow into the 
case; it is usually obvious if this is wrong i.e. the fan is mounted so 
that it is adjacent to the CPU in such a position that, if it were to 
suck rather than blow, it would be getting a high proportion of its air 
from the CPU cooler "exhaust".

If your case doesn't have fittings for an extra case fan, you can get 
a "HD cooler" (two or three 40mm fans in a plate which will fit a 
5.25" drive bay). Try one of those. You can also get extractors 
which fit an expansion slot; these are useful for cooling e.g. 
graphics cards, but also help keep the rest of the case volume a bit 
cooler.

Disabling thermal throttling by setting the throttle ratio to 100% or 
moving the threshold temp way up is also acceptable _provided the 
system passes the torture test_ after changing the parameters. In 
fact this _might_ be the best way to fix the problem, since your 
m/b may have a minor problem (possibly duff BIOS software - did 
you check for an upgrade?) causing throttling irrespective of 
sensed temperature. 

You might also want to try the torture test with the system clock 
temporarily wound up a couple of percent and/or a case fan 
disconnected; if the system is stable & error-free under those 
conditions, it should be OK in normal service. If you try this please 
watch the system carefully & return to normal if it shows signs of 
instability or logs more than a very occasional error in the torture 
test. Mild overheating will not cause permanent damage but 
prolonged, severe overheating might.

Seasonal felicitations
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



RE: Mersenne: Re: 2^4-1 Factored!

2001-12-21 Thread bjb

On 20 Dec 2001, at 15:16, Aaron Blosser wrote:

> It's when they start producing near instant factors of 1024 to 4096 bit
> numbers that we'll have to really think about our current encryption
> measures.
>  
> Even if they reduced the time it took to factor such numbers from
> millions of years to millions of seconds, the impact on cryptography
> will be huge.

Sure!
>  
> But I reckon that by then, someone will have thought of some new
> encryption that would remove the advantage that QC has. something
> unrelated to factoring large numbers, I'm guessing?  Dunno what that
> method might be; if I did know, I'd market it and get rich. :-)

Fortunately the method exists - outlined in some detail in Simon 
Singh's excellent "Code Book" is a quantum cryptography 
technique which is not only unbreakable (even with unlimited 
quantum computing power) but even allows the recipient to 
determine if the message has been eavesdropped (not that 
eavesdropping would do anyone much good!)

Incidentally only a couple of weeks ago there was an 
announcement that a key item of technology required to implement 
this system - a single photon emitter - has been demonstrated.

Quantum cryptography may in fact be already deployed in some 
government departments (the sort that "don't exist", of course).


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M4 completely factored!

2001-12-20 Thread bjb

On 20 Dec 2001, at 9:57, Henk Stokhorst wrote:

> L.S.,
> 
> Another milestone has been acomplished. M4 has been completely factored, 
> two factors were found, 2^4 -1 = 15 = 3 * 5.
> 
> more details at http://slashdot.org/  see: IBM builds a limited quatum 
> computer.

Wow! Are we _really_ sure they didn't find 2^4-1 = 3_+_5? You never know 
with this scary quantum stuff!

I expect the IBM team will find the next 100 Mersenne primes 
sometime before the end of this year. Oh well, I suppose there's 
always seti@home ;-)

Seasonal felicitations
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: number of primenet machines Was: Mersenne: CNET coverage of M39: Very good

2001-12-16 Thread bjb

On 16 Dec 2001, at 9:47, Henk Stokhorst wrote:

> at the time of the discovery the amount of personal computers checking 
> in results to primenet was around 20,000. I have no idea where the 
> 210,000 number came from.

If you read the original press release I think you will find that the 
210,000 referred to the _total_ number of systems contributing to 
_all_ Entropia projects.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Speed of RAM

2001-12-14 Thread bjb

On 14 Dec 2001, at 4:57, Philip Whittington wrote:

> I'm sorry to trouble GIMPSers with what might be more of a computer question. I 
> am using an Athon Thunderbird 1.3GHz and 256 MB of 133 MHz (DDR) Ram.
> 
> My computer has always emitted a high pitched squeaking noise when processes 
> like GIMPS start, and also when I do things like use the scroll wheel of my 
> mouse etc. Recently the computer started hanging randomly on certain tasks (not 
> GIMPS). After having taken the RAM down to 100MHz the high pitched squeak and 
> the hanging has stopped.

The most likely reasons for the "squeak" are:

(a) some sort of mechanical resonance, perhaps involving the 
heatsink/CPU coupling, driven by magnetic linking from the 
"undertones" of CPU activity. Unless the noise is LOUD this is 
most probably harmless.

(b) possibly the power supply is having difficulty providing sufficient 
current. Switched mode power supplies usually operate 
"supersonically" (around 50 KHz) but the frequency can be dragged 
down into the audible range by excess current draw. This would 
cause a squeal. Now a Tbird system does draw more current when 
the FPU is very active i.e. when Prime95 is running, but this should 
persist whilst Prime95 is running, rather than being transient as the 
program starts, as your message suggests. (?)

Excess current draw would tend to cause system instability, 
getting worse as the smoothing capacitors age rapidly due to 
running under heavy strain. If your PSU is less than 250W or not 
marked as "Athlon approved" (see the AMD website) then I'd 
recommend changing it for a 300W+ AMD/P4 approved PSU.

Changing the system bus from 133 MHz to 100 MHz will 
underclock the system substantially (1.33 GHz will run at only 1 
GHz) which will have the effect of reducing the current draw. Even 
changing the memory bus from 133 MHz to 100 MHz leaving the 
system bus at 133 MHz will reduce the load since the FPU will be 
inactive some of the time - the memory bandwidth is in any case 
inadequate to keep the FPU fully occupied; cutting the memory 
bandwidth by 25% obviously doesn't improve matters!
> 
> 1) Will the hanging have any impact on the credibility of what PRIME95 finds - 
> I am around 86% through a L-L Test?

Might be OK. Try running the full selftest, or the torture test. If 
these show no problems (despite the system hanging 
occasionally) you should be allright. In any case there's not much 
point in aborting 86% of the way through. Suggest you switch to 
DC assignments (after finishing this one) until the problem is 
sorted, though.
> 
> 2) Can anyone shed light on whether it is my ram at fault or something on the 
> motherboard? (sorry, I realise this is not really GIMPS related)

More likely M/B rather than RAM. Some Abit KT7s have weak 
smoothing capacitors, though others seem to be fine. This may 
depend on having an adequate PSU and/or adequate CPU cooling - 
the caps which blow are those next to the CPU, which therefore 
run hot if the CPU is inadequately ventilated. You might want to 
check out the memory using Memt25 but I doubt you will find a 
problem.

If it is a PSU / smoothing cap problem, expect it to get worse, 
slowly at first but eventually getting so bad that the system won't 
boot. Repair is impractical; replacing the M/B will fix the problem 
temporarily, but it will recur, sooner rather than later, unless the 
PSU is upgraded.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Moving an assignment

2001-12-12 Thread bjb

On 12 Dec 2001, at 12:41, Mary Conner wrote:

> How would I go about giving an exponent I've been assigned to someone
> else?  From previous discussions, it seems as though PrimeNet would reject
> the assignment as not belonging to him, but I've seen exponents moved from
> one account to another without expiring, so I know it can be done.  I
> haven't been able to find any option on Prime95's menus, or anything on
> the web page about how to do this so that PrimeNet will accept the
> reassignment.

Simple - just let the other person run the assignment. The server 
will accept the result though it will probably mutter about 
"assignment not valid for this user", and CPU credit will not be 
awarded in the PrimeNet table.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: New exponents

2001-12-06 Thread bjb

On 5 Dec 2001, at 22:33, George Woltman wrote:

> > > No.  The server never contacts the client.  That's too much of a security
> > > risk in my book.

Correct. Though I suppose the server could automatically e-mail a 
warning to the user that their assignment has been pre-empted.
> >
> >That isn't exactly what I meant.  Given that his exponent has been expired
> >and assigned to me, if he then checks in later to report further progress
> >on the exponent, will the server tell his client that the exponent has
> >been expired and assigned to someone else, or will it tell me the next
> >time I check in that he is still working on it?

Surely _his_ checkin will result in "assignment does not belong to 
you" if he's allowed it to expire?
> 
> If he checks in his result, primenet will return error 11 - but your 
> computation
> will continue.
> 
> >Or do both of our clients continue happily chugging away on it?
> 
> I think prime95 removes it from worktodo.ini only if you have not
> started the LL test.

This makes sense... 
> 
> Obviously there is some room for improvement here.  The current scheme
> works OK for first time checks since yours would then become a valid
> double-check.

There are four points here:

(1) there's a Big Difference between abandoning a run which is only 
just started and one which is nearly complete;

(2) if you do decide to change the policy to "abandon", it would be 
wise to make the program keep the savefile (can be deleted 
manually if necessary) just in case the residuals don't match - it 
seems silly to throw away work done already when it still might be 
required;

(3) IMO the occasional triple check doesn't matter;

(4) AFAIK it is fairly rare for people who allow assignments to 
expire _having made at least one intermediate checkin_ to later 
reappear with a result.

The current policy is fine, even ideal, so far as "first LL test" 
assignments are concerned. I'm not convinced that changing the 
policy for DC assignments would result in a noticeable 
improvement in the efficiency of the project as a whole.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: New exponents

2001-12-06 Thread bjb

On 5 Dec 2001, at 21:23, Nathan Russell wrote:

> However, when the client does contact the server (every 28 days by default, 
> IIRC), will it not get an "this assignment does not belong to us"?
> 
But the test will continue if it is underway. If it isn't the first entry in 
worktodo.ini the result will be that the entry is removed by the 
program when PrimeNet signals this back.

> I know that I had that happen while I had QA and primenet work queued on 
> the same machine, and in fact it happened often enough to be rather annoying.

QA is different - if the exponent is in currently active PrimeNet 
ranges you're stuffed, because PrimeNet can only cope with one 
person "owning" an exponent at a time & QA runs execute in 
parallel, whilst PrimeNet will completely disown exponents outside 
its current active ranges.

I run QA assignments with PrimeNet comms disabled. 


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-05 Thread bjb
On 5 Dec 2001, at 6:09, [EMAIL PROTECTED] wrote:

> Brian Beesley wrote:
> > On 3 Dec 2001, at 20:38, [EMAIL PROTECTED] wrote:
> [... snip ...]
> > > I think our record shows that a verified factor is still
> > > slightly (by a minute but nonzero margin) more reliable an
> > > indicator of compositeness than two matching nonzero LL
> > > residues.
> >
> > AFAIK our record does _not_ show any such thing.
> 
> Oh?  It doesn't?

There is no evidence of any verified residuals being incorrect. 
Neither is there any evidence that any verified factors are 
incorrect. Whatever theory states, the experimental evidence is 
that verified factors are no more (or less) reliable than verified 
LL tests.

Suppose a taxi firm runs 10 Fords and 10 Hondas for a year. None 
of them break down. On that basis alone, there is no evidence 
whatsoever that one make is more reliable than the other. 
Naturally, other companies' experimental evidence may vary.
>
> [ big snip ]
> There is a small chance that we may accept an incorrect factor even
> after double-checking it, but that chance is even smaller than the
> small chance that we may accept an incorrect double-checked L-L
> residual.

I doubt very much that we would accept an incorrect factor. The 
double-checking is done with completely different code. Besides 
which, checking a factor takes a few microseconds, whereas 
checking an LL test likely takes a few hundred hours.
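
For anyone wondering why checking a factor is so cheap, here is a 
minimal sketch of the underlying test (not the server's actual code):

def is_factor_of_mersenne(p, q):
    # q divides 2^p - 1 exactly when 2^p mod q == 1, which is a
    # single modular exponentiation - hence the microseconds.
    return pow(2, p, q) == 1

# e.g. is_factor_of_mersenne(11, 23) is True, since 2^11-1 = 2047 = 23*89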

If anything goes wrong during a factoring run, we would be far 
more likely to miss a factor which we should have found rather 
than vice versa. This is relatively unimportant from the point of 
view of finding Mersenne primes; the effect is a small loss of 
efficiency.

> How does that compare to the observed rate of incorrect factors
> discovered after triple-checking _them_?

AFAIK no-one bothers to triple-check factors. Nowadays factors 
are verified on the server at the time they're reported. I'm not 
privy to the server logs, so I simply don't know how many get 
rejected (except for the server _database_ problem reported 
recently - not related to the actual factor checking code - which 
causes problems with genuine factors > 32 digits in length). 
However, I can think of at least one way of pushing up the 
rejection rate.
> 
> How many of those problems caused errors during L-L verifications,
> and how many caused errors during factor verifications?
> 
All during LL first tests, or QA runs (which are LL & DC done in 
parallel with intermediate crosscheck points).

> However, you may not have spent anywhere near as much time doing
> factor verifications as you have doing L-L verifications, so it may
> not be valid to draw any conclusion about comparative error rates on
> your system.

I've spent no time at all verifying factors - it would take less 
than a minute to verify everything in the factors database. The 
total factoring effort I've put in (ignoring ECM & P-1 on small 
exponents) is only about 3% of my total contribution, so I would 
expect not to have had any factoring errors. Besides which, trial 
factoring _should_ have a lower error rate than LL testing, due to 
the lower load on the FPU (which is usually the CPU element most 
sensitive to excess heat) and the smaller memory footprint (less 
chance of data getting clobbered by rogue software or random 
bit-flips).


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: New exponents

2001-12-05 Thread bjb

On 4 Dec 2001, at 17:59, George Woltman wrote:

> >Case 1: I finish first, find a prime and announce my discovery. I did
> >the work but the exponent is assigned to you! Who gets the
> >credit???
> 
> You, get the credit.  User b will be mighty disheartened.  I know first hand.
> Slowinski's Cray beat my own Pentium-90 by just a few days in the discovery
> of M#34.

Ooof. I didn't know about that.
> 
> As to legal issues, the disclaimer section of the download page states:
> 
> We are not responsible for lost prize money, fame, credit, etc. should 
> someone accidentally or maliciously test the number you are working on and 
> find it to be prime.

The case I describe might not fall under the legal definition of 
"accidental" or "malicious". It would be possible for one or other of 
the parties to argue that they have been assigned the work by 
PrimeNet and that PrimeNet was therefore liable for lost prize 
money. On the "belt and braces" principle I think that the wording 
should be reinforced.

As to people working independently - I don't see how you can cover 
that one, since the independent party will not be covered by the 
disclaimer if they never downloaded any variant of Prime95. In that 
case it _is_ accidental that PrimeNet should assign work which is 
being replicated elsewhere without its knowledge.

Nowadays it's at least a reasonable assumption that people with 
interests in this field _would_ obtain assignments through 
PrimeNet. In the early days it would be much more likely that 
people were replicating each other's work without knowing it. 
Presumably such a situation is what led to your disappointment.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #913

2001-12-05 Thread bjb

On 4 Dec 2001, at 20:36, Gordon Spence wrote:

> >I've triple-checked thousands of small exponents - some of the
> >ones where the accepted residual was recorded to only 16 bits or
> >less, which makes the chance of an undetected error _much_
> >greater (though still quite small) - so far no substantive errors in the
> >database have come to light. A very few (think fingers of one hand)
> >instances of incorrectly matched residuals have come to light -
> >completing the double-check in these cases proved that one of the
> >recorded residuals was correct.
> 
> Currently my team report cleared list shows 338 double checks and 12 double 
> checked factored including this monster

I'm not talking about missed factors. The database shows that all 
the small exponents ( < 1 million) have been factored at least a bit 
or two deeper than the "calculated optimum", so I haven't even 
been trying. I've found quite a few factors of these small exponents 
by running P-1, that's a different story.

My point here is that if we have database entries with at most one 
64-bit residual (so that the matching residuals depend on only the 
bottom 16 bits), so far when I've run a triple-check LL test the 
bottom 16 bits have always matched; indeed when there is already 
one 64-bit residual to compare with, my triple-check has always 
matched that.

The work I'm doing in this area is a marginally useful way of using 
systems which are rather too limited to do other work.
> 
> 6630223 87 DF 195139088771490335223859559 07-Apr-01 07:58 trilog
> 
> (In fact when it was checked in PrimeNet initially rejected it because it 
> was longer than this sort of check was supposed to find! Has anyone found a 
> factor bigger than 87 bits using Prime95?)

Interesting, though George's answer explains this. I wrote the factor 
validation code running on the server; I specifically wrote it to be 
able to handle factors of any size (subject to system memory at 
approx 8 bytes per decimal digit of the factor), with a rider that run 
times for extremely large factors (>> 50 digits) might be 
problematic, given that the server cannot afford to spend very long 
running each validation.

My record factor found using Prime95 is

[Fri Sep 07 21:33:06 2001]
ECM found a factor in curve #199, stage #2
Sigma=6849154397631118, B1=300, B2=3.
UID: beejaybee/Simon2, P1136 has a factor: 
9168689293924594269435012699390053650369

I've actually found a couple of larger factors using P-1, but these 
don't count as the factors found were composite. This can happen 
because P-1 depends on finding the GCD of a calculated value and 
the number being factored. (So does ECM.) If you're (un?)lucky you 
can find two factors at once, in which case the result of the GCD is 
their product. This is what ECM uses the lowm & lowp files for - 
ECM is often used to try to find more factors of a number with 
some factors already known; when you get a GCD > 1 you have to 
divide out any previously known factors to find if you've discovered a 
new one.
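
A rough Python sketch of that "divide out the known factors" step - 
purely illustrative, the names are mine, and real P-1/ECM code is 
organised rather differently:

from math import gcd

def new_factor_part(x, N, known_factors):
    # x is the value produced by the powering / curve stage; the
    # interesting quantity is gcd(x - 1, N), which may be a product
    # of more than one prime factor of N.
    g = gcd(x - 1, N)
    for f in known_factors:
        while g % f == 0:
            g //= f
    return g    # anything > 1 left here is a newly found factor
                # (possibly still composite)
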
> 
> Of course some of these may be because the original check went to a lower 
> bit depth than the version of Prime95 that I used.

Naturally.

> I know from doing "deep 
> factoring" in the 60m range that one more bit of factoring can find a "lot" 
> of extra factors...

Over ranges of reasonable size, the number of factors you find 
between 2kp+1 and 2Kp+1 should be independent of p, i.e. the 
expected distribution of smallest factors is logarithmic. For factors 
of a particular absolute size, larger exponents make finding factors 
easier. The effort involved is (ignoring complications resulting from 
computer word length, which are certainly not insignificant!) 
dependent on the range of k, not the range of 2kp+1.

> So if we say that as a ballpark figure half of these are 
> due to an increase in factoring depth, then the error rate from this 
> admittedly small sample is 1.78% or in other words of the current 137,924 
> exponents less than 20m with only a single LL test we can expect to find 
> just under 2500 exponents with an incorrect result.

This is an interesting finding - roughly in line with other estimates of 
raw error rates - but I'm not sure I entirely understand the logic. I 
simply don't see how "missed factor" errors during trial factoring 
are related to incorrect residuals resulting from LL testing - except 
that the LL test wouldn't have been run if the factor wasn't missed.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: New exponents

2001-12-04 Thread bjb

On 4 Dec 2001, at 11:48, Nathan Russell wrote:

> For the information of the list, the folks who *want* to try to get 
> exponents below (the presumed) M#39 might want to look into manually 
> fetching work at 2:00 (IIRC) in the morning Eastern standard (7 or 8 GMT), 
> when the server releases exponents of folks who have stopped participating 
> without properly quitting.

The best time to get small exponents is 0601 GMT.

I thought the point of the original message was that someone 
specifically wanted to get larger exponents. The best time to do 
that is 0559 GMT. You do run a "risk" that someone will throw 
back a "small" exponent just before you grab one, but you can 
always throw it back & try again another day.

> You face a small risk that the original tester 
> will submit a result, but even in that case you'll get credit for the 
> double-check (though that would be a small consolation if a prime were 
> found).

This would be an interesting situation:

(a) I acquire an assignment, let it expire but carry on working on it

(b) You grab the assignment when it's recycled

Case 1: I finish first, find a prime and announce my discovery. I did 
the work but the exponent is assigned to you! Who gets the 
credit??? Well, I suppose I _could_ grab the credit by making a 
public announcement without checking the result in to 
GIMPS/PrimeNet, but this is definitely against the spirit of the 
project. Conversely you hardly deserve to get the credit for a 
discovery which you haven't yet made. I think it's better to withhold 
publicity until you finish, then we can be treated as co-discoverers.

Case 2: We both finish independently (it doesn't matter in which 
order, provided that we are not in direct contact with each other & 
aren't aware of each other's discovery until after we have 
communicated the result to GIMPS/PrimeNet). This case is clear 
cut, we're co-discoverers.

Case 3: You finish first & communicate your discovery through 
GIMPS/PrimeNet in the "usual" way. This case is also clear, you're 
the discoverer.

This probably needs to be spelled out in legal language just in case 
it happens in a situation where a substantial cash prize is involved 
(enough for it to be worthwhile paying to fight a court case).


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



RE: Mersenne: Re: Factoring benefit/cost ratio

2001-12-04 Thread bjb

On 4 Dec 2001, at 1:19, Paul Leyland wrote:

> 
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] 
> 
> 
> > 
> > (Aside - we're rather more likely to find a 75-bit factor than a 75-
> > digit factor. In fact, finding a 75-digit prime factor of a 
> > "no factors 
> > known" Mersenne number with an exponent under 80 million would 
> > be a significant achievement in itself.
> 
> Finding a 75-digit prime and non-algebraic factor of any integer with
> more than, say, 200 digits would be a significant achievement.  The
> record today is 55 digits; it reached 53 digits 3 years ago and only a
> handful of factors with 50 or more digits have been found ever.  I have
> precisely one of them to my credit 8-)

Sure. Not quite the same since there appears to be no certificate of 
primality, but on 30 Aug 2001 there was a message on this list to 
the effect that M727 (c219) = prp98.prp128. So much ECM work 
was done on M727 (before the NFS people started work) that it is 
highly unlikely that there are any factors < 10^50, which means 
that at least the 98-digit probable prime is almost certainly a 
genuine prime. (Maybe that's been proved by now. ECPP on 
general numbers of around 100 digits isn't very expensive.)

I think the 55 digit record applies to ECM. A number of much larger 
factors (not counting cofactors) have been found using number field 
sieve techniques.
> 
> > At the moment, having found one factor, we quit. That's sufficient 
> > effort for the purpose of trying to find Mersenne primes. A little 
> > more work might break the cofactor down further.
> 
> Actually, some of us don't quit.  But we're a small bunch of weirdos,
> and we only work on tiny exponents anyway.

I was speaking for the project rather than myself. I'm also one of 
the small bunch of weirdos.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-04 Thread bjb

On 3 Dec 2001, at 20:38, [EMAIL PROTECTED] wrote:

[... snip ...]
> Nevertheless, I stand by my assertion that "a 'Factored' status is
> better for GIMPS than a 'Two LL' status" ... insofar as our LL testing
> remains less reliable than our factor verification.  Double-checking
> certainly improves LL result reliability, but I think our record shows
> that a verified factor is still slightly (by a minute but nonzero
> margin) more reliable an indicator of compositeness than two matching
> nonzero LL residues.

AFAIK our record does _not_ show any such thing. In _theory_ 
there is a very small chance that we _may_ have accepted an 
incorrect residual even after double-checking. There is a small 
chance in such a case that the residual should have been zero, i.e. 
we missed a Mersenne prime.

I've triple-checked thousands of small exponents - some of the 
ones where the accepted residual was recorded to only 16 bits or 
less, which makes the chance of an undetected error _much_ 
greater (though still quite small) - so far no substantive errors in the 
database have come to light. A very few (think fingers of one hand) 
instances of incorrectly matched residuals have come to light - 
completing the double-check in these cases proved that one of the 
recorded residuals was correct.
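
Back-of-envelope, as a sanity check on the 16-bit worry (my own 
rough reasoning, not a rigorous error model):

# A wrong LL run produces an effectively random residual, so two
# independent wrong runs "match" with probability roughly
print(2.0**-16)   # ~1.5e-05 if only 16 bits of residual were kept
print(2.0**-64)   # ~5.4e-20 if the full 64-bit residual is kept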

[... snip ...]
> Some error sources would be more likely to affect the longer LL test
> than the shorter factor verification.)

Indeed - if errors occur at random intervals, results should get less 
reliable as runs take progressively longer. However there does 
seem to be a wide variation in the reliability of systems. Some 
seem to have a lot of problems, some very few (if any). 

I've detected four problems on my own systems so far; two of these 
were caused by a failed cooling fan on a P100 system (unlike 
current systems, the cooling was _almost_ sufficient even without 
forced ventilation); one was caused by a configuration error - 
running Glucas on an Alpha system I inadvertently allowed "fast" 
arithmetic and turned off error checking, which was a really stupid 
thing to do on a system with a 53-bit mantissa FPU - as this was a 
QA run on a 33 million range exponent, I was lucky to lose only 3 
months work. The last one I never got to the bottom of, but I 
strongly suspect that Prime95's workspace memory was clobbered 
during a software installation on a Windows 98 system.
> 
> > It matters to those who are attempting to fully factor Mersenne
> > numbers, but that's a different project altogether,
> 
> Some of us like to participate in both.  :-)
> 
> > and one that is decades (at least) behind GIMPS. The only reason we
> > do any factoring at all is to reduce the time spent on LL testing.
> 
> But if factoring is not really part of GIMPS's purpose (and I agree it
> isn't), how can a separate factoring effort be "behind" GIMPS at all?
> Aren't they measuring their progress in a different, non-comparable,
> dimension?

Indeed.

There is still a distinction between finding a factor - _any_ factor 
will do, it doesn't even matter if you find a composite factor which 
you can't break down further - which eliminates the need to run a 
LL test in order to eliminate a Mersenne number as a candidate 
prime, and finding all the factors (or even just the smallest factor) of 
a Mersenne number.
> 
> > Besides, if you do manage to find a 75-digit factor of a
> > 2-million-digit Mersenne number, that still leaves a 125-digit
> > remainder. Really not much help :-)

(Aside - we're rather more likely to find a 75-bit factor than a 75-
digit factor. In fact, finding a 75-digit prime factor of a "no factors 
known" Mersenne number with an exponent under 80 million would 
be a significant achievement in itself. The slip doesn't alter the 
author's argument.)

At the moment, having found one factor, we quit. That's sufficient 
effort for the purpose of trying to find Mersenne primes. A little 
more work might break the cofactor down further.

Attempting to test the primality of the cofactor - even using PRP 
rather than a stringent test - would be interesting, but would 
unfortunately be about as expensive as running the LL test that 
we've managed to avoid!

> [ big snip ]
> I'm saying that it is rational for someone to decide to factor past
> the Prime95-calculated tradeoff points, and that it is unjustified to
> criticize "extra" factoring on the grounds that going past the
> Prime95-calculated tradeoff points is wasted effort.

It's rational for someone to have fun. If they want to factor past the 
Prime95-calculated tradeoff points, that's fine. Even better if they 
record their unsuccessful efforts, so that other people don't waste 
time searching the haystack for the needle that isn't there.

There is another point here. We _can_ (by trial factoring, and at a 
cost) eliminate at least some prime candidates from the class of 
Mersenne numbers which are so large that we will _never_ be able 
to LL test them - not even if the whole 

Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-03 Thread bjb

On 3 Dec 2001, at 20:45, Daran wrote:

> Shouldn't that be 1.015 for double-checking assignments?

I think 1.03. However you do have a point. P-1 limits do depend on 
the trial factoring depth, and are much smaller for DC assignments 
than for first tests, so there is already something "built in".
> 
> Also does the cost part of the calculation recognise the increased cost of
> trial-factorisation after 2^64?

Yes. The trial factoring depth is constant at 64 bits from ~8.5 
million to ~13 million. Don't forget that the number of candidates 
which need to be checked is _inversely_ proportional to the 
exponent.
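
Roughly, ignoring the sieving of candidates (a quick sketch, the 
function name is mine):

def tf_candidates(p, bits):
    # Candidate factors have the form q = 2*k*p + 1, so the number
    # of k values with q below 2^bits is about 2^(bits-1)/p -
    # inversely proportional to the exponent p.
    return 2**(bits - 1) // p
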
> 
> I've noticed on occasion that I've had to do an extra round of trial
> factoring before proceeding with an doublecheck.  This indicates that
> previous factorisation has been non-optimal, or have the estimates for the
> relative costs of factoring vs. LL testing changed with the introduction of
> new hardware?

I think the latter has something to do with it - PPro is about twice 
as efficient at factoring compared with LL testing as a plain 
Pentium. This is because of much improved pipeline organization 
including the provision of spare registers enabling speculative 
execution, which greatly increased the throughput of the floating 
point unit in particular.

Another factor is that early versions of the program were unable to 
factor as deep as current versions. 
> 
> Finally if P-1 factorisation were to be spun off into a separate work unit,
> then the optimal arangement would be to trial factor while
> cost_of_trial_factoring * chance_of_P-1_factoring is less than
> cost_of_P-1_factoring * chance_of_trial_factoring.  Then P-1 factorise.
> Then complete trial factorisation according to the above formula.

Interesting - but I think the effect would be small.

What about factoring to a "fractional" depth? With a roughly 
logarithmic distribution of factors, surely about half the factors 
between 2^n and 2^(n+1) would be smaller than 2^(n+0.5), whilst 
searching to 2^(n+0.5) would take only about 41% of the time 
taken to search the whole interval.
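
A two-line check of those figures, assuming uniformly spread 
candidates and a logarithmic distribution of smallest factors:

work_fraction = (2**0.5 - 1) / (2 - 1)   # candidates up to 2^(n+0.5): ~0.414
factor_fraction = 0.5                    # fraction of factors below 2^(n+0.5)
print(f"{work_fraction:.0%} of the work finds {factor_fraction:.0%} of the factors")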

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #912

2001-12-02 Thread bjb

On 2 Dec 2001, at 19:57, Gordon Spence wrote:

> >From: "Steve Harris" <[EMAIL PROTECTED]>
> >
> >George did say that, and I was aware of his statement, but that still has no
> >effect on the point I was making.
> >George's GIMPS stats also give no credit at all for finding factors,
> 
> Tell me about it, over 150,000 and no mention anywhere

Ah, but George's GIMPS stats encourage factoring by removing LL 
testing credit when a factor is subsequently found. (Either you 
should have done more factoring before you started LL testing, or 
the factoring you did was expensive!)

> >From: [EMAIL PROTECTED]
> >
> >The other problem here is that the "known factors" database does
> >not include the discoverer.
> 
> A particularly sore point. If we maintained a top "savers" list whereby for 
> every factor found you were credited with the time an LL test would have 
> taken, then I and the other "Lone Mersenne Hunters" would pulverise these 
> big university teams.

Umm... let's put it this way. Who gets the credit for finding the 
factor 2p+1 when p+1 is divisible by 4 and both p & 2p+1 are 
prime? That's a _big_ bunch of numbers ... I'm not sure that there 
are an infinite number of 3 mod 4 Sophie Germain primes, but there 
certainly are a _lot_ of them... and I think you have to credit them 
all to the person who proved the theorem. 
> 
> 150,000 factors in the 60-69m range, at an average of 27.2 P-90 years each 
> - just over 4,000,000 years saved

With due respect, I don't think it's entirely reasonable to award 
credit for more effort than was actually expended, or for more effort 
than it would have taken to run two LL tests.

I think some realistic formula for finding the factor 2kp+1 would look 
like:

k>=2^64   2 x LL test CPU time
2^63 <= k < 2^64   1 x LL test CPU time
2^62 <= k < 2^63   0.5 x LL test CPU time
2^61 <= k < 2^62   0.25 x LL test CPU time
etc etc

_provided_ that no credit was given for factoring work which failed 
to find a factor.
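
One way of expressing that sliding scale - just my reading of the 
table above, certainly not an agreed policy:

def factoring_credit(k, ll_test_cpu_time):
    # Credit halves for each bit that k falls below 2^64, capped at
    # the cost of the two LL tests which finding the factor saves.
    multiplier = min(2.0, 2.0 ** (k.bit_length() - 64))
    return multiplier * ll_test_cpu_time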

However there are clearly philosophical differences here as well as 
practical ones - what Gordon says is clearly absolutely true in the 
literal sense.

Possibly the best idea is the simplest - leave the current 
procedures alone!


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-02 Thread bjb

On 1 Dec 2001, at 17:39, George Woltman wrote:

> This is because my rather limited reporting software only adds up the
> LL results in the verified and one-LL-tests databases.  Once an
> exponent is factored it is removed from those databases.

The other problem here is that the "known factors" database does 
not include the discoverer.
> 
> I prefer a factor to a double-check.  But it is hard to quantify
> "prefer" in a mathematical formula for computing trial factoring
> limits.  Prime95 uses the formula:   cost_of_factoring must be less
> than chance_of_finding_a_factor times 2.03 * the cost_of_an_LL_test.
> 
> This should maximize GIMPS throughput.  The 2.03 is because we must
> run two (or more) LL tests to do a double-check.

Again there is a complication, since the ratio of time to do factoring 
assignment X to time to do LL/DC assignment Y varies according 
to the processor. e.g. PIIs are relatively quick at factoring, whereas 
P4s are much more efficient LL testers.
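
In other words, anyone applying the formula to their own machine 
rather than to a project-wide average could plug in their own 
timings - a sketch, with made-up parameter names:

def deeper_factoring_worthwhile(tf_cost, p_find_factor, ll_cost,
                                ll_tests_saved=2.03):
    # All costs measured on *your* CPU, in the same units (e.g.
    # hours); p_find_factor is the chance that the extra bit of
    # trial factoring turns up a factor.
    return tf_cost < p_find_factor * ll_tests_saved * ll_cost
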
> 
> P.S.  I'll comment on the M#39 news later.  For now lets celebrate our
> grand accomplishment rather than worry about non-optimal press
> coverage.

Hear hear. Congratulations to the discoverer (whoever he/she is), to 
George on his program finding a fifth Mersenne Prime, and to 
everyone involved in the project, without whom we wouldn't have 
reached this milestone.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M#39 news!

2001-12-01 Thread bjb

On 1 Dec 2001, at 17:47, Alexander Kruppa wrote:

> Warut Roonguthai wrote:
> > 
> > http://www.academicpress.com/inscight/11302001/grapha.htm
> > 
> 
> Look like the cat is out of the bag now - it's 2^13,466,917 - 1. Was
> this early publication indended? I thought the press release was due
> only after the independent double check completed, but then they quote
> Tim Cusak of Entropia, which makes it sound like an official
> announcement. Or is the official double check finished already?

Umm. If this is true I'm _very_ annoyed about the leakage. My 
hope is that it _isn't_ true that the exponent is 13466917. "Official" 
supporting evidence is admittedly thin, but that Mersenne number 
has 4,053,946 digits - if that was true, I would have expected 
George's initial announcement to say "over 4 million digits" rather 
than "well over 3.5 million digits" (which would nevertheless be 
true!). Of course I could simply be misreading George's mind - 
possibly if he had said "over 4 million" it would have narrowed the 
field sufficiently to make identification easy.

But is it _really_ too much to ask people to wait just one more 
week for the official verification run to complete? Maths isn't like 
politics, what's true today won't be different tomorrow...

I would strongly suggest that procedures are changed so that the 
next time a Mersenne prime is discovered, no information at all is 
released except to prior discoverers of Mersenne primes and any 
others (at George's discretion) who might be in a position to do the 
official verification run.

Incidentally I did reply to a request for information about M39 which 
appeared on the NMBRTHRY list yesterday morning. I was very, 
very careful to include no information which has not been released 
by George in his messages to this list, and I made sure George 
got a copy of my reply.

Irritated
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Trial factoring time

2001-11-27 Thread bjb

On 26 Nov 2001, at 19:18, George Woltman wrote:

> Hi,
> 
> At 12:46 AM 11/27/2001 +0100, george de fockert wrote:
> >One of my machines does trial factoring.
> >Now in the M1790 range, factoring to 2^66.
> >This takes a very long time, is it better to let it double check ?
> 
> It is a matter of personal preference.  You can figure out how long a
> double-check would take at http://www.mersenne.org/bench.htm
> 
> Another option for slow machines is ECM factoring.  See
> http://www.mersenne.org/ecm.htm for details (this is not an
> automated project)

Or P-1 factoring. Again not automated.
> 
> >Or is it time for a more efficient trial factor implementation (if
> >possible ?).
> 
> If someone wants to volunteer to improve factoring performance that
> would be great.  The P4 could use an SSE2 implementation. 
> CPU-specific factoring code over 64 bits could be written.

I had a good look at this a couple of years ago. There are 
essentially two parts to the trial factoring "problem": one is 
generating candidate factors (or, rather, quickly avoiding those 
values which cannot for one reason or another possibly be factors), 
the other is actually trying the factor.

Whilst I was able to get reasonably close to the performance of 
George's code for trying the factor - possibly even a tiny bit better 
for factors only just over 2^64 - I got nowhere near the 
performance of his candidate elimination code.

One of the lessons I learned from the effort is that multiple 
precision arithmetic is _hard work_ on systems which have only 
one carry flag. If SSE/SSE2 enables you to have multiple carry 
flags then you could effectively try two or more factors in parallel, 
which would yield a considerable performance increase. But I'm not 
sure offhand whether the instruction set supports this. In any case, 
slower systems (the ones that _should_ be running factoring?) 
don't support SSE or SSE2 at all.
> 
> While factoring could probably be improved tens of percent, it has not
> been a high priority item for me.  A 10% boost in factoring speed lets
> you factor (on average) 10% deeper, which finds about 1 more factor in
> every 650 exponents.  That isn't a great throughput improvement for
> the GIMPS project.

Indeed.

The other way to look at this (same conclusion) is that factoring is 
way, way ahead of LL testing, so any perceived inefficiency is not 
critical.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Primenet communication details.

2001-11-21 Thread bjb

On 20 Nov 2001, at 23:46, Martijn Kruithof wrote:

> Are you talking about a linux system?!
> 
> Add a cron job executing:
> 
> ifup ppp0 (or your distro's equivalent to get up the dail up link)
> sleep 10 ; mprime -c ; sleep 5 ; ifdown ppp0
> 
> Every week for instance (also configure mprime accordingly)
> Configure mprime to not contact priment automatically.

... but isn't there an option for autodial - in linux as well as
windoze?

Personally I don't use it because I've had trouble with autodialled
connections not dropping - and I'm on a pay-by-time connection :(

BTW if you use the above script you should set your "days 
between sending new end dates" to _6_ rather than 7. The reason 
is that mprime -c will do nothing unless prime.spl exists, and 
prime.spl will not be created until the next multiple of 65536 
iterations following N days since the last PrimeNet checkin.

Gareth, the server used is 216.120.70.80. Yes, the destination port
used is (TCP) port 80. Unless you have the network protocol set to
RPC, which is deprecated. Unfortunately the source port is 
"random" but if you configure your firewall to allow outbound 
connections to TCP port 80 to 216.120.70.80 and inbound packets 
from any host on any TCP port provided the connection is open you 
will be OK.

If you want to be _truly_ paranoid, allow inbound packets for open 
TCP connections on source port 80, any destination port from 
216.120.70.80 only, but don't be surprised if your web browser has 
difficulty reaching other hosts!

Less paranoid configuration to allow web browsing & PrimeNet 
comms on port 80, but block attempts to activate stealth trojans
(which of course you shouldn't have):

permit inbound, open TCP connections, source port 80, any host
block inbound, open TCP connections, any other source port, any 
host

You can add extra "permit inbound" for other services as needed, 
but ftp is a problem - you either allow inbound connections on port 
20, or use passive ftp & accept incoming packets on open TCP 
connections on random ports.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: AMD XP 1600+

2001-11-21 Thread bjb

On 20 Nov 2001, at 21:11, George Woltman wrote: 

> At 07:54 PM 11/20/2001 -0500, A & T Schrum wrote:
> >I looked but hadn't found how an AMD XP 1600+ should identify
> >itself within Prime95. It calls the processor a Cyrix 6x86 running
> >at 144 MHz when it is running on a 1400 MHz motherboard. I set it
> >to an Athlon at 1400, are the settings correct for an AMD XP 1600+?
> 
> Yes.  BTW, if someone wants to improve the code in prime95 to
> properly identify AMD chips, I'd be glad to include it in a new
> release. Contact me and I'll point you at the code that needs
> tweaking.

I changed the CPU in one of my systems (running mprime) from 
Athlon 1200/266 to Athlon XP 1700+. I did nothing except change 
the CPU speed in the menu. The speed improved - by about 17%, 
which is about what is expected - and it's running 7C cooler, but 
that almost certainly has more to do with the fact that I also 
changed the cheap "AMD approved to 1.3GHz" CoolerMaster 
heatsink/fan for a Thermaltake unit (with the 26 cu ft/min YS fan, 
not the 38 cu ft/min Delta fan, which might do a better job of 
cooling but is _very_ noisy!)

Is there an issue here in that Prime95 (& presumably NTPrime) 
detects the processor at startup but mprime doesn't? Or does the
program only "guess" the CPU type & speed when the program is
installed?

BTW, all Athlon XP processors run on a 266 MHz FSB. It looks as 
though the "speed rating" relates to the actual core speed as follows:

Take the "speed rating" (1600 in the case of an Athlon XP 1600+)
Add 500          (2100)
Divide by 1.5    (1400)
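
A quick script version of that rule, for anyone who wants to plug in 
other model numbers (a sketch only - the +500 / divide-by-1.5 relation 
is inferred from published model numbers, not an official AMD formula):

    def athlon_xp_clock_mhz(rating):
        # approximate "model rating" -> actual core clock, per the rule above
        return (rating + 500) / 1.5

    for rating in (1500, 1600, 1700, 1800):
        print(f"Athlon XP {rating}+  ~  {athlon_xp_clock_mhz(rating):.0f} MHz")
    # 1500+ ~ 1333 MHz, 1600+ ~ 1400 MHz, 1700+ ~ 1467 MHz, 1800+ ~ 1533 MHz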


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Unverified discovery makes Slashdot.

2001-11-15 Thread bjb

On 14 Nov 2001, at 21:27, Nathan Russell wrote:

> For better or for worse (I think for better - it should attract a few
> searchers) the new Mersenne report has made it to the popular
> technical/open-source/geek news site Slashdot.

Indeed. Downloads of Mersenne-related software from my anon ftp 
server are currently running at least 10x higher than normal. (So 
are attempts to misuse the server - it has been explicitly configured 
to reject "port bouncing" to a third party address - I'm not operating 
an anonymising service!)

> [ ... snip ...]
> -"The Primenet list has confirmed that while they still need to
> totally test it out (which should be done by the 24th), they believe
> that the number found today is the 39th positive." (To my knowledge,
> only George has directly stated that he views the result to be
> correct.  While I don't question his word, even if he is correct, the
> odds are in favor - but not *grossly* in favor - of the new prime
> being the 39th Mersenne - NOT the 39th prime, which is similiar to an
> error made in one of Cray's press releases IIRC)

The chance of a false zero residual after precisely (n-2) iterations 
due to hardware glitches or "random" software errors is surely 
only 1 in (2^n-1). If a zero residual is _genuinely computed_ then 
surely the chances are very, very, very high that a Mersenne prime 
has been discovered.

Conversely, and assuming that the probability of any single LL test 
being affected by a hardware glitch or data-value-independent 
software error is 0.01 (which is probably a low estimate), the 
probability of two runs both going wrong independently and yet 
returning the same _64 bit_ residual is 1 in 10,000 * 2^64. This is 
the approximate probability that a Mersenne prime will be missed 
_despite_ the double-checking procedure; it is of course very 
small, but is absolutely COLOSSAL compared with the probability 
of a false discovery _without_ verification.
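
To put rough numbers on both of these figures (a back-of-envelope 
sketch using the 0.01 per-test error rate assumed above, nothing 
official):

    from math import log10

    p = 13_466_917        # the exponent under discussion
    p_err = 0.01          # assumed probability that a single LL run goes wrong

    # both independent runs go wrong AND still agree on a random 64-bit residual
    missed = p_err * p_err * 2.0 ** -64
    print(f"missed despite double check: about 1 in {1 / missed:.2g}")  # ~1.8e23

    # a single glitched run producing a false zero residual (mod 2^p - 1) by chance
    print(f"false zero residual: about 1 in 10^{p * log10(2):.0f}")     # ~10^4053946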

Verification is still, of course, necessary in order to eliminate the 
possibility of a systematic error (in program or hardware); also, and 
probably most importantly, to eliminate absolutely the chance of a 
"forged" discovery.

BTW there are a large number of small exponents (almost all under 
1 million) for which there aren't even two matching 64 bit residuals - 
early programs often reported residuals to 16 bits, in some cases 
even less. For this reason I'm selectively triple checking these 
cases, using old systems which are too slow and/or too memory 
limited to be useful for anything else. I most certainly do _not_ 
expect to find any missed primes! If anyone wishes to assist in the 
task of cleaning up this anomaly in the database, please contact 
me privately.

As to the chance of the newly discovered unverified number being 
_numerically_ the 39th Mersenne prime (as opposed to the order of 
discovery), surely the announcement that it is "probably" M39 has 
to wait until all smaller exponents have been LL tested once, and 
not confirmed until double-checking passes that exponent.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: prime

2001-11-14 Thread bjb

On 14 Nov 2001, at 10:09, Henk Stokhorst wrote:

>Prime, VERIFIED   :  1
>Prime, UNVERIFIED :  1

Yes, I saw this in the report produced at 07:00 GMT this morning. 
Certainly wasn't there yesterday am!

Last week I joked "of course, we could find two tomorrow".

I have no knowledge as to what has triggered this report but the 
"obvious" explanation - that two primes have been found on the 
same day, and one has already been verified - sounds implausible.

Assuming that a single prime has been found, there could be a 
problem with the (automatic) PrimeNet status page generation: 
after all, this situation doesn't often arise...

Case 1) A prime is reported; the user gets excited and reports it 
again using the manual testing page. Could the PrimeNet 
automatic report generator consider this to be a double-check, and 
therefore verified - but also keep the "unverified" record? Note, 
normally double-check verification is done by George using the 
PrimeNet server transaction logs when he updates the database - 
usually on a Saturday - George's method eliminates multiple 
notifications, which will have the same offset when one of the 
Prime95 family of programs is used, or the same program, user & 
computer ID otherwise.

Case 2) Rather simpler - a prime is discovered during a double-
check; the report generator "knows" there is no prior notification 
that the exponent yields a Mersenne prime (so it must be 
unverified) yet the result "must be a verification" because the 
assignment is a double-check. So it gets counted twice, once in 
each category.

Case 3) No primes have been discovered, but someone has 
discovered how to insert false data into the server.

As I said above, I have no knowledge as to which of these possible 
explanations (if any) is true. Yet congratulations are obviously in 
order to anyone who really has discovered a new Mersenne prime 
(or primes!) - we've waited long enough! 

Please remember that, if the new prime is in the EFF prize range, 
publicity pending verification could jeopardize a claim on the prize, 
so it is understandable that any information which might be 
released in the near future may be incomplete to the point of 
sketchiness, just as it was during June 1999.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Fw: The Mersenne Newsletter, issue #18

2001-11-13 Thread bjb

On 12 Nov 2001, at 23:34, [EMAIL PROTECTED] wrote:

> Many recent double-check assignments involve a second round of
> factoring because (a) the trial-factoring breakpoints are higher now
> than they were when the first L-L test was assigned, and/or (b) P-1
> factoring had not yet been implemented in Prime95 when they were
> assigned for initial L-L tests.  Consequent elimination of once-L-Led
> Mnumbers by second-round factoring would account for some of the
> difference, though I doubt there've been 30,000.

Nothing like 30,000! The probability of finding a factor by going one 
bit deeper (given that trial factoring is already completed to ~60 
bits) is less than 0.02; the probability of finding a factor using P-1 
with "double checking" limits is typically 0.025 (variable depending 
on how deeply trial factoring has been done). 70,000 double check 
assignments would be expected to find about 2,000 factors.
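
(For anyone wondering where the "less than 0.02" figure comes from: 
the usual rule of thumb - a heuristic assumption, not anything read 
from the database - is that roughly one exponent in b gains a factor 
when trial factoring is pushed from 2^b to 2^(b+1).)

    # rule of thumb: P(M(p) has a factor between 2^b and 2^(b+1)) ~ 1/b
    for b in (60, 62, 64, 66):
        print(f"one extra bit beyond 2^{b}: ~{1 / b:.3f}")
    # ~0.017, ~0.016, ~0.016, ~0.015 - all comfortably below 0.02
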
> 
> In addition, some folks have concentrated on factoring Mnumbers that
> have been L-L tested but not yet DC'd, specifically in order to reduce
> the number of necessary double-checks.

Eh? Doesn't it make more sense to concentrate on factoring 
Mnumbers that haven't yet been L-L tested? That way "success" in 
finding a factor reduces the number of LL tests, as well as 
(eventually) the number of double checks.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Fw: The Mersenne Newsletter, issue #18

2001-11-12 Thread bjb

On 12 Nov 2001, at 3:37, Daran wrote:

> > * Over 70,000 double-checks completed.
> > * Over 100,000 exponents tested for the first time!
> 
> Does this mean that we're not doing enough double-checking?

Seems to imply that about 40% more first tests are being 
completed than double checks. If so then the gap between 
exponents assigned for first tests and exponents assigned for 
double-checking will continue to grow.

Personally I think the cut-off point may be a bit on the low side. If a 
"typical" DC assignment on a particular system takes less than 
ten days then IMHO the system concerned should be running first 
tests. That criterion puts the cut-off point somewhere around 600 
MHz.

But, as many people have pointed out in the past, what does it 
matter if people want to run first tests on "slow" systems? The 
work still gets done in the end!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 1st 6 P90CPU yrs jump!

2001-11-12 Thread bjb

On 11 Nov 2001, at 20:12, Nathan Russell wrote:

> >The output from the script is interesting, but it can be very
> >misleading for those of us who have recently upgraded equipment.

Yes, those who have recently upgraded equipment - or added more 
systems - will find their "hours per day" very far from static. I've 
been with GIMPS/PrimeNet for 3.5 years now, but for various 
reasons my hours/day has increased by 10% over the last two or 
three months.

I think perhaps the hours/day reported should reflect recently 
submitted work - say over the last 3 months - rather than the 
historical average since the user registered with PrimeNet.
> 
> Well, the way I see it the "P-90 Standard" itself is less than
> perfect.  It was computed on a machine that can no longer even be
> purchased new in some stores, 

Hmm. It's a Long Time since I saw a P90 system in a store. My 
guess is you'd be lucky to get more than $5 for a P90 system unit 
at an auction on e.g. eBay. If you _really_ want a P90 system unit, 
you'd probably be best advised to try your local landfill.

The point is NOT whether a particular system still exists, it's to 
define a "benchmark" which can be applied to _any_ system. 
Obviously you end up with an obsolete benchmark system - you 
can compare a "new" system with an existing one, but there's no 
way George could have used a 2GHz P4 as a benchmark system 
in 1995.

> on a version of the program well before the present version.

I thought the timings had been scaled (at least to v19); the current 
benchmark system is a PII-400 (?)

On a personal note, I just passed 100 P90 CPU years accredited 
by PrimeNet - but I didn't get a telegram from the Queen, so 
presumably the benchmarks _are_ wrong!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: number of processors participating

2001-11-08 Thread bjb

On 6 Nov 2001, at 16:09, Gerry Snyder wrote:

> In a recent ethics briefing I was told that running seti@home on work
> PC's was a no-no because there had been three break-ins to computers
> using that program as a back door.
> 
> No idea whether that is really the case, and no idea what
> communication scheme seti@home uses. Anyone familiar with this
> perceived problem?

I went & checked this out.

There do appear to have been a few incidents involving seti@home. 
In one of these the server was hacked & details of the participants 
were "stolen"; at least some of the e-mail addresses were 
subsequently used by spammers. This is of course a serious 
incident, but nowhere near as serious as the disclosure of credit 
card numbers and other personal information which happened last 
weekend due to a security breach of the Microsoft Passport 
system.

In at least one other case a number of systems at a site were 
compromised by installation of the seti@home client - but only 
because a "Back Orifice" type trojan had somehow become 
attached to the copy of the client concerned - N.B. _not_ a direct 
download from the seti@home site.

This sort of thing has reportedly also happened several times with 
the RC5 client.

Note that the risk of "unofficial replacement" of clients downloaded 
from the net can be virtually eliminated by computing the MD5 
checksum of "official" binary images and posting this somewhere 
_before_ the binary itself is made available. The point is that it is 
almost impossible to modify a binary image without changing the 
MD5 checksum - in fact, to the best of my knowledge, this has not 
been demonstrated, even in a laboratory environment - a very great 
deal of trial and error would be required to match the 128-bit 
digest; unlike some simple checksum algorithms, MD5 was designed 
so that a single computation is cheap, but forging a matching digest 
would require an impossibly large number of attempts.
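
As an illustration, checking a downloaded binary against a published 
digest takes only a few lines; this is a generic sketch (the file 
name and the digest shown are placeholders, not a real release):

    import hashlib

    def md5_of(path, chunk_size=65536):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)        # hash in chunks to keep memory use small
        return h.hexdigest()

    published = "d41d8cd98f00b204e9800998ecf8427e"   # placeholder digest only
    if md5_of("mprime.tar.gz") != published:         # placeholder file name
        raise SystemExit("MD5 mismatch - do not run this binary!")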

Virus checkers are pretty effective at detecting trojans, provided the 
virus database is kept up to date. 

Finally, reasonable configuration of a firewall (even a personal 
firewall product installed on the workstation itself) will prevent 
exploitation of a Back Orifice type trojan, even if one does manage 
to sneak in unnoticed - these work by creating a listener which 
allows those "in the know" to connect to the system using telnet, 
ssh or a similar protocol, using a non-standard port number.

I have been unable to trace any instances of security breaches of 
user systems due to running "official" copies of the seti@home 
client, or dependent in any way on client/server communications.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: SMT

2001-11-04 Thread bjb

On 3 Nov 2001, at 21:40, Kel Utendorf wrote:

> At 21:01 11/03/2001 -0500, George Woltman wrote:
>  >Can prime95 take advantage of SMT?  I'm skeptical.  If the FFT is
>  broken >up to run in two threads, I'm afraid L2 cache pollution will
>  negate any >advantage of SMT.  Of course, I'm just guessing - to test
>  this theory out we >should compare our throughput running 1 vs. 2
>  copies of prime95 on an >SMT machine.

I'm not sure I fully understand the way in which a SMT processor 
would utilise cache. But I can't see how the problem could be 
worse than running two copies of a program on a SMP system. 
This seems to work fairly well in both Windows and linux regimes 
(attaching a thread to a processor and therefore its associated 
cache, rigidly in the case of Windows, loosely but intelligently in 
the case of linux).

If an SMT processor has a unified cache, cache pollution should 
surely be not too much of a problem? Running one copy & thereby 
getting benefit of the full cache size would run that one copy faster, 
(just as happens with SMP systems where memory bandwidth can 
be crucial) but the total throughput with two copies running would 
surely be greater. Especially on a busy system, where two threads 
get twice as many timeslices as one!

If there is some way in which the FFT could be broken down into 
roughly equal sized chunks, it _might_ be worth synchronizing two 
streams so that e.g. transform in on one thread was always in 
parallel with transform out on the other, and vice versa. Obviously 
you'd need to be running on two different exponents but using the 
same FFT length to gain from this technique. Whether this would 
be any better than running unsynchronized would probably require 
experimentation.
> 
> Could things be setup so that factoring and LL-testing went on 
> "simultaneously?"  This would speed up the overall amount of work
> being done.

Because trial factoring, or P-1/ECM on _small_ exponents, have a 
very low memory bus loading, running a LL test and factoring in 
parallel on a dual-processor SMP system makes a lot of sense. I 
suspect the same situation would apply in an SMT environment.

The "problem" of mass deployment (almost everyone in this 
position, instead of only a few of us) is that there is a great deal of 
LL testing effort required in comparison to trial factoring, so running 
two LL tests in parallel but inefficiently would bring us to 
"milestones" faster than the efficient LL/trial factoring split.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: number of processors participating

2001-10-31 Thread bjb

On 30 Oct 2001, at 23:28, Steinar H. Gunderson wrote:

> 200ms? Wouldn't this be an error? I can't really imagine that one
> would typically have only five time slices per second :-)

No, it's the right order of magnitude. The table I have to hand (for a 
300 MHz UltraSparc IIi running Solaris 2.6) has a whole bundle of 
values for different priority levels - 80+ excluding the real-time ones! 
- they range from 10 ms to 750 ms.

There was a drop in these values during the 80s and early 90s but 
they had to increase again when processors started depending 
heavily on cache - task switching tends to trash caches. Also as 
processors gain more & more registers etc. the number of clocks 
required to save & restore process contexts tends to increase. This 
balances out the increasing speed of processors to a considerable 
extent.

The point is that a process which becomes computable as a result 
of I/O completion etc. at a higher priority than the currently 
executing process will pre-empt even before the minimum timeslice 
has expired. If you have a lot of I/O going on then process 
switching will occur hundreds or thousands of times per second. If 
you have no I/O going on then it really doesn't matter how 
infrequently task switching occurs, as you have no means of telling 
the internal state of the system without I/O occurring.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: SV: Mersenne: number of processors participating

2001-10-31 Thread bjb

On 30 Oct 2001, at 14:39, John R Pierce wrote:

> near as I can guess, the issue here is that Prime95 is running a few
> priority notches above idle and when another process tries to run at a
> lower priority it will stall behind prime95.
> 
Well - a process that keeps being preempted will tend to rise in 
priority & a process that keeps preempting will tend to fall (unless 
it's running at "real time" priority) - so that might explain it.

Reducing the timeslice would still help in the "steady state" when 
continuous heavy I/O is occurring, by reducing the proportion of the 
time the compute-intensive process has the CPU.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: SV: Mersenne: number of processors participating

2001-10-30 Thread bjb

On 29 Oct 2001, at 19:37, John R Pierce wrote:

> I've had similar problems with a few other multimedia sorts of
> junkware. Near as I can tell, some of these things put their video or
> animation thread at Idle_Priority+1 or something, and it gets eaten
> alive by Prime95.

Isn't it the old problem - no matter what priority a process is 
running at, unless it's interrupt driven it won't preempt a process 
running at a lower priority.

The problem here is that the multimedia stuff wants to do a very 
little work but very often. It gets slowed down because Prime95 
hangs on to the processor until its timeslice expires - it almost 
never has to wait for some external event.

Ideally the multimedia stuff would be driven by timer interrupt. But 
for some reason (maybe something to do with there being a limited 
number of timer channels, and those having rather poor resolution) 
this approach seems to be quite rare on PC systems.

One way to improve the performance in these circumstances is to 
reduce the minimum timeslice for low-priority processes. This will 
cause the task scheduler to be "busier" and therefore reduce the 
overall performance to some extent, but multimedia type 
applications will coexist much more happily with compute-intensive 
tasks if this is done.

Sorry, I have no idea how to do this, or even whether it is possible, 
in any of the versions of Windows. 

The linux 2.4 kernel does this almost automatically, by having a 
much smaller minimum timeslice for idle-priority processes than for 
processes running above idle priority. (The timeslice is reduced 
again for processes running at unusually high priority, so that they 
can't hog the whole system quite so easily.) I believe the timeslice 
parameters are tunable (without having to recompile the kernel), but 
I have no personal experience of actually doing this.

Another way to "fix" the problem is to have the compute-
intensive process voluntarily relinquish its timeslice at intervals 
which are much shorter than the minimum timeslice (which is 
typically of the order of 200 ms). This reduces the efficiency of the 
compute-intensive task to some extent but does make it coexist 
better. I suppose it would be possible to build this into Prime95; if 
this is done I would like options to be "multimedia friendly" or 
"optimally efficient" - probably the best way to implement would be 
to have the code contain the relevant system calls but to NOOP 
over them if efficiency is demanded.
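
As a toy illustration of the idea (not a patch - the work function 
and the yield interval are arbitrary stand-ins, and as noted below 
the right interval would depend heavily on CPU speed):

    import time

    YIELD_EVERY = 100            # arbitrary; would need tuning per CPU / FFT length

    def one_chunk_of_work():
        s = 0.0
        for k in range(50_000):  # stand-in for a slice of FFT work
            s += k * 1e-6
        return s

    def friendly_loop(chunks):
        for i in range(chunks):
            one_chunk_of_work()
            if i % YIELD_EVERY == 0:
                time.sleep(0)    # zero-length sleep: give up the rest of the timeslice

    friendly_loop(1_000)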

The remaining problem with this approach is that how often you 
would want to make these system calls would depend very heavily 
on the processor speed. Relinquishing the timeslice very frequently 
would enable even slow systems to run multimedia pretty 
seamlessly, but at a heavy cost on all systems. Placing the 
system calls in a position where they would be effective but not too 
costly, across a large range of processor speeds and a large range 
of FFT run lengths, would not be a trivial task.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Intel IA64 & AMD Sledgehammer

2001-10-30 Thread bjb

On 30 Oct 2001, at 10:37, Lars Lindley wrote:

> Are there plans for making mprime and prime95 capable of using 64bit
> processors like IA64 and AMD Sledgehammer?
> 
> I know that these processors are able to run 32 bit code but will
> there be a 64bit optimized version of mprime/prime95?

IA64 may run IA32 code but I seem to remember this is far from 
optimal. Hammer (which AFAIK exists only as vapourware) is 
supposed to run IA32 code efficiently. 

What would we expect to gain from extended addressing, or 64 bit 
integer operations? Probably not much for LL testing ... that's using 
double-precision floating-point, which is already native 64+ bit 
hardware in IA32. Trial factoring would almost certainly speed up a 
fair bit with 64 bit integer hardware operations.

There may be mileage in further optimization for Itanium (IA64) so 
as to take better advantage of the parallel execution paths than the 
current "scalar" code. However Itanium processors are still a 
frightening price; I can't see them becoming "consumer items" in 
the near future.

Also, Glucas already works very well on Itanium systems!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: number of processors participating

2001-10-29 Thread bjb

On 29 Oct 2001, at 19:30, Henk Stokhorst wrote:

> [... snip ...] However it has already been
> implemented in the latest version (v21). That version contains more
> improvements so I wondered if it wouldn't be a good idea to inform
> users through the occasional newsletter. Particulary because it gives
> a 10% improvement for Pentium I, II and III users and it skips P-1 if
> it has been done.

Umm - I haven't noticed any significant improvement in v21 speed 
on Pentium or Pentium II - the big changes are implementing 
prefetch, which is only applicable to AMD Athlon family, PIII and 
faster Celeron processors, and exploiting the SSE2 instruction set 
on Pentium 4 only.

Apart from (sometimes) skipping P-1, the changes between v20 
and v21 are pretty well cosmetic if you're using a 486 / Cyrix / AMD 
K6 / Intel Pentium (Classic or MMX) / Pentium Pro / Pentium II / 
Celeron < 533 MHz CPU. There _are_ some other changes - 
including a bit of fine tuning of the exponent / FFT run length size 
breaks - but nothing which really makes an upgrade look 
inescapable. 

In fact, these older systems are more likely to have a memory 
constraint than newer systems with faster processors; due to the 
inclusion of the Pentium 4 specific SSE2 code, the v21 binary has 
a significantly bigger memory footprint, so systems which won't 
benefit from the prefetch code & which are feeling memory 
pressure might be better _not_ upgrading.

The speed improvement from v20 to v21 on a PIII or Athlon system 
should be somewhere close to 25%, rather than 10%. On these 
systems an upgrade seems highly desirable.

If you're still running v20 (or earlier) on a Pentium 4, then quite 
frankly you really SHOULD upgrade. NOW. The execution speed 
will approximately treble.

As for "reduced participation" - whilst other reasons certainly do 
have an effect, I've previously mentioned two other possible reasons:

(1) adverse publicity stemming from the prosecution of a sysadmin 
for running RC5 clients on "his" systems without the agreement of 
the management at the college which employed him;

(2) steep rises in electricity prices and unreliability of supply in 
some places e.g. USA West Coast deterring people from running 
extended jobs.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: number of processors participating

2001-10-29 Thread bjb

On 28 Oct 2001, at 0:28, Terry S. Arnold wrote:

> Another consideration is that many system/network administrators have
> gotten ludicrous about what they will allow on their networks. They
> think that Prime95 just might let in a virus or even worse spill
> "company secrets". By and large they are totally ignorant of the real
> issues involved with securing networks. All most of them know about
> the implications of the various protocols in the TCP/IP suite was what
> it took get their MCSE if they have even that much training.

As a "system/network administrator" specialising in security 
matters I just _have_ to answer this one.

1) It's perfectly true that there are a large proportion of sites with 
incompetent sysadmins - especially from the point of view of 
networking. Especially in small companies, where the sysadmin 
function tends to be bolted onto another job as a low-priority "extra" 
task.

2) AFAIK none of the MCSE courses cover security in any depth at 
all. In fact the approach seems to be the _reverse_ i.e. teach 
people how to set up & administer systems in an unduly risky way, 
without even bothering to mention basic security tools or 
methodology because they're not essential to _Microsoft_ 
networking in a laboratory/classroom environment.

Based on recent experiences with Code Red & Nimda, 95% of the 
problems on our site came from the 1% of the systems located in 
business incubator centres attached to the University but 
"administered" by the businesses themselves. Basically it's rare for 
these people even to be aware of most of the services running on 
their systems (anything that comes preloaded on the system gets 
run, irrespective of whether it's absolutely necessary or absolutely 
unnecessary); as for applying critical updates, they seem to be 
trained to think one of (a) it's much too hard, (b) it will break the 
functionality, (c) they simply don't understand why they need to 
bother with such things. 

_Despite_ how easy it is to run Windows Update.

The only way I've been able to get these people to apply updates is 
to get sanctioned to scan their systems for vulnerability to Code 
Red & Nimda, & block _all_ access to vulnerable systems until 
they get patched (or take down the IIS service). To my knowledge, 
many ISPs had to take similar action.

At least _some_ universities & Fortune 500 companies have 
competent sysadmins, but there are a whole lot of "mom & pop" 
businesses out there; a high percentage of them would be an 
absolute pushover to anyone "wearing a black hat", even if IIS 
installations have now mostly been patched to a reasonable level.

As for distributed computing projects being a security risk - 
basically I think in many cases _management_ may be misusing 
"security" as a screen for filtering out anything _they_ don't 
understand. In my experience few of these people are aware of the 
scale of network _abuse_ (note, not _necessarily_ a threat to 
security) that goes on by way of end users installing peer-to-peer 
"file sharing" software on their workstations; probably 99% of the 
files "shared" over these P2P networks are in effect illegal 
distributions of copyrighted material. They're certainly _not_ aware 
that Windows systems with e.g. Kazaa clients are quite capable of 
"sharing" not just the offending copyrighted material but also 
everything else on the system - or attached to it through open LAN 
shares. Yes, including "company secrets". Quite apart from that, 
the volume of traffic involved with these P2P networks can be huge, 
certainly enough to seriously impact network links.

(Before anyone takes me to task on the above paragraph, quite 
frankly I am totally opposed to the DMCA, the proposed SSSCA 
and all similar legislation. But I am also opposed to unauthorized 
distribution of copyrighted material. IMO the force of the law should 
be applied against those individuals making the copies, not against 
those who write software, or against possession of hardware, that 
might possibly be used to make illegal copies.)

Under these circumstances I find it hard to understand how anyone 
can think that compute-intensive, network-friendly applications can 
be a problem.

As for "letting in a virus" - if people really thought that, they just 
wouldn't use Microsoft products. How much of a threat was Code 
Red or Nimda infection on a system which wasn't running Microsoft 
Exchange, Microsoft Internet Information Server or (in the case of 
Nimda) Microsoft Internet Explorer? Well, _other_ infected systems 
might load up your network to some extent, but _your_ system 
certainly wasn't going to get infected!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Mprime crash

2001-10-27 Thread bjb

On 26 Oct 2001, at 11:36, Nathan Russell wrote:

> >Presumably a badly written Linux driver
> >can cause the same problems as a badly written Windows driver.

Yes. There is however a subtle difference. Linux drivers come with 
source code, so if there is a bug, any competent programmer can 
fix it. Also there is almost invariably a mechanism for feeding 
patches back into the source tree. On the other hand, Windows 
drivers come wreathed in obscurity, usually clouded even further by 
licenses which prevent disassembly and therefore effectively make 
it illegal for anyone other than the original author to post a fixed 
version.
> 
> IIRC, Linux drivers that are kernel modules do run in real mode;
> someone on the list please correct me if I'm wrong.  

No, they don't run in real mode - which gives access to only the 
bottom megabyte of memory, in 16-bit mode, i.e. you're using the 
system as a fast 8086. Win 9x/ME drivers _do_ run in real mode!

Some parts of some drivers (in proper 32-bit operating systems like 
linux, also Windows NT/2K/XP) _need_ to run in ring 0, which does 
allow unrestricted access to all memory, thus allowing the driver to 
trample on memory belonging to other processes. Part of the art of 
writing successful drivers is to do as little as possible in ring 0.
> 
> This raises the question of why some folks have mentioned that drivers
> under NT/2K don't cause these sorts of problems

They're much less likely to. For a start, it's easier to write ring 0 
code than real mode code on a 32-bit system; e.g. there's less 
room to confuse 16-bit and 32-bit addressing modes. Secondly, as 
I pointed out above, you can get out of ring 0 to do most of the 
work, so most of the time you have full protection from the OS to 
prevent you from clobbering someone else's memory.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Lucas Wiman Mersenne Prime FAQ

2001-10-25 Thread bjb

On 25 Oct 2001, at 10:31, Ken Kriesel wrote:

> M33219278 is the smallest Mersenne Number with at least 10^7 digits,
> but M33219281 is the smallest Mersenne Prime with at least 10^7
> digits.

Um, I seem to remember testing that number & confirming Rick 
Pali's discovery that it is _not_ prime.

Perhaps it would be fair to say that M33219281 _was_ the smallest 
_candidate_ 10 million (decimal) digit Mersenne prime (pending LL 
testing). It isn't even that now.

33219281 was, is and always will be the smallest _prime_ value of 
n such that 2^n-1 has at least 10 million (decimal) digits; we know 
that a Mersenne prime _must_ have a prime exponent, therefore 
there cannot possibly be a 10 million (decimal) digit Mersenne 
prime less than M33219281.
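
For anyone who wants to check the arithmetic, here is a short sketch 
(it uses the standard digits(2^n - 1) = floor(n*log10(2)) + 1 formula 
and leans on sympy for the primality test):

    from math import ceil, log10
    from sympy import isprime

    LOG10_2 = log10(2)

    def digits(n):
        # decimal digits of 2^n - 1 (same as for 2^n, for every n >= 1)
        return int(n * LOG10_2) + 1

    n_min = ceil((10**7 - 1) / LOG10_2)   # smallest exponent giving >= 10^7 digits
    p = n_min
    while not isprime(p):                 # smallest *prime* exponent at or above it
        p += 1

    print(n_min, p)                       # 33219278 33219281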

There are (AFAIK) two status changes left for M33219281: one day 
(maybe reasonably soon) someone will find a proper factor of this 
number, and one day (probably decades, centuries or even millennia 
away) it will be completely factored.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Glucas v 2.8c released.

2001-10-21 Thread bjb

On 20 Oct 2001, at 9:44, Mikus Grinbergs wrote:

> Interesting to compare the performance numbers given for the Itanium
> running Glucas v2.8c against my Thunderbird running mprime v21.4 :
> 
>  -  At the smallest FFT length, the Itanium is WAY faster.

Probably a "large cache" effect.
> 
>   this performance difference decreases until
> 
>  -  At FFTs 640K-2048K, the Itanium is a little bit faster

Yes, by 640K FFT run length you need > 2MB cache to run with no 
significant access to "real" memory.
> 
>   then the performance difference increases until
> 
>  -  At the largest FFT length, the Itanium is noticeably faster
> 
> 
> Is it memory-bandwidth that lets the Itanium pull ahead at the
> large FFT lengths ?
> 
I would suspect that the main cause is that the Thunderbird has a 
fairly limited TLB - as the amount of memory involved in the 
workspace increases, the proportion of recursive TLB lookups will 
increase faster than it does with Itanium and so the memory 
access will effectively slow down.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: AthlonXP

2001-10-15 Thread bjb

On 14 Oct 2001, at 13:40, Francois Gouget wrote:

>Nope. It is still 0.18 microns (see Anandtech). If you have a
>source
> claiming otherwise I would like to see it. They plan to switch to 0.13
> early next year. Then, they should be able to increase the frequency
> big time.

Oops, I got that wrong. Sorry. Comes of trusting a failing memory :(
> 
> 
> > which means keeping it cool should be a bit easier.
> 
>They say it runs 20% cooler. It would probably run much cooler if
> they had switched to 0.13 microns. The reason they provide for the
> lower power consumption, is small architectural optimizations. But
> what I find most interesting is its low power consumption when idle.
> They introduced it for the laptop market but AFAIK it is also present
> in the regular desktop processor.

20% cooler = 60K? There should be frost on the heatsink ...

Do you mean 20% less power consumption? That seems 
reasonable. 

Denser circuits allow lower voltages which does result in a lot less 
power consumption per element. But there's less area for the 
waste heat to get out, so the cooling problem doesn't necessarily 
go away...

>This poses a kind of dilemma since the energy used to run prime-net
> is no longer energy that would have been wasted otherwise. So you have
> to make a choice between preserving the environment and running
> prime-net (to some extent).

This is the case with most if not all of the mainstream PC CPUs 
made in the last couple of years. If you don't use the floating point 
unit the FPU turns itself off, there is a large drop in current draw 
and the CPU cools down. Alternatively, with many "micro" desktop 
& laptops, you need to turn on an extra fan if you run programs 
which exercise the FPU (as Prime95/mprime undoubtedly does).

The CPU current consumption of a 1.2GHz T'bird will drop by 
~30W when the FPU is inactive. However, given that the power 
drawn by the system as a whole is likely to be >150W (_much_ 
more if you have a large CRT monitor), the power saving by the 
system as a whole is not likely to be hugely significant.
> 
> [...]
> > Note particularly that e.g. 256 MB can be made up of one bank of 256
> > MBit, two banks of 128 MBit or four banks of 64 MBit RAM chips;
> > expect a performance difference of 5% - 7% between these
> > configurations even if the chip access speeds & timings are
> > identical. More banks are faster.
> 
>Hmmm, you need to distinguish motherboards that merely have
>multiple
> memory slots, from motherboards that have more than one data path to
> the memory. The only motherboards that I know of that can do the
> latter are some RDRAM based motherboards and the Nvidia nForce.
>In all other cases (the more common one?) putting multiple DIMMs
> should not affect performance one way or another.

No, this is the construction of individual DIMMs, not occupancy of 
DIMM slots, though populating two DIMM sockets with _identical_ 
single-bank DIMMs can get you the same performance boost.

VIA Apollo Pro chipsets can definitely take advantage of multibank 
DIMMs; the BIOS provided with some mobos (particularly Abit) 
allows you to tune the chip banking manually. I found on my Abit 
KT7A, with two identical double-bank SDRAM DIMMs installed, 
setting quadruple banking improved the speed of mprime by 7% 
compared with the default setting.
> 
>Yes, AMD better implement SSE2 in their processors soon.

Why? Might there not be a better way?

The point is that a PII-350 has more than enough power to do what 
most business users need, whilst tackling the problem of updating 
complex displays very rapidly (for games) is better handled in  
dedicated hardware in the graphics adapter.

What I would like to see in a CPU is a means where you could 
upload your own microcode, enabling design of specific instruction 
sets to handle particular problems very efficiently. For numerical 
work, the capability to handle _very_ long integers directly would 
be extremely useful. 
Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: AthlonXP

2001-10-15 Thread bjb

On 14 Oct 2001, at 19:57, George Woltman wrote:

> The P4 has two advantages.  One is memory bandwidth.  The other
> is SSE2.

If memory bandwidth was a primary reason then the P4 would run 
the standard "P6" (Pentium Pro) code efficiently. If you remember 
the postings when the P4 was originally released, this was 
definitely not the case.
> 
> When AMD finally implements SSE2 it should be a screamer.  The lower
> latencies could be a big plus.

Assuming, of course, that the complications involved in 
implementing SSE2 don't clobber the performance (in terms of 
increased latency). Which is what seemed to happen with the P4.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: AthlonXP

2001-10-14 Thread bjb

On 14 Oct 2001, at 7:30, Jean-Yves Canart wrote:

> According to latest benchmarks (http://www.mersenne.org/bench.htm),
> AthlonXP seems to be slower than the Thunderbird.  Does anybody have a
> technical explanation ?

The Athlon XP lies about its speed. Remember the old Cyrix trick? 
Well AMD have gone for the same - pick a benchmark that suits 
you, then claim your chip is "1800+" if it runs _that_ benchmark a 
wee bit faster than the opposition's chip running at 1800 MHz.

I think an Athlon XP 1800+ is in fact running at 1533 MHz.

IMHO this is as bad a marketing scam as attaching the XP label, 
which is obviously designed to hoodwink consumers into thinking 
that an upgrade is necessary if they want to run Windows XP - 
thus enabling AMD to benefit from Microsoft's imminent marketing 
blitz.

The unfortunate thing about this is that I'm afraid that most 
consumers will swallow at least part of the lies :(

Having said all that, AFAIK the basic core of the Athlon XP is much 
the same as the Thunderbird: it is fabbed at 0.13 microns instead 
of 0.18 microns, which means keeping it cool should be a bit 
easier. This should not affect the basic chip efficiency one way or 
the other. Running Prime95, I would expect an Athlon XP running 
at 1.4 GHz to be exactly the same speed as a Thunderbird 1.4GHz 
(266 MHz FSB) in the same board. N.B. you may need a BIOS 
upgrade to support the Athlon XP due to the changed core voltage.

There is one small architecture change between Thunderbird and 
Athlon XP. Athlon XP now supports Intel SSE operations - but still 
not SSE2 as used in the Pentium 4. This probably doesn't help 
Prime95 since prefetch, which _is_ relevant, was already supported 
by all variants of Athlon CPUs. But it might be worth telling a 
system running Prime95 on an Athlon XP that it's actually running 
on a Pentium III just in case that's any faster than the native Athlon 
code.

With the large multipliers used by current processors, the 
efficiency of the chipset and the memory have quite a large effect 
on the performance of a CPU. Moving a CPU between systems can 
therefore result in changes in performance as measured by 
benchmarks. You really need to check that the chipsets and the 
memory configurations are the same before you can compare 
benchmark timings in any meaningful way. Note particularly that 
e.g. 256 MB can be made up of one bank of 256 MBit, two banks 
of 128 MBit or four banks of 64 MBit RAM chips; expect a 
performance difference of 5% - 7% between these configurations 
even if the chip access speeds & timings are identical. More banks 
are faster.
> 
> Do we have to consider now Intel/P4 as the best platform (at least for
> prime95)?

For LL testing using mprime/Prime95:

That seems to have been the case for some months now; it's 
SSE2 which makes the difference. A P4 running Prime95 is well 
over twice as fast as a T'bird running at the same clock speed.

I would expect the same to be the case for any other application 
which has been specially coded to take advantage of SSE2. There 
probably are examples, especially in the area of multimedia, but I 
can't think of any offhand.

For general work (including trial factoring):

Any variant of Athlon will be significantly faster than P4 running at 
the same clock speed. At clock speeds up to 1.2 GHz, PIII may be 
a little faster than Athlon. Especially the new "Tualatin" PIII 
processor with 512KB cache.

However (given suitable software) other processors may be _much_ 
more efficient in terms of work done per clock cycle than any of the 
mainstream PC processors. PowerPC, Alpha and especially 
Itanium spring to mind. AFAIK none of these run above 1GHz yet, 
though they can be found as the basis of some impressively 
powerful workstations. 

AMD do have a point that raw CPU speed is no longer a good 
indicator of overall system performance, let alone the performance 
on a particular benchmark task, which can be further influenced by 
extraneous factors like the efficiency of the graphics card.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Minor milestone

2001-10-14 Thread bjb

Hi,

In case any one hasn't noticed:

Everyone in the PrimeNet Top 100 Producers list has now 
contributed more than 100 P90 CPU years.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: FW: Mersenne: Re: Factoring Failure?

2001-10-02 Thread bjb

On 1 Oct 2001, at 22:23, Jean-Yves Canart wrote:

> I have browsed some logs I archived long time ago and I have found
> this:
> 
> In may 1998, one user, "tomfakes", cleared around 80 exponents with
> factor found = "1" It was in the range 7013000-7055000.

Well, (s)he's not lying - 0 = n (mod 1) is a property of integers ;-)

In any event:

(a) this is the _opposite_ of the reported problem - what seems to 
have happened is that "no factor found" was being reported, 
sometimes erroneously;

(b) this won't get through now that PrimeNet validates submitted 
factors; the code I wrote for this purpose rejects as garbage any 
single-digit factor, after stripping off any leading zeroes as well as 
white space.

(Obviously a Mersenne number with a prime exponent p > 5 cannot 
have any factors less than 10, and we know pretty much all there 
is to know about exponents up to and including 5, so excluding 
these is not a practical problem).
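
For what it's worth, the core of such a check is tiny; here's a rough 
sketch along the lines described above (my own illustration, not the 
actual server code):

    def plausible_factor(p, claimed):
        """Sanity-check a factor submitted for M(p) = 2^p - 1 (p an odd prime)."""
        f = int(claimed.strip().lstrip("0") or "0")  # strip white space & leading zeroes
        if f < 10:                                   # reject single-digit garbage
            return False
        if f % (2 * p) != 1:                         # genuine factors have the form 2kp + 1
            return False
        return pow(2, p, f) == 1                     # and must actually divide 2^p - 1

    print(plausible_factor(11, "  23 "))             # True:  2^11 - 1 = 23 * 89
    print(plausible_factor(11, "1"))                 # False: rejected as garbage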

> > At 04:07 PM 9/30/2001 -0700, Daniel Swanson wrote:
> > >I went through the Cleared Exponents
> > >report looking for other examples of factors found during
> > double-checks that
> > >should have been found during the initial factorization.
> > >  5977297  53  DF    6726544627832489
> > >  6019603  57  DF  137024179940485697
> > >  7019297  57  DF  160100125459121849
> > >  7020641  58  DF  226230108157229263
> > >  7025987  56  DF   74052063365823791
> > >  7027303  55  DF   31090234297428433
> > >10159613  56  DF   68279769831982367
> > >Were numbers in this range all originally factored by the same user
> > >or computer?
> >
> > My logfiles from that long ago have been zipped and stored on CDROM.
> > It is possible that 7,010,000 - 7,030,000 were all factored by one
> > person. It was not uncommon for me to hand out large blocks for
> > factoring to users without Internet connections.  While I no longer
> > do this, there are a handful of users pre-factoring the 20,000,000 -
> > 80,000,000 area.  I hope their machines are reliable!!  They
> > probably are as they are finding the expected number of factors.

The primes from that block of 20,000 numbers represent quite a 
bit of work and map poorly onto the "missed" factors reported.

A few mistakes are inevitable but, since testing a factor takes of 
the order of a microsecond on current systems, hardware glitches 
shouldn't be much of a risk. (? Unless they get into the code 
stream used to generate potential factors?)  Reports of two 
"missed" factors of exponents within spitting distance of 6,000,000 
and no less than four just over 7,000,000 looks high for random 
glitches to be responsible, even on really ropy hardware. 
Remember that P-1 (which found the factors missed by trial 
factoring) can only find a small proportion of the "small" factors, 
especially when it's being run with "double checking" limits.
> >
> > Anyway, it doesn't appear to be a program bug as you were able to
> > find the factor with trial factoring.  I'm guessing either bad
> > hardware or an older prime95 version had a bug. 

If it _was_ Prime95. There are other factoring programs out there; 
maybe there was a higher incidence of use about 3.5 years ago 
when these exponents would have been the subject of factoring 
assignments.

> > Either way, GIMPS
> > has never considered missing a factor as a big deal.  It only means
> > some wasted effort running a LL test that could have been avoided.

True enough - though I'm concerned that the "no factors below 2^N" 
database may be seriously flawed, from the point of view of GIMPS 
it would seem to be a waste of time to go round redoing trial 
factoring just to fix this problem.

However if it could be established that all the "missed" factors 
reported were the work of one user, perhaps it would be worth fixing 
the database to force rerunning of trial factoring for those factoring 
assignments run by that user when the exponents are reassigned 
for double checking (or LL testing).


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: FW: Mersenne: Start at Bootup

2001-09-25 Thread bjb

Just my 2c worth:

Wouldn't it make more sense for XP users to run NTPrime now that 
George has released v21.4 in that format?

Surely you would then have one copy running irrespective of how 
many users happened to be "logged in" - even zero :) ?

I presume the Win XP problem is that "start at bootup" is in effect 
putting a shortcut into the user's own, or "all users" StartUp folder, 
so that it gets run for each user.

Regards
Brian Beesley

(who does not possess a copy of Win NT and has absolutely no 
intention whatsoever of "upgrading")

On 25 Sep 2001, at 13:31, Donald Rober wrote:
> 
> I would like to thank Lars for this important piece to my problem. I
> also set "Start at Bootup" on Windows 2000 Server and every user
> starts it when they log on.
> 
> Don
> -Original Message-
> I noticed a bad thing in WinXP regarding prime95.
> When you run prime95 as a user and mark "start at bootup" you get a
> quite peculiar situation. When I login prime95 starts as it should.
> Then if I log in as another user at the same time prime95 starts again
> in the new user, thus having two instances working on the same number
> and using the same files.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Service downtime

2001-09-20 Thread bjb

Hi,

Due to work on the electrical power supply to the building, the 
server lettuce.edsc.ulst.ac.uk which provides a mirror for the 
Mersenne database & software files will be offline from 15:00 GMT 
today Friday September 21st until 08:00 GMT Monday September 
24th.

I apologize for any inconvenience caused.


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Torture test passes but normal use fails!

2001-09-12 Thread bjb

On 12 Sep 2001, at 15:44, David Jaggard wrote:

> I have an overclocked AMD Duron machine using windows 98SE running at
> just under 1GHz and have been running prime95 v20 successfully for
> many months. I have occasionally had hardware errors when I have tried
> to overclock further but at the current speed prime95 v20 is happy.
> 
> Yesterday (11-09-2001) I upgraded prime95 from v20 to v21. My tests
> resumed only to quickly fail with a SUMOUT/possible hardware failure.
> I made some small changes to speed and core voltage but could not
> correct the errors. I decided to run the torture test and I left it
> running all night. This morning I viewed results.txt and all tests had
> passed yet the normal operation of the program still fails. 

SUMOUT is often (not invariably, but often) caused by dodgy 
software. Sound card drivers for Windows 9x/ME are the most 
likely culprits. If this is the cause then the Good News is that your 
results are most likely OK. Note that a sound card driver problem 
_could_ be sensitive to Prime95 version as the driver could be 
bombing memory contents which are effectively unused in v20 but 
contain active data or code in v21. This might also explain why the 
torture test fails to reveal the problem - obviously there are _some_ 
minor differences between the code executed during torture testing 
and that used for production LL/DC testing.

> Surely if
> my system was unstable the torture test should fail! Is it possible
> that in the new release the normal primality tests are more stressful
> than the torture test? Or is there a bug?

Umm. Not sure about that. The torture test does not (cannot!) test 
every combination of operand values which can occur in "real" data.

Anyway if it's a driver problem then maybe it runs OK overnight 
because the device with the duff driver isn't being used.

If the sumouts occur at the same iteration number then you 
_could_ have found a bug caused by either a program error or a 
design flaw in the processor which occurs with very specific data. If 
they occur at random then it sounds much more like interference 
from a bad device driver or some other hardware problem - maybe 
overheating.

Did you try re-running the specific selftest for the FFT runlength 
used by the exponent you're having a problem with? (The FFT 
runlength is the nearest "round" number to the size of the 
Pnnn savefile divided by 4096. To force a re-run, stop Prime95, 
edit local.ini removing the appropriate SelfTest=1 line & 
restart Prime95.)
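
Something like this little Python sketch shows the arithmetic I mean - 
the list of "round" runlengths is only illustrative (not the full table 
Prime95 uses), and the file name in the example is made up:

import os

# Illustrative list of "round" FFT runlengths in K - NOT the complete
# table Prime95 supports; it's only here to show the arithmetic.
ROUND_RUNLENGTHS_K = [192, 224, 256, 320, 384, 448, 512, 640, 768, 896, 1024]

def guess_runlength(savefile):
    # Apply the rule above: savefile size divided by 4096, then take
    # the nearest "round" runlength from the table.
    approx = os.path.getsize(savefile) / 4096.0
    return min(ROUND_RUNLENGTHS_K, key=lambda k: abs(k - approx))

# e.g. (hypothetical file name):
# print(guess_runlength("p6972593"))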

If that works but the actual assignment keeps failing, it _might_ be 
a problem caused by specific data values. One way to _prove_ this 
would be to take the savefile to a different system & run it on that - 
if it's a data value problem that will fail too. Next try on a system 
with an Intel processor - if that fails it's a program problem, 
otherwise the CPU is doing something strange. I have a variety of 
reasonably reliable systems with different CPU types available if 
that's of any help.

If you suspect that it might be a driver problem, try upgrading the 
sound card driver (it's usually the sound card, or integrated 
Winmodem if you have one of those), next unloading the sound 
card driver, next physically removing the sound card. If it's a 
Winmodem problem then the system should be reliable so long as 
the modem lead is unplugged. In the event you find a driver problem 
is to blame, your options are (a) put up with it, (b) replace the 
driver, (c) replace the hardware for a unit with a "known safe" driver 
or (d) change your OS to Win NT/2000/XP, or linux, which will not 
suffer this sort of driver problem since they use properly partitioned 
memory.

To rule out overheating on your own system, I suggest you try:

(a) making sure that the cooling fan(s) in the case and/or power 
supply are working - on AMD Athlon/Duron systems you don't 
actually need to check the processor fan; the processor will die of 
extreme overheating in seconds if that fails! 

If your mainboard has a chipset cooling fan, check that is running, 
too.

(b) blowing any accumulated crud (compacted dust) out of the 
processor heatsink and away from the large chipsets on the 
mainboard and the memory;

(c) reducing the processor to its rated speed. If this clears the 
problem then I'm afraid you're maybe going to have to live with it.

BTW processors are designed for 4 to 7 years reliable operation at 
their rated speed. They will age more quickly when overclocked, 
mostly because of the higher temperatures generated by faster 
switching, and especially if the core voltage is increased. As the 
processor ages it will become increasingly less able to run reliably 
at clock speeds in excess of its rating.

Critical components on the mainboard, in particular the voltage 
regulator, age too. Some mainboards are known to have problems 
with capacitors used by the CPU voltage reg which are located 
close to the CPU socket; consequently they tend to fail 
prematurely because of the heat.

Re: Mersenne: Entropy of mersenne calculations.

2001-09-10 Thread bjb

On 10 Sep 2001, at 14:47, Gareth Randall wrote:
> 
> What is the entropic efficiency of our calculations?

Um. We're not even gaining information in the strict (information 
theory) sense, since all Mersenne primes existed long before 
anyone formulated the concept - yet we are certainly raising the 
entropy of the universe as a whole by our activities. In that sense, 
the entropic efficiency must be zero, though it's hard to see how 
any other computing activity could be any different.
> 
> Is it possible to say how much "work" must be performed in order to
> verify whether a number is prime? If it is, then how efficient are our
> methods in comparison? For instance, can it be shown that there is
> theoretically a better method, but one that no-one has discovered?

So far as Mersenne numbers are concerned (and some other types 
of numbers, e.g. Fermat numbers, numbers for which Proth's 
Theorem applies) there is AFAIK no theoretical work which would 
show that the tests we're using are anything other than optimal.

Theoretically there should be ways to improve primality tests for 
general numbers so that they are about as efficient as the LL test 
is for a number of the same approximate size. At present the best 
algorithms for general numbers (e.g. ECPP) are a great deal less 
efficient than this; the largest number proved prime using ECPP is 
about the hundredth root of 2^6972593-1 in magnitude.

Practical implementation of tests is a different matter. At present 
the best multiplication algorithms are O(n log n) - which would in 
itself be a huge surprise to anyone working in this field 50 years 
ago - yet it appears that there may be scope to improve on this. 
With suitable hardware (VVLW architectures) O(n) is possible.

Also, the possible advent of quantum computing may have a 
substantial effect on the choice of algorithm.
Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: RE: L2 Cache

2001-09-10 Thread bjb

On 10 Sep 2001, at 0:44, Stephan T. Lavavej wrote:

> So, will the Pentium 4 Northwood core (with 512KB L2 cache as
> compared to the current Willamette's 256KB L2 cache) perform
> even better on Prime95? Will there be a setting to tell Prime95
> that a Willamette or Northwood core is specifically being used
> (as opposed to just "Pentium 4")?

Well - there was no need for any change to the code when the PIII 
changed from 512KB L2 cache running at half speed (Katmai) to 
256KB L2 cache running at full speed (Coppermine), or when a 
similar change occurred in the Athlon line with the introduction of 
Thunderbird.

Neither is there any need for the code to differentiate between 
Celeron 2 and PIII chips - the essential difference here is that the 
C2 has only 128KB of L2 cache - or between Thunderbird and 
Duron in the Athlon line.

So I don't see any need for there to be anything to differentiate 
between different cache sizes on Pentium 4.

A larger L2 cache will do no harm, though with data set sizes 
much bigger than the L2 cache, it probably won't help much either. 
(What's the difference in Prime95 benchmarks between a Celeron 
800 and a PIII 800 (100 MHz FSB) in the same system?) 

Larger caches will benefit multiprocessor systems more than 
uniprocessor systems, _provided_ the work is allocated to 
processors using an efficient algorithm so that cache contents are 
not wasted by processing the data on another processor.

If there is a change in the instruction set that can be exploited, or a 
significant change in relative timings for particular instructions 
between one variant of a processor and another, then maybe it 
might be worth forking the code (again), but IMHO it's better to 
keep forks to the minimum for two basic reasons:

(a) reducing development & maintenance load;

(b) avoiding confusing users, as far as possible.

BTW the new "Tualatin" PIIIs built using 0.13 micron technology 
have 512KB L2 cache. I'm not sure whether they can be used in 
existing FC-PGA mobos - you're likely going to need at least a 
BIOS upgrade; there may be other electrical considerations e.g. 
provision of the lower voltage required by the Tualatin core may not 
be possible without changes to the voltage regulator hardware.
For the sake of anyone wishing to upgrade I hope the compatibility 
issues are resolved soon - Intel is already saying that the desktop 
PIII will be withdrawn from sale in early December!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: AW: Mersenne: Prime Net Server

2001-09-09 Thread bjb

On 10 Sep 2001, at 7:34, [EMAIL PROTECTED] wrote:

> I beg your pardon: you didn't really expect to get informed about a
> planned outage BEFORE it happens or a crash AFTER it happened, did
> you?

Well, as a service operator myself (though without any 
responsibility to PrimeNet), I do try to warn users when a service is 
at unusual risk.

As for crashes, or unplanned outages for other reasons (like the 
power failing), I would try to post a notice explaining what 
happened as soon as I know myself. In the case of a system which 
fails on a Friday evening I do not think it is unreasonable that there 
is no information until Monday. Presumably the system support 
staff are not paid on a 24x7 basis.

BTW some people claim that the PrimeNet server is working, just 
very, very slowly. In that case it's more than likely that the server is 
suffering a DoS attack :(
> 
> Such an outage didn't occur for the first time in my (nearly) three
> years supporting GIMPS and others will follow.

Is that a statement, or a threat? 

BTW I do remember a few service breaks, though this one is the 
worst; in fact the service has been pretty reliable recently, at least 
until last Friday night.

> May be that´s one
> reason why GIMPS lost about 8.000 to 9.000 machines during the last
> six months.

Other reasons may apply:

(a) new projects which may be attractive to many people;

(b) the increasing size of our work units; processor speeds really 
haven't been keeping up!

(c) problems with utility (mains) power supply in some areas e.g. 
California discouraging people from running systems 24x7;

(d) adverse publicity involving distributed computing projects e.g. 
the sysadmin who installed RC5 clients on "his" systems without 
consent of his employer.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: All Cunningham Wanted Numbers Factored

2001-09-08 Thread bjb

Hi,

Coincidentally to Peter's message, and as an encouragement to 
anyone else working in this field, it just so happens that one of my 
systems discovered a previously unknown (AFAIK) factor of one of 
the Cunningham Project numbers yesterday evening:

[Fri Sep 07 21:33:06 2001]
ECM found a factor in curve #199, stage #2
Sigma=6849154397631118, B1=300, B2=3.
UID: beejaybee/Simon2, P1136 has a factor: 
9168689293924594269435012699390053650369

Factors this size aren't too easy to find, but (despite the subject 
line in Peter's message) there are still a large number of 
candidates which need work done on them.

FWIW if you have a dual CPU system (mine is running 2 * PIII-850 
on a Supermicro P6DBE m/b), running ECM on small exponents 
on one processor makes an ideal foil to running LL tests on the 
other. The ECM process uses so little memory that it runs 
practically in the L2 cache, except during Stage 2. The LL test will 
slow down a bit during ECM stage 2, but that's only about 30% of 
the time - whereas running LL tests on both processors can slow 
both processes down quite substantially due to the loading on the 
memory bus.

BTW this was found with Prime95 v21.2, so I guess that's another 
check mark on the QA table: ECM _does_ find factors!


Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 10M digit prime testing question....

2001-08-16 Thread bjb

On 16 Aug 2001, at 23:58, Jeffrey Euclide wrote:

> Just curious of something here, since I've upgraded to a faster
> machine, 1.333GHz(1.47 oc'd), 512M DDR on a win2k platform if it would
> be prudent to take on factoring 10M digit primes, thanks ahead

Prudent? In what respect?

You're much more likely to find a prime by sticking to "ordinary" LL 
tests - but obviously much less likely to win $100K.

Even with that power, it's still going to take ~3 months per 
exponent to test 10M digit numbers. So it may be more "fun" to 
stick to "ordinary" LL tests.

Personally I'd run double-checks for a few weeks & check that the 
results are working their way into the lucas_v database. Overclocking 
Athlons is not easy, they overheat very readily when the FPU is 
driven hard (which Prime95 does). The fact that the system boots 
clean at 1.47 GHz (presumably 11 x 133 MHz), or even passes 
Memt25 at that speed, does not mean that Prime95 will be reliable 
with that much overclocking.

You can improve the reliability of Prime95 by running with roundoff 
checking enabled (Advanced menu) but this will cost about 15% in 
run speed (for Prime95 only). If you see no errors at all over a year 
or so, you're probably OK. However bear in mind that hardware 
glitches (often due to overclocking or overheating) can still creep 
through even with roundoff checking enabled.

If you find that Prime95 _is_ reliable at that speed, please tell me 
which CPU cooler unit you're using, and whether you can live with 
the fan noise - I've found that most of the better CPU coolers are 
rather loud :(
Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: mprime vs prime95

2001-08-08 Thread bjb

On 8 Aug 2001, at 9:44, Christian Goetz wrote:

> I changed the OS and the prime program with it. But now I am shocked!
> 
> With prime95 my comp needs .315 with mprime he needs .405 !!!
> 
> I think the improvements of prime95 should be included in mprime as
> well.

There is a pre-release of mprime v21 on the server - confusingly it's 
called p95v21x.zip. There is one problem you should be aware of if 
you're running on an Athlon. mprime v21a keeps resetting the CPU 
type to Pentium II so you lose the prefetch speed up. The trick is 
to use mprime -m to reset the CPU type each time you start the 
program. This should be fixed in the release version.

I find there is very little difference in the execution speed of 
Prime95 on Win 2000 and mprime on linux with kernel v2.2. linux 
with kernel v2.4 is about 1% faster, Prime95 on Win 9x is about 
1% slower.

Incidentally it is _possible_ to run Prime95 on linux using wine over 
X, but this is not an efficient solution!

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: [First 10 million digit exponent]

1999-06-08 Thread bjb

> > And below M3600, there are 159,975 exponents (again repeating Brian)
> > with at least 10 million digits.
> 
> Fine, but are the efforts being made in that region centrally registered?

Not to my knowledge - seems pointless since few programs currently 
available can cope with exponents in the 30 millions. Prime95 v19 is 
still some way off, the early pre-pre-pre-release I have (as part of 
the QA team) won't accept exponents that big for factoring 
assignments, though it can attempt LL tests!

Of course it's very quick to generate lists of prime numbers in the 
30 millions by sieving. I stopped at 36 million because I thought 
that would yield more than enough work to start with!
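
A window sieve is all it takes - here's a rough Python sketch of the 
idea (not the program I actually used):

def primes_up_to(n):
    # Ordinary sieve of Eratosthenes up to n.
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def primes_in_range(lo, hi):
    # Segmented sieve of the window [lo, hi) using primes up to sqrt(hi).
    window = bytearray([1]) * (hi - lo)
    for p in primes_up_to(int(hi ** 0.5) + 1):
        start = max(p * p, ((lo + p - 1) // p) * p)
        window[start - lo :: p] = bytearray(len(window[start - lo :: p]))
    return [lo + i for i, flag in enumerate(window) if flag]

# e.g. primes_in_range(30000000, 36000000) lists the candidate exponents.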

Out of interest, I also wrote a quick-and-dirty program to check for 
small (< 2^32) factors of Mersenne numbers, ran that against my list 
of primes & eliminated 28,468 of the 159,975 candidates. (A 15-minute 
run on a PII-350). I need to check these & will post the files on my 
ftp server when I've done so.
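
For anyone curious, the quick-and-dirty check boils down to something 
like this - a Python sketch of the same idea (not the actual program, 
which was written for speed): any factor q of 2^p-1, with p prime, has 
the form 2kp+1 with q = 1 or 7 (mod 8), and q divides 2^p-1 exactly 
when 2^p = 1 (mod q).

def small_factor(p, limit=2**32):
    # Look for a factor of 2^p - 1 below `limit`, with p prime.
    # Candidate factors have the form q = 2*k*p + 1, q == 1 or 7 (mod 8);
    # q divides 2^p - 1 iff 2^p == 1 (mod q).  (A composite q passing the
    # test is still a genuine divisor, so there is no need to check q for
    # primality here.)
    q = 2 * p + 1
    while q < limit:
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        q += 2 * p
    return None

# e.g. small_factor(23) returns 47, the well-known factor of M23.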

Regards
Brian Beesley

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: An example of inconsiderate CPU hogging

1999-06-05 Thread bjb

On 4 Jun 99, at 23:36, Peter Doherty wrote:

> However, one thing to note is that SETI @Home doesn't seem to be as "idle
> priority" as Prime95 is.  I ran it for a while, and I felt I could notice a
> slowdown, and when it wasn't minimized, there was a VERY noticeable
> slowdown.  I think SETI @Home actually steals more than just idle cycles,
> and gets a few more.

I believe the screensaver version (for Wintel) runs at priority 4, 
which is normal for screensavers, but the "detached" version runs 
at normal application priority on all hosts. (Unless modified by 
"nice" on Unix systems)
> 
> >The message of the day on the Sun machines here at the University of
> >Michigan included the following today:
> >
> > * * * * *
> >Please do not run the Seti At Home program on the Login Servers. Although
> >it is for a good cause, the Login Servers do not have any spare CPU cycles
> >to donate. Running Seti At Home interferes with other users getting their
> >work done.
> > * * * * *

This says a lot. Presumably lots of people with accounts on the 
server are all trying to run Seti@Home. Though one instance might 
go unnoticed, a lot of instances certainly will use up memory (and 
other critical system resources e.g. process slots), as well as 
interfering with each other - and with whatever job the servers were 
installed to do.

> >This illustrates the importance of getting permission before running
> >processor-intensive programs on machines that are not entirely your own.

Agreed. Applies to mprime/NTPrime/Prime95 as well, even if they 
do have "better manners" wrt using system resources on multi-user 
hosts & servers.

> >There are only about thirty people logged in on the machine that I'm using
> >right now, and as usual almost all of them are running Pine; even with
> >this comparatively light load the slowdown was apparently bad enough to be
> >a problem. I suspect that people trying to run Seti on these machines at a
> >peak time of year would create a big performance drag, and force the
> >administrators to monitor individual users' processor usage more closely
> >to prevent such abuses. It is easy to forget about such consequences in
> >the quest for CPU time.

Probably the memory was getting used up, causing swapping, 
which really knocks the apparent performance of the system back. 
Or, if there were a lot of instances of the "offending" program, just 
having to wait in a long queue to get at the CPU can make things 
appear to be unacceptably slow. On a multi-user system, as the 
load rises, the response time rises about linearly initially, but 
steepens until eventually becoming essentially vertical. The "elbow" 
in the response time/load curve is very sharp; this is not a simple 
polynomial or exponential growth curve.
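
To put a toy model behind that (a plain single-server queue, nothing 
specific to these particular servers): normalise the service time to 1 
and the mean response time goes like 1/(1 - load), which climbs gently 
at first and then shoots up near saturation.

def response_time(load):
    # Mean response time of a single-server (M/M/1) queue with the
    # service time normalised to 1.  Purely illustrative - the point
    # is the shape of the curve, not the exact numbers.
    return 1.0 / (1.0 - load)

for load in (0.1, 0.5, 0.8, 0.9, 0.95, 0.99):
    print("load %3.0f%%: response time %6.1fx" % (100 * load, response_time(load)))
# 10% load -> 1.1x, 90% -> 10.0x, 99% -> 100.0x: the "elbow" is very sharp.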

Actually it's very easy for system admins to "find" programs like 
this that are "abusing" the system & take suitable action (in line 
with local site regulations) to stop the abuse. But it obviously 
makes sense to ask the users not to be an unnecessary 
nuisance in the first place.

Another thing; it's sometimes incredible to users just how limited 
the CPU resources on central server systems can be. e.g. at the 
University of Ulster, all the staff (academic & non-academic) e-mail 
goes through one server running Novell Netware on one Pentium 
133. That's well over 1000 users ... It's (just about) adequate but 
there really _isn't_ a lot of horsepower to spare. And it would be 
impossible to justify an upgrade or replacement on the grounds of 
"insufficient CPU power" if "unneccessary" CPU-intensive tasks 
were found to be running on the system.

But the really crucial point is that there is absolutely no point in 
running more "CPU soak" programs on a system than the system 
has processors - if nothing else, the extra task switching between 
"CPU soakers"  wastes CPU cycles!

> >I'd like to think that GIMPS members, on the whole, do not deserve
> >warnings like the one above. Let's keep it that way.

Some will not need the warning, some (especially those who may 
not appreciate the complexities inherent in multiuser systems) 
almost certainly do. It's very easy, these days, to assume the only 
resources you're using are those of the box you're physically 
connected to via your fingertips.

Incidentally Seti@Home is a moderately heavy user of network 
resources, too. (Unlike the GIMPS programs, which are very light 
on network resources.) At University of Ulster we find this isn't a 
problem (though we most definitely do see signs of a shift in usage 
pattern since Seti@Home came on line). However it might possibly 
be at some sites.


Regards
Brian Beesley

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: New Mersenne Prime found!! Not yet verified

1999-06-02 Thread bjb

On 1 Jun 99, at 21:26, George Woltman wrote:

>   The 38th Mersenne prime was *probably* discovered today.  The exponent
> is in the 6,000,000s (the prime is in the neighborhood of 2,000,000 digits).
> The discoverer is a member of the top 200 contributors according to 
> http://www.mersenne.org/top.htm

Congratulations to the "unknown" discoverer, together with George 
and Scott!
> 
>   As was agreed after the last prime was discovered, I'm announcing
> the news to this mailing list immediately.  When the prime is properly 
> verified and published, then I'll announce the exact exponent.  This process
> was agreed to as a compromise in keeping everyone informed, yet minimizing
> any chance of the news prematurely leaking to the press.
> 
Yes - this approach is entirely sensible - though I guess the 
chance of a hardware or program error triggering a "false prime 
alert" is _extremely_ small, the fact that there is a substantial sum 
of money hanging on the result means that it is not impossible that 
some "clever clogs" could have "forged" a PrimeNet message 
block. Though I don't see what good it would do since independent 
verification was always going to be required.

Please note, I'm most definitely NOT suggesting in any way that 
the claim is bogus or fraudulent; I'm just trying to point out how 
careful we need to be before formally announcing a result as 
important as this.

>   And now the bad news.  Since the EFF award requires publication
> of the result in a refereed academic journal, the publication process
> will take longer than normal.  It could be a few months.

I don't see any particular problem in this. Once we have 
independent verification, there will be no doubt that the number 
really _is_ prime, and the fact should be easy to get past a referee -
unlike, e.g., a purported proof of the Generalised Riemann 
Hypothesis, which would require _very_ skilled and careful 
refereeing!

Presumably a statement of fact in a news column in, e.g., "Nature" 
is sufficient, rather than a full-blown paper. For that, all that would 
appear to be needed is the number, the names of the discovery 
and verification teams, dates and possibly some brief details of the 
hardware & software used.

George - is DS doing the verification run again - or is this also 
"restricted information" at the moment?
Regards
Brian Beesley

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: New Prime...I wonder if the "islands" theorem will hold fast?

1999-06-02 Thread bjb

On 2 Jun 99, at 6:03, Aaron Blosser wrote:

> Once this new one is verified, it will be interesting to see if there is a
> prime either just below or just above it, to see if this elusive and highly
> unverified "island" theory sticks in this case or not.

I thought "Noll's Island conjecture" related to there being particular 
zones (exponents around k*q^n for some value k and for a value q 
close to 2) where Mersenne primes were more likely to be found. 
There might be 1, 2 or more Mersenne primes in a particular 
"island", or an expected island might be missing (e.g. the gap 
between 127 and 521), and there might be Mersenne primes in 
unexpected places, but nevertheless some clustering is "predicted".

At the moment we don't know enough. If the purported new 
Mersenne prime has an exponent close to 6 million, it fits the 
conjecture rather well; if it's closer to 7 million, the fit is less good, 
but it still adds weight. Conversely, the discovery of a Mersenne 
prime with an exponent around 4.5 million, or around 9 million, 
would do considerable damage to the statistical evidence for the 
conjecture.

In fact, although the "smooth distribution" model is hard to 
disprove, it would be actually be surprising if there weren't 
"clusters" of some sort in the distribution - even if we can't explain 
the underlying reasons. Although it doesn't seem to apply to 
Mersenne numbers in particular, there is an interesting treatment of 
"irregularities" in the distribution of prime numbers in chapter 3 of 
"Prime Numbers and Computer Methods for Factorization" by Hans 
Riesel (2nd ed) (Birkhauser, 1994) (ISBN 0-8176-3743-5, also 
3-7643-3643-5), e.g. the numbers 21,817,283,854,511,250 + p for p 
= 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59 and 61 are, very 
surprisingly, all prime!
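
(Anyone who wants to check Riesel's example can do so in a few seconds 
with any arbitrary-precision primality test - e.g. this quick sketch 
using Python's sympy, which is obviously not something from the book:)

from sympy import isprime

base = 21817283854511250
offsets = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]

# Riesel's claim: base + p is prime for every one of these fourteen offsets.
print(all(isprime(base + p) for p in offsets))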
> 
> There are still plenty of exponents to check below 6M, so there could very
> possibly be another undiscovered prime *less* than the one being examined
> right now, although the prize is awarded to the *first* megadigit prime
> *found*, not the first one above 1M digits... :-)  For that matter, there
> are still a few double-checks left to prove that M3021377 is the 37th
> Mersenne Prime, or that M2976221 is the 36th.  For all we know, there's
> another one lurking somewhere in the 2M range...

True enough. Though double-checking is approaching the 3 million 
range quite fast now.
> 
> While the prize money would be nice, I for one would think it cool enough
> just to find one, one reason I don't mind having even some fast machines
> doing double-check work.  I hope people aren't going to be discouraged just
> because there's no big juicy carrot on a stick in front of them after
> this...

Not me. Though maybe I'll switch one PII (of four) from primary 
tests to double checking, and maybe start a second P100 running 
Yves Gallot's proth program instead of double checks. (Actually I've 
been running proth on one P100 for about 3 months now, and have 
"discovered" two new prime numbers with >20,000 digits)

The point here is that there is no particular reason to suspect that 
the underlying distribution of Mersenne primes should be any 
different from those of numbers of the form k*2^n+1. But there are a 
lot more Proth numbers in the "quickly testable" range than there 
are Mersenne numbers, so we may be able to get the statistical 
data neccessary to (dis)prove the theory more quickly from testing 
Proth numbers than by sticking to Mersenne numbers.
Regards
Brian Beesley

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Repeating LL remainders

1999-05-25 Thread bjb

[EMAIL PROTECTED] writes:

>We can illustrate working modulo M23 = 8388607 = 47 * 178481.
>3 is a quadratic residue modulo 47 (e.g., 12^2 == 3 mod 47),
>so 2 + sqrt(3) is in the base field of order 47.
>The multiplicative order of 2 + sqrt(3) divides 47-1, and turns out to be 23.
>
>3 is a quadratic nonresidue modulo q = 178481.  The order of 2 + sqrt(3)
>divides q + 1 = 178482 = 2 * 3 * 151 * 197 and turns out to be 178482.
>
>The least common multiple of these orders is
>2 * 3 * 23 * 151 * 197.  So L(2 * 3 * 23 * 151 * 197) == L(0) mod M23.
>
>For the L sequence, we need two powers of 2 whose difference is
>divisible by 2 * 3 * 23 * 151 * 197.
>The orders of 2 modulo 3, 23, 151, 197 are 2, 11, 15, 196, respectively.
>The order of 2 modulo 3 * 23 * 151 * 197 is the least common multiple
>of these orders, namely 2^2 * 3 * 5 * 7^2 * 11 = 32340.
>To include a factor of 2, we need L(2^32341) == L(2^1) mod M23.
>That is, S(32341) == S(1) mod M23.
>This is far more than 23 LL steps before the sequence repeats.
>
>EXERCISE: Repeat this analysis modulo M11 = 23 * 89.
>  Find the period modulo M29 = 233 * 1103 * 2089, after getting
>  the individual periods for each prime factor.

Yes - the result for M29 is quite interesting due to the "unexpected" short
period of 60. (I feel after a week I'm not spoiling it for anyone by giving
the result away...)

Incidentally the first repeating remainder for _all_ the non-prime Mersenne
numbers with prime exponents is S(1) = 14.

I was interested to see what would happen for larger exponents, in particular
if the first repeating remainder would always be 14. Unfortunately this is
not true, the first repeating remainder for M37 is S(5) = S(516929). Finding
this took a fair bit of CPU time as, to start with, I more or less assumed
that the remainder would eventually go back to 14 (Peter's impeccable logic
seemed to suggest this to my inadequate maths!) so I had to run through
2^37-1 calculations to be sure that 14 never recurred, and even then I wasn't
sure I wasn't fighting a program bug.
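
The experiment amounts to something like this little Python sketch (not 
the program I actually used, and only practical for tiny exponents - for 
M37 you would want a memory-free cycle finder such as Brent's algorithm 
rather than a lookup table):

def first_repeat(p):
    # Iterate S(n+1) = S(n)^2 - 2 modulo 2^p - 1, starting from S(0) = 4,
    # until some remainder recurs.  Returns (index of the first remainder
    # that later repeats, length of the cycle).
    m = (1 << p) - 1
    seen = {}
    s, n = 4, 0
    while s not in seen:
        seen[s] = n
        s = (s * s - 2) % m
        n += 1
    return seen[s], n - seen[s]

# e.g. first_repeat(11) for M11 = 23 * 89, or first_repeat(23) for M23.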

Now I have a _much_ faster way of checking, assuming that the first repeating
remainder will be one of the first few in the (S) sequence. But is this true?
Instinctively one feels that it ought to be, but can one construct an example
where the index n of the first repeating remainder S(n) is large compared
with the period? My math is totally inadequate to investigate this... (I'm
only interested in the case p prime, M(p) composite. Of course when M(p) is
prime then n=p and the period is 1)

Also, Peter's logic seems to depend on knowing the factors. Would it be
possible to "reverse engineer" at least one of the factors of M(p) (or at
least be in a much-improved position to find them) if one knew the repeat
period and/or the first repeating remainder in the S sequence for M(p)?
The reason I ask is that my "experimental" method of finding the period &
remainder does not depend on _any_ knowledge of the factorization of M(p),
and I reckon I could get the values for, say, M727 with the expenditure of
a great deal less computer time than completing the search to B1=44*10^6
using ECM would take.

Regards
Brian Beesley



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm