On 20 Feb 2001, at 22:17, tom ehlert wrote:

> I personally don't mind a P100 running full LL tests; they are a small but
> useful contribution to our effort.
> [... snip ...]
> Even a P100, running 8 hours a day, will finish M12000000 within 2 years,
> 8 days; I will happily wait for it. The gap between our 12.xx and 5.xx is
> larger than the current progress (on the front), which is roughly
> 2,500,000/year.

True enough. There are a few issues here:

(a) most users don't seem prepared to wait two years to complete a 
run - even two months is asking a bit much of relatively new users;

(b) a working hypothesis is that random glitches causing runs to go 
wrong happen at a rate which is independent of the system speed - so 
that a given assignment is more likely to give the correct final 
residue if it is run on a fast system rather than a slow one. When 
we're talking about assignments taking years to complete, the chance 
of a random glitch spoiling the result seems to be very real (some 
rough figures follow this list);

(c) so far as PC systems are concerned, the cost of the power used to 
complete an assignment (together with the related environmental 
damage in terms of carbon emissions, if you're worried about global 
warming) is wholly dependent on the run time. Depending on how much 
you pay for utility power, there comes a point where it's actually 
cheaper in the long term to buy new hardware and run it for fewer 
hours in order to achieve the same processing rate (a rough worked 
example follows this list).
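To put a number on (b): if glitches strike at some fixed rate per 
machine-month regardless of CPU speed, the probability of an error-free 
run falls off exponentially with the run time. The sketch below is 
purely illustrative - the 1%-per-month glitch rate is an invented 
figure, not a measurement:

    # Chance that a run completes without a glitch, assuming glitches
    # arrive at a fixed rate independent of CPU speed (a Poisson model).
    # The 1%-per-month rate is an invented figure, for illustration only.
    import math

    GLITCH_RATE = 0.01          # assumed glitches per machine-month

    def p_clean(run_months):
        """Probability that no glitch occurs during a run of run_months."""
        return math.exp(-GLITCH_RATE * run_months)

    print("2-month run : %.1f%% clean" % (100 * p_clean(2)))    # ~98.0%
    print("24-month run: %.1f%% clean" % (100 * p_clean(24)))   # ~78.7%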
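And to illustrate (c), here is a rough cost comparison. Every figure 
below (wattage, electricity price, run time) is an assumption made up 
for the example; plug in your own numbers:

    # Rough electricity-cost comparison for one assignment.  All figures
    # here (wattage, price per kWh, run time) are assumptions, not
    # measurements of any real machine.
    HOURS_PER_YEAR = 24 * 365

    def energy_cost(watts, hours, price_per_kwh):
        """Cost of running a machine flat out for the given hours."""
        return watts / 1000.0 * hours * price_per_kwh

    slow = energy_cost(60, 2 * HOURS_PER_YEAR, 0.10)    # 2-year run, old box
    fast = energy_cost(90, HOURS_PER_YEAR / 6.0, 0.10)  # 2-month run, new box

    print("slow machine: $%.2f per assignment" % slow)  # ~$105
    print("fast machine: $%.2f per assignment" % fast)  # ~$13

Once the saving per assignment, accumulated over the hardware's useful 
life, exceeds the price of the upgrade, the new machine is cheaper 
overall - quite apart from the glitch-risk argument above.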

Personally I don't _mind_ P100s running LL tests - but I'd rather 
they did double-checks, or factoring, and left the bigger jobs to the 
faster processors.

> Here's my proposal:
> On the server side, set a limit (currently LL = 11,000,000,
> DC = 5,000,000).
> 
> Define an RPTM ('reasonably performing, trusted machine') as a machine
> which has returned at least 2 results (since the last synchronization?).
> 
> Then, if a machine asks for new work:
> 
> if it's 'reliable and reasonably fast' (has returned two results)
>       give it the smallest exponent available
> else
>       give it some exponent above the limit.
> 

I think this proposal would (a) deter new users and (b) dry up the 
supply of smaller exponents more quickly than the current scheme, 
thereby making the "slow processor problem" worse than it is at 
present.

To implement this scheme, we'd need to change the decision point from 
the user to the server, since the server would have to keep track of 
which systems are "RPTM". At present the user client asks the server 
for a particular type of assignment. I think most people prefer the 
element of free choice implicit in the present method.
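For what it's worth, the dispatch rule itself would be trivial - 
something like the sketch below. The limit and the "has returned two 
results" test are taken from Tom's proposal; the function name, 
arguments and data shapes are invented for illustration:

    # Sketch of the proposed server-side dispatch rule.  The limit and the
    # "has returned two results" test come from the proposal quoted above;
    # the function name and arguments are hypothetical.
    LL_LIMIT = 11000000     # exponents below this reserved for trusted machines

    def assign_exponent(results_returned, available):
        """Pick an LL exponent for a machine requesting work.

        results_returned -- results this machine has sent back (since last sync)
        available        -- unassigned exponents, sorted ascending
        """
        if results_returned >= 2:              # "RPTM" in the proposal
            return available[0]                # smallest exponent on offer
        # New or unproven machines only get work above the limit.
        for p in available:
            if p > LL_LIMIT:
                return p
        return None                            # nothing suitable right now

    # A brand-new machine skips straight past the small exponents:
    print(assign_exponent(0, [10500000, 10900000, 11200000]))   # -> 11200000
    print(assign_exponent(3, [10500000, 10900000, 11200000]))   # -> 10500000

The hard part is not this rule but, as noted above, moving the choice 
from the user to the server and tracking per-machine history there.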


Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
