On Friday 16 January 2004 06:10, Max wrote:
>
> It would also be interesting to learn how often the first run is bad, and
> how often the second?

Yes - I don't think this information is readily available, though sometimes 
you can infer the order of completion from the program version number.

To do the job properly, either the "bad" database would need an extra field 
(date of submission) or a complete set of "cleared.txt" files would be 
required - and even the latter would miss any results submitted manually.
>
> It seems to me that the first run should be bad more often than the second.
> Is that true? My reasoning is that the first run is usually done on modern
> (fast/overclocked/unstable/etc.) hardware while the second one is done on
> older/slower but more stable/trusted hardware.

Interesting theory - but surely the error rate would be expected to be 
proportional to the run length in wall-clock time, which would tend to make 
fast hardware appear relatively more reliable simply because it is exposed 
for fewer hours per test. Conversely, the smaller / lower-power components 
required to achieve high speed are more subject to quantum tunnelling 
errors - for those who think in terms of cosmic rays, this means a less 
energetic particle hit is enough to flip the state of a bit.
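
To put a rough number on the run-length effect: treat soft errors as a 
Poisson process with a fixed rate per hour of exposure, so the chance of a 
bad result scales with how long the box is running, not how fast it is 
clocked. The rate and the run times below are made-up figures, purely to 
show the shape of the argument:

import math

# Toy model only - the hit rate and run lengths are invented numbers.
ERROR_RATE_PER_HOUR = 1e-4   # assumed uncorrected-error rate (hypothetical)

def p_bad_result(run_hours, rate=ERROR_RATE_PER_HOUR):
    # Probability of at least one error somewhere during the run.
    return 1.0 - math.exp(-rate * run_hours)

fast_run = 30 * 24   # fast box: ~30 days per exponent (made up)
slow_run = 90 * 24   # slow box: ~90 days for the same exponent (made up)

print(f"fast: {p_bad_result(fast_run):.3f}, slow: {p_bad_result(slow_run):.3f}")

For small rates this is approximately rate * hours, so in this toy model the 
slow machine is roughly three times as likely to return a bad residue - per 
test, the fast (even mildly overclocked) hardware can still come out ahead.
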

In any case, the exponents around 10,000,000 which are being double-checked 
now were originally tested on "leading edge" hardware about 4 years ago, 
when overclocking was by no means unknown but was often done without the 
sort of sophisticated cooling that is readily available these days.

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
