Stefan Struiker wrote (re. the reliability of double-checking):

> How many DDs does it take to "reasonably establish" primality, given the
> apparent slight error in the L-L test?  Is there an extra-precision
> version of the test to nail things down?

A few years ago, you would likely have heard quite a few people say
that such a result could only be acceptable if confirmed via an all-
integer test, i.e. one devoid of roundoff error.

Now that the reliability of carefully written floating-point-based
codes for primality testing has been established to most everyone's
satisfaction, the currently accepted standard is two independent
tests (floating-point or not) running on separate hardware, preferably
using different programs. George has pretty much obviated the latter
criterion by using a binary-shifted start value for the LL test: even
running the same code on the same exponent and the same hardware twice
in a row will, p-1 times out of p, involve completely different FFT
vectors (albeit ones whose digits are of similar magnitude). If one
really needs to convince the remaining critics (e.g. if the first test
indicates primality), one can simply make sure the double-check is
done using an independent program.
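
To make the shifted-start-value idea concrete, here is a toy sketch in
C (my own illustration, not George's code: a real LL test uses FFT-based
multi-word arithmetic, whereas this one squares 64-bit integers and so
only handles tiny exponents). The two runs below use different shifts,
so every intermediate residue differs, yet a shifted zero is still zero
and the verdicts agree:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy Lucas-Lehmer test of M_p = 2^p - 1 with a shifted start value.
       Requires p an odd prime <= 31, so that s*s fits in 64 bits.        */
    static int ll_is_prime(unsigned p, unsigned shift)
    {
        uint64_t M = ((uint64_t)1 << p) - 1;
        unsigned r = shift % p;
        uint64_t s = ((uint64_t)4 << r) % M;    /* s0 = 4 * 2^r mod M_p */

        for (unsigned i = 0; i < p - 2; i++) {
            s = (s * s) % M;                    /* squaring doubles the shift...    */
            r = (2 * r) % p;                    /* ...mod p, since 2^p == 1 mod M_p */
            s = (s + M - (((uint64_t)2 << r) % M)) % M;  /* the "-2" term is shifted too */
        }
        return s == 0;                          /* a shifted zero is still zero */
    }

    int main(void)
    {
        /* same exponent, two different shifts: the intermediate data differ
           completely between the runs, but the verdicts are identical       */
        printf("%d %d\n", ll_is_prime(31, 0), ll_is_prime(31, 17));
        return 0;
    }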

Nathan Russell wrote:

> IIRC, there is no real difference in the double-check and the
> first-time test.

On average, only 1 time in p is there really no difference, for the
reasons I cited above.
Note that this also applies to cases where Prime95 was used for one
run and a different code for the other.

An interesting sidelight to the all-integer vs. floating-point debate:
all-integer is generally thought to be more reliable because it suffers
no roundoff (RO) errors, but it's become quite clear that RO errors
are not the major culprit in erroneous runs, and moreover can be
detected quite reliably. It's the other system components (especially
on PC-class hardware, where profit margins are thin to nonexistent and
corners must be cut) that more often cause trouble: non-ECC memory,
CPU overheating, unexpected (well, perhaps not under Windows :)
interactions between various applications, actual flaws in the CPU
or motherboard, etc. All of these can ruin a calculation, whether
that be floating-point or all-integer.
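
For the curious, the RO-error detection itself is simple in principle.
A rough sketch of the general scheme (the exact threshold and recovery
policy vary from program to program, so treat the 0.4 below as a
placeholder): after the final inverse transform of each squaring, every
output element should lie very close to an integer, so one simply
monitors the worst-case distance to the nearest integer.

    #include <math.h>

    /* Return the largest distance-to-nearest-integer over the FFT output. */
    static double max_roundoff_error(const double *a, int n)
    {
        double maxerr = 0.0;
        for (int i = 0; i < n; i++) {
            double err = fabs(a[i] - floor(a[i] + 0.5));  /* distance to nearest int */
            if (err > maxerr)
                maxerr = err;
        }
        return maxerr;
    }

    /* typical use: if (max_roundoff_error(v, n) > 0.4) then redo the
       iteration, e.g. at a larger FFT length, and flag the result.   */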

The other thing that is often overlooked is that even an all-integer
run can suffer a non-hardware-related kind of error, namely of the
overflow variety. Unlike RO error, this can be difficult (or at the
very least expensive) to detect, especially for things like a modern
Mersenne-testing algorithm, which never explicitly finds the unmodded
square of the input number, i.e. does not permit a casting-out-nines-
style checksum. Some will say, "why not just enable overflow trapping?",
but most performance-oriented CPUs have no hardware support for such
trapping, because it is inimical to speed. Instead, one must trap in
software, using something like the following sequence (shown here in C):

             unsigned int x, y, z;  /* x, y, z are register-length ints */
             int of_trap;
             z = x + y;             /* the sum may wrap around (overflow)... */
             of_trap = (z < x);     /* ...in which case we detect it via an
                                        unsigned compare of z with x (or y) */

...which of course slows things down considerably.
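
To see why never forming the unmodded square hurts, recall how the
casting-out-nines-style check works on an ordinary (unreduced) product;
once the product has been reduced mod N the congruence generally fails,
because the discarded quotient is exactly the information one would
need. A small-number illustration (the toy modulus and variable names
are mine, purely for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t x = 1234, y = 5678, N = 1000;       /* toy modulus           */
        uint64_t full    = x * y;                    /* unreduced product     */
        uint64_t reduced = full % N;                 /* product reduced mod N */
        uint64_t check   = ((x % 9) * (y % 9)) % 9;  /* cheap mod-9 checksum  */

        printf("%d\n", full % 9 == check);      /* 1: holds for a correct product */
        printf("%d\n", reduced % 9 == check);   /* 0 in general: the mod-N
                                                   reduction breaks the congruence
                                                   unless the quotient full/N is
                                                   also known                     */
        return 0;
    }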

Cheers,
-Ernst

