Lucas Wiman <[EMAIL PROTECTED]> wrote:
>Ok, it seems doubtful that we will ever be able to find a test more efficient
>than the LL test. The LL test involves p multiplications mod Mp, which is
>pretty impressive.
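(For concreteness, the p multiplications mod Mp referred to above amount to
p-2 modular squarings. A minimal Python sketch of the test - illustrative
only, nothing like the FFT-based big-integer arithmetic GIMPS actually uses:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2**p - 1 (p an odd prime).

    Starting from s = 4, iterate s -> s**2 - 2 (mod M_p) a total of
    p-2 times; M_p is prime exactly when the final residue is zero.
    """
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The Mersenne prime exponents among these are 3, 5, 7, 13, 17, 19, 31:
print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31]
```
)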
I think we need to look beyond today's technology before attempting to predict
such things. For example, consider quantum computation (QC). When this
becomes a reality (note I don't say "if" - that's also a conjecture), it will
revolutionize certain algorithms (e.g. factoring and list searching) which
involve correlating nonlocal data. With current "classical" computational
models, such tasks take longer as the number to be factored (or list to be
searched) grows, because classical computers can perform only a limited (and
generally quite small) number of correlations (a.k.a. machine operations)
per clock cycle - increasing the clock frequency helps reduce the runtime,
but does not reduce the operation cost. To help with the latter we need
improved algorithms or (as with quantum computing) a fundamentally different
way of manipulating the data in question.

The seemingly magical aspect of
QC is due to a fundamental property of quantum-mechanical wavefunctions,
namely nonlocality - the wavefunction amplitude of, say, an electron or
nuclear spin state may decrease exponentially with distance from the
"particle" in question, but it only truly reaches zero at infinity. This
is responsible for the quantum "spookiness" apparent in famous experiments
such as the double-slit or two-photon "teleportation" experiment. Nonlocality
in itself is spooky - not so much in the sense of something pervading all
of space as in the sense that even though the amplitude of something becomes
arbitrarily small, its influence need not. Two photons prepared in a
correlated state remain correlated until the wavefunction of one of them is
"measured," at which point we can say with complete certainty what the
wavefunction of the other was, no matter how far apart they have become in
the interim. But if one accepts nonlocality as a fact (and an ever-
increasing body of experiments supports just that), then one begins to
understand the computational possibilities. For example, if some algorithm
involves correlating N bits in a given fashion, a quantum computer can do
this simultaneously for all the bits in question, because all of their
wavefunctions overlap. One is limited only by (i) the number of quantum
bits (qubits) one can reliably manipulate in the desired way, and (ii)
the "clock cycle," i.e. the length of time it takes for an individual
qubit to flip from one state to another. The first of these is currently
the more daunting due to the phenomenon of decoherence, which is what
happens when a qubit's wavefunction interacts with the noisy real world,
but researchers are coming up with all sorts of schemes to deal with this
problem, which only a few years ago seemed intractable. Whenever something
is shown to be physically possible and promises such a huge gain, human
ingenuity seems to find a way to make it a reality.
The Mersenne-related point is that QC might someday make nonfactorial
primality testing (e.g. Lucas-Lehmer for Mersennes) obsolete because it
would be quicker to directly search for all factors <= sqrt(number).
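(To see the classical cost such a quantum search would beat: a classical
factoring run can exploit the standard fact that any prime factor q of M_p
has the form q = 2*k*p + 1, but it must still walk those candidates one at
a time up to sqrt(M_p). A small Python sketch - the k_max cutoff is just an
arbitrary illustration parameter:

```python
def trial_factor(p, k_max=10**6):
    """Classical trial factoring of M_p = 2**p - 1.

    Every prime factor q of M_p has the form q = 2*k*p + 1, so only
    those candidates need be tried, up to sqrt(M_p).  Returns the
    smallest factor found, or None if none exists below the cutoff.
    """
    m = (1 << p) - 1
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        if q * q > m:
            return None          # all candidates <= sqrt(M_p) exhausted
        if pow(2, p, q) == 1:    # q divides 2**p - 1
            return q
    return None

print(trial_factor(11))  # M_11 = 2047 = 23 * 89, so -> 23
```
)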
>However, I agree with your prediction because there
>are much more targeted types of numbers with similar running times (e.g.
>Proth numbers). As Chris Nash pointed out, a Proth test would have taken
>1/4th the time of the LL test on the Mersenne found in June on a number of
>precisely 1,000,000 digits.
Not true - that assumes one can test a non-Mersenne and a Mersenne number
of similar size in about equal time, whereas in reality a Lucas-Lehmer
test of a p-bit Mersenne number (or Pepin test of a p-bit Fermat number)
will run several times faster than a test of a p-bit Proth number of any
other form, due to the fact that the former admit a discrete-weighted-
transform (DWT) based arithmetic, which eliminates the need both for zero-
padding of the vector to be squared and for explicit modding at the end
of each squaring step. In practice this appears to allow a number of
given size to be modularly squared about 3-5 times faster than sans DWT.
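(The underlying reason no explicit modding is needed is the identity
2^p == 1 (mod M_p): reduction modulo a Mersenne number amounts to folding
the high bits back onto the low bits, which the DWT performs implicitly
inside the transform. As a standalone illustration of that reduction - not
the DWT itself - a short Python sketch:

```python
def mod_mersenne(x, p):
    """Reduce x modulo M_p = 2**p - 1 using only shifts, masks and adds.

    Since 2**p == 1 (mod M_p), the bits of x above position p simply
    fold back onto the low p bits; no division is ever performed.
    """
    m = (1 << p) - 1
    while x > m:
        x = (x & m) + (x >> p)
    return 0 if x == m else x

print(mod_mersenne(2**40 + 12345, 13))  # agrees with (2**40 + 12345) % 8191
# -> 4156
```
)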
There are two further reasons why non-DWT classes of numbers will be less
likely to yield a world-record-sized prime:
(1) volunteers for distributed computing efforts will always tend to find
the prospect of finding a world-record-size something more interesting
than something arcane like 'the largest known irregular Huffergnüten
prime octuplet,' i.e. GIMPS will attract more compute cycles because
it holds the current prime record, and by a whopping margin; and
(2) among the Proths, the fact that there are so many possible kinds of
candidates dilutes the effort, i.e. makes it vastly more time-consuming
to test all the candidates below any reasonably sized threshold.
For these reasons (and because, among the only other class of
numbers known to admit DWT arithmetic, the Fermats, there are vastly fewer
primality candidates than among the Mersennes), I predict that a Mersenne
prime will hold the record for a long time to come.
Cheers,
-Ernst
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers