[Tim]
> > Steven's numbers are pretty baffling to me, since these are all composite
> > and so iterating Miller-Rabin "should get out" pretty fast:
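(For readers who haven't seen one, here is a minimal sketch of the kind of
Miller-Rabin loop being talked about; the function names, the fixed
small-prime pre-check, and the round count are illustrative choices, not
code from either poster.  The point is that for a composite n, at least
3/4 of the possible bases are witnesses, so the loop almost always bails
out after the first round or two:

    import random

    def miller_rabin_round(n, a):
        # One Miller-Rabin round with witness base a; returns False if a
        # proves n composite, True if n is a strong probable prime to base a.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False

    def is_probable_prime(n, rounds=20):
        # Illustrative driver: iterate Miller-Rabin with random bases.
        # For composite n, each random base is a witness with probability
        # >= 3/4, so composites are usually rejected very quickly.
        if n < 2:
            return False
        for p in (2, 3, 5, 7):
            if n % p == 0:
                return n == p
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if not miller_rabin_round(n, a):
                return False
        return True

)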
[Steven]
> That's because you haven't seen my code :-)

I'd be baffled anyway ;-)

> about 4.7 seconds to test 2**800 + 1;

I got 1.71 msec, over a thousand times faster.

> less than a tenth of a millisecond to test 2**900 + 1;

I got 2.32 msec, over 20 times _slower_(!).

> and about 8.6 seconds to test 2**1000 + 1.

And I got 3.14 msec, again over a thousand times faster.

> It's over-engineered, class-based, and written as a learning exercise.
> There's a compatibility layer so it will work back to Python 2.4 and an
> instrumentation layer which I inserted in a fit of enthusiasm to gather
> statistics, which I have never once looked at since.

While my code had no fluff at all.  It's hard to believe you could add
enough cruft to slow it down by a factor of 1000, but pretty much
impossible to believe adding cruft could make it way faster in the middle
case above.  Or do you first try division by small primes?  2**900+1
could get out cheap in that case, because it happens to be divisible by 17.

> And *even so* it is still fast enough for casual use at the interactive
> interpreter, compared to more naive algorithms with worse Big O
> performance characteristics.
>
> You might think 5 seconds is slow, but I'm serious when I say some of
> the other algorithms I played with would take *days* to generate,
> or check, largish primes.

No argument from me.  There's a reason I wrote my `pp()` to begin with
too ;-)  State-of-the-art code for these things requires planet-sized
brains and commitment, but even a little expert knowledge can yield
simple code that's an enormous improvement over any "dead obvious"
approach.
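(A quick way to see the point about trial division by small primes: the
sketch below, with a small-prime list chosen purely for illustration and
not taken from either poster's code, finds the factor 17 of 2**900 + 1
immediately, while 2**800 + 1 has no factor that small and would fall
through to the probabilistic test:

    # Illustrative "try small primes first" shortcut; the prime list is
    # an arbitrary choice for this example.
    SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

    def small_prime_factor(n):
        # Return a small prime factor of n if one exists, else None.
        for p in SMALL_PRIMES:
            if n % p == 0 and n != p:
                return p
        return None

    print(small_prime_factor(2**900 + 1))  # 17 -- this case exits cheaply
    print(small_prime_factor(2**800 + 1))  # None -- needs the full test

)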