On Fri, Jul 13, 2018 at 10:40 AM, Steven D'Aprano <st...@pearwood.info> wrote:
> On Fri, Jul 13, 2018 at 04:20:55AM +1000, Chris Angelico wrote:
>> On Fri, Jul 13, 2018 at 4:11 AM, Steven D'Aprano <st...@pearwood.info> wrote:
>> > There is no reason why primality testing can't be deterministic up to
>> > 2**64, and probabilistic with a ludicrously small chance of false
>> > positives beyond that. The implementation I use can be expected to fail
> on average once every 18 thousand years if you did nothing but test
> primes, one every millisecond, 24 hours a day. That's good enough
>> > for most purposes :-)
>>
>> What about false negatives? Guaranteed none? The failure mode of the
>> function should, IMO, be a defined and documented aspect of it.
>
> If a non-deterministic primality tester such as (but not limited to)
> Miller-Rabin says a number is composite, it is definitely composite
> (ignoring bugs in the implementation, or hardware failures). If it says
> it is prime, in the worst case it is merely "probably prime", with an
> error rate of at most 1/4**k, where we can choose k.
>
> (The bigger k is, the more work may be done in the worst case.)
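
To make the asymmetry concrete, here is a minimal Miller-Rabin sketch
(plain stdlib Python; the function names are placeholders, not any
existing API). A False result is always correct; a True result is
wrong for a composite n with probability at most 1/4 per round. For
scale, Steven's 18-thousand-year figure at one test per millisecond is
roughly 5.7e14 tests, which corresponds to an error bound of about
4**-25, i.e. around k = 25 rounds.

    import random

    def miller_rabin(n, bases):
        # Returns False only when some base proves n composite, so a
        # False answer is always correct; True means "no witness
        # found", which for arbitrary bases is only "probably prime".
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0:
            return False
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for a in bases:
            a %= n
            if a == 0:
                continue
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False   # a is a witness: definitely composite
        return True

    def probably_prime(n, k=25):
        # Each random round catches a composite with probability
        # >= 3/4, so a false "prime" has probability at most 4**-k.
        bases = [random.randrange(2, max(3, n - 1)) for _ in range(k)]
        return miller_rabin(n, bases)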

You can say that about algorithms easily enough. My point is that this
ought to be a constraint on the function itself - implementations may
choose other algorithms, but they MUST follow one pattern or the other
- either fully deterministic, or probabilistic with false positives
only and never false negatives - so that a Python script can depend on
the documented failure mode without knowing the implementation. Like
guaranteeing that list.sort() is stable without stipulating the actual
sorting algorithm used.
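
Concretely, the hybrid Steven describes could be pinned down as exactly
that kind of contract. A sketch of what the documented guarantees might
look like, reusing miller_rabin() and probably_prime() from the sketch
above (is_prime is a hypothetical name here; the witness set, though,
is a published result - no composite below 2**64 is a strong
pseudoprime to all of the first twelve primes):

    _SMALL_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

    def is_prime(n):
        """Primality test with a documented failure mode.

        Guarantees, independent of the algorithm used:
        * a False result is always correct (no false negatives);
        * for n < 2**64 the result is exact;
        * for larger n, a True result is wrong with probability
          at most 4**-25.
        """
        if n < 2**64:
            # Deterministic branch: no composite below 2**64 passes
            # Miller-Rabin for all twelve of these bases.
            return miller_rabin(n, _SMALL_BASES)
        return probably_prime(n, k=25)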

ChrisA