Tim Peters added the comment:

Serhiy, I don't know what you're thinking there, and the code doesn't make much 
sense to me.  For example, consider n=2.  Then m == n, so you accept the 
initial `g = x**(1.0/n)` guess.  But, as I said, there are cases where that 
doesn't give the best result, while the other algorithms do.  For example, on 
this box:

>>> serhiy(7.073208563506701e+46, 2)
2.6595504438733062e+23
>>> pow(7.073208563506701e+46, 0.5)  # exactly the same as your code
2.6595504438733062e+23

>>> nroot(7.073208563506701e+46, 2)  # best possible result
2.6595504438733066e+23
>>> import math
>>> math.sqrt(7.073208563506701e+46) # also achieved by sqrt()
2.6595504438733066e+23
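
To check a "best possible" claim like that without trusting any float 
operation, compare the candidates in exact rational arithmetic.  A 
throwaway sketch of mine, not anything from the patch:

from fractions import Fraction
import math

x = 7.073208563506701e+46
X = Fraction(x)   # the double x, as an exact rational

def sq_err(g):
    # |g*g - x|, computed exactly.  Among doubles near sqrt(x), the
    # one minimizing this is the correctly rounded square root.
    return abs(Fraction(g) ** 2 - X)

print(min([pow(x, 0.5), math.sqrt(x)], key=sq_err))

On this box that picks math.sqrt's 2.6595504438733066e+23.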

On general principle, you can't expect to do better than plain pow() unless you 
do _something_ that gets the effect of using more than 53 mantissa bits - 
unless the platform pow() is truly horrible.  Using pow() multiple times is 
useless; doing Newton steps _in_ native float (C double) precision is useless; 
even some form of "binary search" is useless because just computing g**n (in 
native precision) suffers rounding errors worse than pow() suffered to begin 
with.
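
One cheap way to get the effect of more bits is a single Newton step 
done in exact rational arithmetic via the fractions module.  A minimal 
sketch (my own names, ignoring x <= 0 and other edge cases - I'm not 
claiming this is what nroot does internally):

from fractions import Fraction

def nroot_sketch(x, n):
    # Native guess first - already good to within a few ulps.
    g = Fraction(x ** (1.0 / n))
    X = Fraction(x)
    # One Newton step for g**n == x, carried out in rationals so the
    # correction isn't itself a victim of 53-bit rounding:
    #     g <- g - (g**n - x) / (n * g**(n-1))
    g -= (g ** n - X) / (n * g ** (n - 1))
    # float() rounds the rational correctly to the nearest double.
    return float(g)

Since the initial guess is within an ulp or two, one exact step leaves 
an error far below half an ulp, so the final float() should deliver the 
correctly rounded root - 2.6595504438733066e+23 for the example above.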

So long as you stick to native precision, you're fighting rounding errors at 
least as bad as the initial rounding errors you're trying to correct.  There's 
no a priori reason to even hope iterating will converge.
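
You can watch that play out with a throwaway float-only experiment; 
whether the guess moves, stalls, or oscillates depends entirely on how 
g**n happens to round:

x, n = 7.073208563506701e+46, 2
g = x ** (1.0 / n)
for i in range(5):
    # Everything here is 53-bit: g**n is rounded before the
    # subtraction, so the "correction" is as noisy as the error
    # it's trying to repair.
    g = g - (g ** n - x) / (n * g ** (n - 1))
    print(i, g)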

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue27761>
_______________________________________