Tim Peters <t...@python.org> added the comment:

I agree with everyone ;-) That is, my _experience_ matches Mark's:  as a 
more-or-less "numeric expert", I use Fraction in cases where it's already fast 
enough. Python isn't a CAS, and, e.g., in pure Python I'm not doing things like 
computing or composing power series with rational coefficients (but routinely 
do such stuff in a CAS). It's usually just combinatorial probabilities in 
relatively simple settings, and small-scale computations where float precision 
would be fine except I don't want to bother doing error analysis first to 
ensure things like catastrophic cancellation can't occur.
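
As a concrete (made-up) illustration of the kind of small-scale exact 
computation I mean:

    from fractions import Fraction
    from math import comb

    # Exact probability of exactly 3 heads in 10 fair coin flips.
    # Fraction keeps everything exact, so no error analysis is needed
    # to rule out float rounding surprises.
    p = Fraction(comb(10, 3), 2**10)
    print(p)         # 15/128
    print(float(p))  # 0.1171875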

On the other hand, the proposed changes are bog standard optimizations for 
implementations of rationals, and can pay off very handsomely at relatively 
modest argument sizes.
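
For multiplication, here's a minimal sketch of the standard trick (mul is a 
hypothetical helper taking numerator/denominator pairs assumed to be in 
lowest terms):

    from math import gcd

    def mul(an, ad, bn, bd):
        # (an/ad) * (bn/bd), both inputs in lowest terms. Cancel
        # common factors across the diagonals first: the expensive
        # multiplies then act on smaller operands, and the result is
        # already in lowest terms, so no gcd of the full products is
        # ever needed.
        g1 = gcd(an, bd)
        g2 = gcd(bn, ad)
        return (an // g1) * (bn // g2), (ad // g2) * (bd // g1)

For example, mul(3, 4, 8, 9) cancels down to (2, 3) directly, without ever 
forming 24/36.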

So I'm +0. I don't really mind the slowdown for small arguments because, as 
above, I just don't use Fraction for extensive computation. But the other side 
of that is that I won't profit much from optimizations either, and while the 
optimizations for * and / are close to self-evident, those for + and - are much 
subtler. Code obscurity imposes ongoing costs of its own.
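
For contrast, a sketch of the subtler addition scheme (it's the one Knuth 
gives in TAOCP volume 2, section 4.5.1; add is again a made-up helper on 
pairs in lowest terms):

    from math import gcd

    def add(an, ad, bn, bd):
        # (an/ad) + (bn/bd), both inputs in lowest terms.
        g = gcd(ad, bd)
        if g == 1:
            # Coprime denominators: the schoolbook formula already
            # yields lowest terms, so the reduction pass is skipped.
            return an * bd + bn * ad, ad * bd
        s = ad // g
        t = an * (bd // g) + bn * s
        # Any common factor remaining between t and the new
        # denominator must divide g, so a gcd against the small g
        # replaces a gcd against the full-sized products.
        g2 = gcd(t, g)
        if g2 == 1:
            return t, s * bd
        return t // g2, s * (bd // g2)

The payoff is that the expensive gcd of full-sized products is replaced by a 
gcd of the two denominators plus, at worst, a cheap gcd against the small g - 
but it takes real effort to see at a glance that it's even correct.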

WRT which, I added Python's Karatsuba implementation and regret doing so. I 
don't want to take it out now (it's already in ;-) ), but it added quite a pile 
of delicate code to what _was_ a much easier module to grasp. People who need 
fast multiplication are still far better off using gmpy2 anyway (or fiddling 
Decimal precision - Stefan Krah implemented "advanced" multiplication schemes 
for that module).
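
To illustrate the Decimal angle (decimal_mul is a made-up helper, not 
anything in the module):

    from decimal import Decimal, localcontext

    def decimal_mul(a: int, b: int) -> int:
        # Multiply big ints by way of Decimal, where libmpdec's fast
        # large-coefficient multiplication applies. The precision is
        # a generous overestimate of the product's digit count, so
        # the product is computed exactly.
        with localcontext() as ctx:
            ctx.prec = (a.bit_length() + b.bit_length()) // 3 + 2
            return int(Decimal(a) * Decimal(b))

Where the crossover against CPython's int * sits depends on the build, but 
it only pays for quite large operands.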

----------
nosy: +tim.peters

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue43420>
_______________________________________