On Fri, 14 May 2021 at 19:41, Paul Moore <p.f.mo...@gmail.com> wrote:
>
> On Fri, 14 May 2021 at 16:54, Martin Teichmann
> <martin.teichm...@gmail.com> wrote:
> > That is absolutely what I would like to have. The fractions module is very 
> > small, it can easily be made a builtin. This would also speed it up 
> > significantly, I hope. Probably close to the speed of floats, given that 
> > most of the time spent for floats is within the interpreter anyway.

Exact rational arithmetic is much slower than floating point because
of the gcd calculations and because of bit growth. Even with CPython
interpreter overhead and a highly optimised C implementation,
calculations with simple rationals would still be noticeably slower
than float. And that's only for simple rationals: once you start doing
arithmetic with rationals the bit count typically grows, and the
computational cost of the elementary operations is unbounded if there
is no limit on the bit count.
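To make the bit growth concrete, here is a small sketch (my example,
not Martin's) that sums the harmonic series exactly with Fraction and
records how the denominator's bit count grows: every addition pays for
a gcd, and the operands keep getting wider.

```python
from fractions import Fraction

# Summing 1/1 + 1/2 + ... exactly: each addition needs a gcd and the
# denominator's bit count keeps growing with the number of terms.
total = Fraction(0)
sizes = []
for k in range(1, 101):
    total += Fraction(1, k)
    sizes.append(total.denominator.bit_length())

# Bit count of the denominator after 10, 50 and 100 terms:
print(sizes[9], sizes[49], sizes[99])
```

The cost of each addition is not constant: it depends on the sizes of
the numerators and denominators, which is exactly the unbounded cost
referred to above.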

As an example, consider Gaussian elimination, a basic algorithm for
solving systems of equations, inverting matrices etc. The algorithm is
O(n^3) in the number of elementary add/multiply operations, and if you
implement it for fixed-width floating point and time it on random
matrices you will see the n^3 performance. If you write the same
algorithm using exact rationals, the complexity in terms of bit
operations can grow exponentially. Leaving rationals aside, even
Gaussian elimination over a matrix of exact integers can be done with
modified algorithms, but those are still O(n^5) in bit operations
because of bit growth. This is the basic reason why floating point is
more commonly used than exact rational arithmetic: it provides bounded
computational cost for the elementary operations. For most numerical
applications it is better to have bounded computation in exchange for
all of the other problems that rounding errors and inexact arithmetic
lead to.
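A quick way to see this (my illustration, with arbitrary matrix sizes
and entry ranges) is to run a naive Gaussian elimination over Fraction
and track the largest intermediate entry: the input entries fit in a
few bits, but the intermediates do not.

```python
import random
from fractions import Fraction

def max_bits_in_elimination(rows):
    """Plain Gaussian elimination over exact rationals, with no attempt
    to control pivot size. Returns the largest bit count (numerator
    plus denominator) seen in any intermediate entry."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    max_bits = 0
    for col in range(n):
        # Find a row with a nonzero pivot; skip the column if none.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            continue
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
                bits = (a[r][c].numerator.bit_length()
                        + a[r][c].denominator.bit_length())
                max_bits = max(max_bits, bits)
    return max_bits

random.seed(0)
small = [[random.randint(-9, 9) for _ in range(5)] for _ in range(5)]
large = [[random.randint(-9, 9) for _ in range(15)] for _ in range(15)]
print(max_bits_in_elimination(small), max_bits_in_elimination(large))
```

The entries of the input matrices never exceed 9, yet the intermediate
fractions grow much wider as n increases, which is where the extra
factors of n in the bit complexity come from.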

> Builtins have to be in C, with very few exceptions. That makes it
> harder for alternative implementations, who now have to write their
> own implementation rather than just grabbing the pure Python stdlib
> version. It also makes maintenance harder, and means that bugs take
> longer to get fixed, as fewer people know how to maintain the code.

Yes, but having a faster fraction type would still be great. SymPy
doesn't actually use the fractions module because it's too slow.
Instead SymPy has its own pure Python implementation that is a little
faster, and will use gmpy2's mpq type if gmpy2 is installed. The mpq
type is something like 30x faster than Fraction, and the fmpq type
from python_flint is around twice as fast again, particularly for
small rationals. Those ratios are for small fractions; the more
efficient integer implementations in these libraries mean the speed
difference only widens at larger bit counts. For some operations in
SymPy this speed ratio translates directly into the time taken for a
slow user-level calculation.
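For anyone who wants to measure this themselves, here is a rough
micro-benchmark sketch. The 30x figure above is from my own testing;
the numbers here will vary by machine and version, and gmpy2 is
optional, so the comparison only runs if it is installed.

```python
import timeit
from fractions import Fraction

try:
    from gmpy2 import mpq   # optional: gmpy2 may not be installed
except ImportError:
    mpq = None

def sum_reciprocals(one):
    # Computes 1/1 + 1/2 + ... + 1/200 using whichever rational
    # type `one` belongs to (Fraction or mpq).
    total = one * 0
    for k in range(1, 201):
        total += one / k
    return total

t_fraction = timeit.timeit(lambda: sum_reciprocals(Fraction(1)), number=20)
print(f"Fraction: {t_fraction:.3f}s")
if mpq is not None:
    t_mpq = timeit.timeit(lambda: sum_reciprocals(mpq(1)), number=20)
    print(f"mpq:      {t_mpq:.3f}s  ({t_fraction / t_mpq:.1f}x ratio)")
```

Note that the workload here keeps the fractions fairly small; with
larger bit counts the gap in favour of gmpy2 grows further.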

The increased inherent cost of rational arithmetic actually means
that, given an efficient fraction implementation, pure Python code can
compete for speed with code written in C: the per-operation arithmetic
cost dominates the interpreter overhead. Maybe SymPy is a specialist
library that should use other specialist libraries for faster integer
and rational arithmetic, but to me the Fraction class is clearly the
kind of thing that belongs on the C side of the implementation. The
current Fraction class is great for basic use but too slow for larger
calculations.

--
Oscar
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/PUOJFSZR5UFTVRBUEY7RV37MT5VFNM3V/
Code of Conduct: http://python.org/psf/codeofconduct/
