On 2013-10-31 14:05, Chris Angelico wrote:
> On Fri, Nov 1, 2013 at 12:17 AM, Alain Ketterlin
> <al...@dpt-info.u-strasbg.fr> wrote:
"E.D.G." <edgrs...@ix.netcom.com> writes:
>>> The calculation speed question just involves relatively simple
>>> math: multiplications and divisions, plus trig functions such as
>>> sin and tan.
>> These are not "simple" computations.
>> Any compiled language (Fortran, C, C++, typically) will probably go much
>> faster than any interpreted/bytecode-based language (like Python or
>> Perl, or anything else that does not use a JIT).
> Well, they may not be simple to do, but chances are you can push the
> work down to the CPU/FPU on most modern hardware - that is, if you're
> working with IEEE floating point, which I'm pretty sure CPython always
> does; not sure about other Pythons. No need to calculate trig functions
> yourself unless you need arbitrary precision (and even then, I'd bet the
> GMP libraries have that all sewn up for you). So the language doesn't
> make a lot of difference.
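(For what it's worth, the representation claim checks out: on any common
platform a CPython float just wraps a C double, i.e. an IEEE 754 binary64
value. A quick, non-authoritative check:

    import struct
    import sys

    print(struct.calcsize('d'))      # 8 bytes: a C double on this platform
    print(sys.float_info.mant_dig)   # 53-bit mantissa -> IEEE 754 binary64

But the representation isn't the whole story.)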
Sure the language makes a difference. Python boxes floats into a PyObject
structure. Both Python and C will ultimately implement the arithmetic of
"a + b" with an FADD instruction, but Python does a bunch of pointer
dereferencing, hash lookups, and function calls before it gets down to that.
All of that overhead typically outweighs the floating-point computation down
at the bottom, even for the more expensive trig functions.
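For a rough sense of what surrounds that one FADD, the dis module shows the
bytecode CPython has to dispatch (opcode names vary a bit across versions;
this is just an illustration):

    import dis

    def f(a, b):
        return a + b

    dis.dis(f)
    # prints roughly: LOAD_FAST a, LOAD_FAST b, BINARY_ADD, RETURN_VALUE.
    # Each opcode goes through the interpreter loop, and the add itself has
    # to check the operand types, unbox the two PyFloat objects, do the
    # hardware add, and allocate a new PyFloat to box the result.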
This is where numpy comes in. If you can arrange your computation over
arrays, the arrays only need to be unboxed once, and the rest of the
arithmetic happens in C.
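A rough sketch of the difference (untested; exact numbers depend on the
machine and the array size, and numpy is assumed to be installed):

    import math
    import timeit
    import numpy as np

    xs = [i * 0.001 for i in range(100000)]
    arr = np.array(xs)

    def pure_python():
        # every element is a separate PyFloat; each sin() call and each
        # multiply/add goes back through the interpreter
        return [math.sin(x) * 2.0 + 1.0 for x in xs]

    def vectorized():
        # one pass over a C array of doubles; no per-element boxing
        return np.sin(arr) * 2.0 + 1.0

    print(timeit.timeit(pure_python, number=100))
    print(timeit.timeit(vectorized, number=100))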
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
--
https://mail.python.org/mailman/listinfo/python-list