Maciej Fijalkowski, 17.10.2011 09:34:
> On Mon, Oct 17, 2011 at 12:10 AM, Armin Rigo wrote:
>> On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote:
>>> Interesting to know. But then, wouldn't this limit the speed gains
>>> to be expected from the JIT?
>> Yes, to some extent. It cannot give you the last bit of performance
>> improvement you could expect from arithmetic optimizations, but (as
>> usual) you already get the several-times improvement of, e.g., removing
>> the boxing and unboxing of float objects. Personally I'm wary of
>> going down that path, because it means that the results we get could
>> suddenly change in their least significant digit(s) when the JIT kicks
>> in. At the very least, there are multiple tests in the standard Python
>> test suite that would fail because of that.
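
As a quick illustration of the effect Armin describes (a toy example,
independent of any particular JIT): floating point addition is not
associative, so an optimizer that regroups arithmetic can legitimately
change the least significant bits of a result.

    # Regrouping a floating point sum changes the result:
    a, b, c = 1e16, 1.0, 1.0
    print((a + b) + c)   # 1e+16 -- each 1.0 is rounded away in turn
    print(a + (b + c))   # 1.0000000000000002e+16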
> The thing is that, as with Python in general, there are scenarios where
> we can optimize a lot (like you said: by doing type specialization,
> folding array operations, or making multithreading decisions at
> runtime) without having to squeeze out the last 2% of performance.
> This is the approach that has worked great for optimizing Python so
> far (concentrate on the larger picture).
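
To make the "folding array operations" point concrete, here is a
hypothetical sketch in plain Python (the function names are made up,
and this is not PyPy's actual machinery): evaluating a*b + c element
by element in a single pass avoids materialising a temporary array for
a*b, and a JIT that specializes on the element type can then keep the
whole x*y + z computation in unboxed floats.

    # Naive evaluation: two passes and a temporary list for a*b.
    def naive(a, b, c):
        tmp = [x * y for x, y in zip(a, b)]
        return [t + z for t, z in zip(tmp, c)]

    # "Folded" evaluation: one pass, no intermediate array.
    def folded(a, b, c):
        return [x * y + z for x, y, z in zip(a, b, c)]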
That's what I meant. It's not surprising that a JIT compiler can be faster
than an interpreter, and it's not surprising that it can specialise generic
code to run several times faster. That's what JIT compilers are there for,
and PyPy does a really good job at it.
It's much harder to match the performance of specialised, hand-tuned
code, though. And there is a lot of specialised, hand-tuned code in SciPy
and Sage, for example. That's a different kind of game than the "running
generic Python code faster than CPython" business, however worthy that is
by itself.
Stefan