Hi,

On Sun, Oct 16, 2011 at 23:41, David Cournapeau <courn...@gmail.com> wrote:
> Interesting to know. But then, wouldn't this limit the speed gains to
> be expected from the JIT?
Yes, to some extent. It cannot give you the last bit of performance
improvement you could expect from arithmetic optimizations, but (as
usual) you already get the several-times improvement of e.g. removing
the boxing and unboxing of float objects.

Personally I'm wary of going down that path, because it means that the
results we get could suddenly change in their least significant
digit(s) when the JIT kicks in. At the very least, multiple tests in
the standard Python test suite would fail because of that.

> And I am not sure I understand how you can "not go there" if you want
> to vectorize code to use SIMD instruction sets?

I'll leave fijal to answer this question in detail :-) I suppose that
the goal is first to use SIMD when it is explicitly requested in the
RPython source, in the numpy code that operates on matrices, and not
to do the harder job of automatically unrolling and SIMD-ing loops
containing Python float operations. But even the latter could be done
without giving up the idea that all Python operations should be
performed in a bit-exact way (e.g. by using SIMD on 64-bit floats, not
on 32-bit floats).

A bientôt,

Armin.
_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
http://mail.python.org/mailman/listinfo/pypy-dev
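As a footnote to the bit-exactness concern above: floating-point
addition is not associative, so an arithmetic optimization that merely
reorders additions can already change the least significant bits of a
result. A minimal sketch, runnable in plain CPython, with values
chosen purely for illustration:

```python
# Two mathematically equal sums that differ in their last bits,
# because IEEE 754 rounding happens after each intermediate addition.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # evaluation order as written in the source
right = a + (b + c)   # reassociated order an optimizer might prefer

print(left == right)  # False: the two results differ slightly
print(left, right)
```

This is why a JIT that reassociates or vectorizes float operations can
make results "suddenly change their least significant digit(s)", even
though each individual operation is correctly rounded.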