Hi all, thanks for the advice. I tried what Armin proposed and would like to share my results with you:
https://bitbucket.org/amintos/pypy/commits/937254cbc554adfb748e3b5eeb44bf765d204b9d?at=default

Keeping in mind what Steve and Maciej pointed out, I restricted the
optimization to floats that are "normal" powers of two. I thought about
checking for infinity, but I could not come up with a scenario where
'x * (-)0.0' differs from 'x / (-)inf'. I haven't done exhaustive tests
yet, but some of the code where I first discovered the issue now runs a
little faster. Comments and possibly missed corner cases are welcome
(IEEE-754 can be a minefield at times).

Thanks,
Toni

On 06.11.2014 at 10:07, Armin Rigo wrote:
> Hi Toni,
>
> On 6 November 2014 10:00, Armin Rigo <ar...@tunes.org> wrote:
>> gcc seems to perform this optimization for divide-by-constant where
>> the constant is exactly a finite power of two that is not a denormal.
>> These are the cases where the result is exactly the same. We could do
>> it too.
>
> In short, what is needed is:
>
> - first, check that the optimization you want to do is exact; trying
> it out with "gcc -O2 -S" without any "-ffast-math" flags is a good way
> to find out.
>
> - if it is, then it's a matter of writing some simple code in
> rpython/jit/metainterp/optimizeopt/rewrite.py. Search for "float_mul"
> there; it will turn, for example, "f0 * -1.0" into a "float_neg"
> operation, with a comment that it is an exact optimization.
>
> - don't forget: start by adding a test to test/test_optimizebasic.py
> (search for "float_mul(-1.0, f0)" and add it nearby).
>
> You might find that hacking the PyPy JIT at this level is rather easy :-)
>
>
> A bientôt,
>
> Armin.

_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev
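P.S. For anyone following along, here is a small standalone Python sketch of the exactness condition discussed above: rewriting "x / c" as "x * (1.0 / c)" is bit-exact when c is a normal, finite power of two whose reciprocal is also a normal float. This is my own illustration, not the actual code from the patch, and the helper name is made up:

```python
import math
import sys

def is_safe_power_of_two(c):
    """Sketch (hypothetical helper, not from the patch): True if c is a
    finite, normal power of two whose reciprocal is also a normal float,
    so that x / c == x * (1.0 / c) holds bit-exactly for every x."""
    if not math.isfinite(c) or c == 0.0:
        return False
    # abs(c) == m * 2**e with 0.5 <= m < 1; a power of two has m == 0.5
    m, _ = math.frexp(abs(c))
    if m != 0.5:
        return False
    # exclude denormal c and denormal reciprocals (e.g. c == 2.0**1023),
    # mirroring gcc's "finite power of two that is not a denormal" rule
    tiny = sys.float_info.min  # smallest normal double, 2**-1022
    return abs(c) >= tiny and abs(1.0 / c) >= tiny

# spot-check that division and reciprocal-multiplication agree bit for bit
import random
random.seed(0)
for _ in range(10000):
    x = random.uniform(-1e12, 1e12)
    for c in (2.0, -8.0, 0.5, 1024.0, 2.0 ** -20):
        assert is_safe_power_of_two(c)
        assert x / c == x * (1.0 / c)
```

A constant like 3.0 fails the check because 1.0 / 3.0 is not exactly representable, and 2.0**1023 fails because its reciprocal is denormal; both would make the rewrite inexact.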