In looking at Python bytecode over the years, I've noticed that the bytecode compiler does very little to no traditional static-compiler optimization. Specifically:
* Dead code elimination (most of the time)
* Jumps to jumps, or jumps to the next instruction
* Constant propagation (most of the time)
* Common subexpression elimination
* Tail recursion
* Code hoisting/sinking

Yes, over the years more compiler optimization has been added, but it's still pretty sparse. From the little I've looked at pypy, it is pretty much the same at the Python bytecode level. That's why a Python decompiler for, say, 2.7 will work largely unmodified on the corresponding pypy 2.7 version; the same is true for 3.2 versus pypy 3.2. I understand, though, that pypy can do, and probably does, some of this optimization when it JITs.

But my question is: if traditional compiler optimization were done at the bytecode level, would it hinder, be neutral toward, or help pypy's optimization? Of course, if there were static compiler optimization of the kind described above, that might be a win when the JIT doesn't kick in (and perhaps then who cares). But I'm interested in the other situation, where both are in play.
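To make the common subexpression case concrete, here is a minimal sketch using the standard dis module; disassembling a function that repeats a subexpression shows the compiler emitting the computation twice (exact opcode names vary by Python version):

    import dis

    def f(a, b):
        # (a + b) appears twice; the bytecode compiler emits two
        # separate load/add sequences instead of reusing the first
        # result, i.e. no common subexpression elimination.
        return (a + b) * (a + b)

    dis.dis(f)

The disassembly shows two independent add operations for (a + b), which is the kind of thing a traditional static optimizer would collapse.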