Ah OK, thanks for the tip. I re-posted this as
https://discuss.python.org/t/contributing-the-pyston-jit/24195

On Thu, Feb 23, 2023 at 6:02 PM Brett Cannon <br...@python.org> wrote:

> FYI, you will probably get more engagement if you post this to
> discuss.python.org.
>
> On Thu, Feb 23, 2023, 10:18 Kevin Modzelewski <kev...@gmail.com> wrote:
>
>> Hello all, we on the Pyston team would like to propose contributing our JIT
>> <https://github.com/pyston/pyston/blob/pyston_main/Python/aot_ceval_jit.c>
>> into CPython main. We're interested in some initial feedback on this idea
>> before putting in the work to rebase the JIT onto 3.12 for a PEP and more
>> formal discussion.
>>
>> Our JIT is designed to be simple and to generate code quickly, so we
>> believe it sits at a good point on the design tradeoff curve for potential
>> inclusion. Its runtime behavior is intentionally kept almost identical to
>> the interpreter's: the bytecode is simply lowered to machine code, with
>> optimizations applied.
>>
>> Our JIT currently targets Python 3.7-3.10, and on 3.8 it achieves a 10%
>> speedup on macrobenchmarks (similar to 3.11). It's hard to estimate the
>> potential speedup of our JIT rebased onto 3.12 because there is overlap
>> between what our JIT does and the optimizations that have gone into the
>> interpreter since 3.8, but several of its optimizations would be additive
>> with the current performance work:
>> - Eliminating bytecode dispatch overhead (sketched below)
>> - Mostly eliminating stack-management overhead
>> - Reducing the number of reference-count operations in the interpreter
>> - Faster function calls, particularly calls to C functions
>> - More specialization opportunities, both because a JIT is not constrained
>> by the bytecode format and because it can perform dynamic specializations
>> that are not possible in an interpreter context
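>>
>> To make the first point concrete, here is a simplified, self-contained C
>> sketch (invented for illustration; the opcode names and stack machine are
>> not Pyston or CPython source) contrasting the interpreter's per-instruction
>> dispatch with the straight-line code a template JIT conceptually emits for
>> a known bytecode sequence:
>>
>>   #include <stdio.h>
>>
>>   enum { OP_PUSH_CONST, OP_ADD, OP_RETURN };
>>
>>   /* Interpreter: one dispatch (fetch + switch) per executed instruction. */
>>   static long interpret(const int *code, const long *consts)
>>   {
>>       long stack[16];
>>       int sp = 0, pc = 0;
>>       for (;;) {
>>           switch (code[pc++]) {                 /* dispatch overhead */
>>           case OP_PUSH_CONST: stack[sp++] = consts[code[pc++]]; break;
>>           case OP_ADD: sp--; stack[sp - 1] += stack[sp]; break;
>>           case OP_RETURN: return stack[--sp];
>>           }
>>       }
>>   }
>>
>>   /* What a template JIT conceptually emits for the fixed sequence
>>    * PUSH_CONST 0; PUSH_CONST 1; ADD; RETURN: the same operations, but
>>    * with the fetch/decode/branch removed and the stack slots resolved
>>    * at compile time. */
>>   static long jitted_equivalent(const long *consts)
>>   {
>>       long a = consts[0];   /* PUSH_CONST 0 */
>>       long b = consts[1];   /* PUSH_CONST 1 */
>>       return a + b;         /* ADD; RETURN  */
>>   }
>>
>>   int main(void)
>>   {
>>       const int code[] = { OP_PUSH_CONST, 0, OP_PUSH_CONST, 1,
>>                            OP_ADD, OP_RETURN };
>>       const long consts[] = { 40, 2 };
>>       printf("%ld %ld\n", interpret(code, consts), jitted_equivalent(consts));
>>       return 0;
>>   }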
>>
>> There is also room for further optimizations: in Pyston we have co-optimized
>> the interpreter+JIT combination, for example by doing more extensive
>> profiling in the interpreter. Our plan would be to submit an initial version
>> that does not contain these optimizations in order to minimize the diff, and
>> to add them later.
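>>
>> As an illustration only (the type and function names here are hypothetical,
>> not Pyston's actual data structures), interpreter-side profiling of this
>> kind can be as simple as recording the type observed at a bytecode site so
>> the JIT can later specialize on it:
>>
>>   /* Hypothetical per-site profile record, for illustration only. */
>>   typedef struct {
>>       void *observed_type;   /* type most recently seen at this site */
>>       int   hit_count;       /* consecutive times that type was seen */
>>   } SiteProfile;
>>
>>   /* Called from an interpreter opcode handler; cheap enough to run on
>>    * every execution. The JIT would specialize a site once hit_count is
>>    * high and observed_type is stable. */
>>   static void profile_site(SiteProfile *p, void *type)
>>   {
>>       if (p->observed_type == type) {
>>           p->hit_count++;
>>       } else {
>>           p->observed_type = type;   /* new or polymorphic site */
>>           p->hit_count = 1;
>>       }
>>   }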
>>
>> Our JIT uses the DynASM assembler library (part of LuaJIT) to generate
>> machine code. It currently supports macOS and Linux on 64-bit ARM and
>> x86_64; now that two architectures are supported, adding additional ones
>> is not too much work.
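>>
>> For those unfamiliar with DynASM, here is a minimal sketch of the workflow
>> (illustrative only, not taken from our sources; the function names are
>> invented). A .dasc file like this is first run through the DynASM
>> preprocessor (dynasm.lua), which rewrites the |-prefixed assembler lines
>> into calls against the generated action list; DynASM's C runtime then links
>> and encodes the final machine code:
>>
>>   /* Sketch: JIT-compile a function that returns the constant 42. */
>>   #include <stdio.h>
>>   #include <sys/mman.h>
>>   #include "dasm_proto.h"
>>   #include "dasm_x86.h"
>>
>>   |.arch x64
>>   |.actionlist actions
>>
>>   typedef long (*jit_fn)(void);
>>
>>   static jit_fn compile_return_42(void)
>>   {
>>       dasm_State *state;
>>   #define Dst &state                 /* target used by the |-lines */
>>       size_t size;
>>
>>       dasm_init(&state, 1);          /* one code section */
>>       dasm_setup(&state, actions);
>>
>>       | mov rax, 42
>>       | ret
>>
>>       dasm_link(&state, &size);      /* compute final code size */
>>       void *code = mmap(NULL, size, PROT_READ | PROT_WRITE,
>>                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>       dasm_encode(&state, code);     /* emit machine code into the buffer */
>>       mprotect(code, size, PROT_READ | PROT_EXEC);
>>       dasm_free(&state);
>>       return (jit_fn)code;
>>   }
>>
>>   int main(void)
>>   {
>>       printf("%ld\n", compile_return_42());   /* prints 42 */
>>       return 0;
>>   }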
>>
>> We think that our JIT fits nicely into the technical roadmap of the Faster
>> CPython project, though it conflicts with their plan to build a new custom
>> tracing JIT.
>>
>>
>> As mentioned, we'd love to get feedback about the overall appetite for
>> including a JIT in CPython!
>>
>> kmod