Pablo Galindo Salgado <pablog...@gmail.com> added the comment:

I am with Lysandros and Batuhan. The parser is tightly coupled with the C 
tokenizer, and the only way to reuse *the parser* is to make it flexible enough 
to receive a token stream of Python objects as input. That would not only have a 
performance impact on normal parsing but also raise the complexity of this task 
considerably, especially taking into account that the use case is quite 
restricted and is something you can already achieve by transforming the token 
stream back into text and using ast.parse.
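
For illustration, here is a minimal sketch of that workaround, assuming the 
token stream comes from the stdlib tokenize module (the sample source string is 
just a placeholder):

    import ast
    import io
    import tokenize

    source = "x = 1 + 2\n"

    # Produce a token stream as a list of TokenInfo objects.
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

    # Turn the tokens back into text and feed the text to ast.parse,
    # instead of trying to feed tokens to the parser directly.
    reconstructed = tokenize.untokenize(tokens)
    tree = ast.parse(reconstructed)

    print(ast.dump(tree))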

There is considerable tension between exposing parts of the compiler pipeline 
for introspection and other capabilities, and our ability to do optimizations. 
Given how painful it has been in the past to deal with this, my view is to avoid 
exposing anything in the compiler pipeline as much as possible, so we don't 
shoot ourselves in the foot in the future if we need to change stuff around.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42729>
_______________________________________