Eric V. Smith <e...@trueblade.com> added the comment:

I'll see if I can dig up the patch today. If I can find it, I'll attach it to issue 34364.

This is really the first time I've tried to write down all of the issues related to tokenizing f-strings. It does seem a little daunting, but I'm not done noodling it through. At first blush, it looks like the tokenizer would need to remember whether it's inside an f-string and switch to different rules if so. That doesn't exactly describe your average tokenizer, and I'm not sure how Python's tokenizer would need to be changed to deal with it, or how messy that change would be.

I should probably write an informational PEP about parsing f-strings. And I should include the reason I went with the "just a regular string which is later hand-parsed" approach: at the time, f-strings were a controversial topic (there were any number of reddit threads predicting doom and gloom if they were added). By parsing them as just regular strings with one simple added string prefix, existing tooling (editors, syntax highlighters, etc.) could easily skip over them just by recognizing 'f' as an additional string prefix.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39564>
_______________________________________
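The "remember it's inside an f-string and switch rules" idea above can be sketched roughly as follows. This is a hypothetical illustration, not CPython's tokenizer: a scanner over the body of an f-string literal that emits literal-text and expression tokens separately, tracking brace nesting once it switches into expression mode. All names here are made up for the example.

```python
# Hypothetical sketch (NOT CPython's tokenizer): a scanner that switches
# rules when it hits '{' inside an f-string body, instead of treating the
# whole literal as one opaque STRING token.

def tokenize_fstring(source):
    """Split an f-string body (quotes and 'f' prefix already removed)
    into ('literal', text) and ('expr', text) tokens."""
    tokens = []
    literal = []
    i = 0
    while i < len(source):
        ch = source[i]
        if ch == '{':
            if source[i + 1:i + 2] == '{':      # escaped '{{' stays literal
                literal.append('{')
                i += 2
                continue
            if literal:
                tokens.append(('literal', ''.join(literal)))
                literal = []
            # Rule switch: inside braces we are scanning an expression,
            # so track brace nesting rather than reading text literally.
            depth, j = 1, i + 1
            while j < len(source) and depth:
                if source[j] == '{':
                    depth += 1
                elif source[j] == '}':
                    depth -= 1
                j += 1
            tokens.append(('expr', source[i + 1:j - 1]))
            i = j
        elif ch == '}' and source[i + 1:i + 2] == '}':  # escaped '}}'
            literal.append('}')
            i += 2
        else:
            literal.append(ch)
            i += 1
    if literal:
        tokens.append(('literal', ''.join(literal)))
    return tokens
```

Even this toy version shows the awkward part: the main loop needs a second set of rules (the brace-depth scan) that an ordinary tokenizer never carries, and a real implementation would also have to handle quotes, backslashes, and format specs inside the braces.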
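To make the tooling point concrete: a syntax highlighter that matches string literals with a regex only needs the extra prefix letter to cope with f-strings. The patterns below are invented for illustration (no real editor is quoted), and they deliberately ignore triple quotes and double quotes to stay short.

```python
import re

# Hypothetical highlighter rules (simplified: single-quoted strings only).
# Before f-strings, a tool might have matched prefixes b/r/u; supporting
# f-strings means only adding 'f' to the prefix set, since the body is
# skipped like any other string.
OLD_STRING = re.compile(r"(?i)(?:[rbu]|br|rb)?'[^'\\]*(?:\\.[^'\\]*)*'")
NEW_STRING = re.compile(r"(?i)(?:[rbuf]|br|rb|fr|rf)?'[^'\\]*(?:\\.[^'\\]*)*'")
```

The body pattern is unchanged; the tool never has to look inside the braces, which is exactly what made the original "just a regular string" design cheap for third parties.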
I'll see if I can dig up the patch today. If I can find it, I'll attach it to issue 34364. This is really the first time I've tried to write down all of the issues related to tokenizing f-strings. It does seem a little daunting, but I'm not done noodling it through. At first blush it looks like the tokenizer would need to remember if it's inside an f-string or not and switch to different rules if so. Which doesn't exactly describe your average tokenizer, but I'm not sure how Python's tokenizer would need to be changed to deal with it, or how messy that change would be. I should probably write an informational PEP about parsing f-strings. And I should include the reason I went with the "just a regular string which is later hand-parsed" approach: at the time, f-strings were a controversial topic (there were any number of reddit threads predicting doom and gloom if they were added). By parsing them as just regular strings with one simple added string prefix, it allowed existing tooling (editors, syntax highlighters, etc.) to easily skip over them just by recognizing 'f' as an additional string prefix. ---------- _______________________________________ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue39564> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com