On 29.04.21 at 08:54, elas tica wrote:
On Wednesday, April 28, 2021 at 17:36:32 UTC+2, Chris Angelico wrote:

In what sense of the word "token" are you asking? The parser? You can
play around with the low-level tokenizer with the aptly-named
tokenize module.

It was a good suggestion, and the PLR doesn't mention the tokenize module. It 
should; that would go very well with the conversational style it has.



# --------------
from tokenize import tokenize
from io import BytesIO

s = """42 not\
  in [42]"""
g = tokenize(BytesIO(s.encode('utf-8')).readline)
print(*g, sep='\n')
# --------------
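
A small follow-up sketch, assuming the same s and imports as above: token.tok_name 
(from the stdlib token module) maps the numeric token types to their names, which 
makes the stream easier to read.

# --------------
from token import tok_name

# Re-create the generator; the one above is already exhausted by print().
g = tokenize(BytesIO(s.encode('utf-8')).readline)
for tok in g:
    print(tok_name[tok.type], repr(tok.string))
# --------------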

The docs are wrong when they say:

......................................
using a backslash). A backslash is illegal elsewhere on a line outside a string 
literal.
......................................


You're not passing a backslash. Try print(s).
It would be different with a raw string:

s = r"""42 not\
   in [42]"""
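
A minimal, self-contained sketch of that difference (s_raw is just a name introduced 
here for the raw variant): with the plain triple-quoted literal, the backslash/newline 
pair is consumed when the literal itself is evaluated, so the tokenizer never sees a 
backslash; with the raw literal, both characters survive and would reach the tokenizer.

# --------------
s = """42 not\
  in [42]"""
s_raw = r"""42 not\
   in [42]"""
print(repr(s))       # '42 not  in [42]'        -- no backslash reaches the tokenizer
print(repr(s_raw))   # '42 not\\\n   in [42]'   -- backslash and newline are kept
# --------------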

        Christian

