https://github.com/python/cpython/commit/f9aac4a76d68c8eef78058b4da1252beaadd1570
commit: f9aac4a76d68c8eef78058b4da1252beaadd1570
branch: 3.12
author: Miss Islington (bot) <[email protected]>
committer: erlend-aasland <[email protected]>
date: 2025-01-06T08:59:24Z
summary:
[3.12] gh-128519: Align the docstring of untokenize() to match the docs (GH-128521) (#128532)

(cherry picked from commit aef52ca8b334ff90e8032da39f4d06e7b5130eb9)

Co-authored-by: Tomas R <[email protected]>

files:
M Lib/tokenize.py

diff --git a/Lib/tokenize.py b/Lib/tokenize.py
index b2dff8e6967094..553c1ca4388974 100644
--- a/Lib/tokenize.py
+++ b/Lib/tokenize.py
@@ -320,16 +320,10 @@ def untokenize(iterable):
     with at least two elements, a token number and token value.  If
     only two tokens are passed, the resulting output is poor.
 
-    Round-trip invariant for full input:
-        Untokenized source will match input source exactly
-
-    Round-trip invariant for limited input:
-        # Output bytes will tokenize back to the input
-        t1 = [tok[:2] for tok in tokenize(f.readline)]
-        newcode = untokenize(t1)
-        readline = BytesIO(newcode).readline
-        t2 = [tok[:2] for tok in tokenize(readline)]
-        assert t1 == t2
+    The result is guaranteed to tokenize back to match the input so
+    that the conversion is lossless and round-trips are assured.
+    The guarantee applies only to the token type and token string as
+    the spacing between tokens (column positions) may change.
     """
     ut = Untokenizer()
     out = ut.untokenize(iterable)

_______________________________________________
Python-checkins mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-checkins.python.org/
Member address: [email protected]
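For context (not part of the commit itself), the round-trip guarantee the new docstring describes can be demonstrated directly: feeding untokenize() only (type, string) pairs and re-tokenizing the output yields the same pairs, even though the regenerated source may have different spacing. The sample source string below is an arbitrary choice for illustration.

```python
from io import BytesIO
from tokenize import tokenize, untokenize

# Arbitrary sample input for the demonstration.
source = b"x = 1 + 2\nprint(x)\n"

# The "limited input" case: keep only (token type, token string) pairs.
t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]

# untokenize() returns bytes because the token stream starts with an
# ENCODING token produced by tokenize().
newcode = untokenize(t1)

# Tokenizing the regenerated bytes gives back the same (type, string)
# pairs -- the invariant the updated docstring promises.
t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
assert t1 == t2
```

Note that `newcode` need not equal `source` byte-for-byte: column positions were discarded with the 2-tuples, so only token types and strings are preserved, exactly as the revised docstring states.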
