Tomasz Maćkowiak added the comment:

untokenize also has some other problems, especially when it is using the compat() 
path: it will skip the first significant token if the ENCODING token is not 
present in the input.

For example, for input like this (simplified to a runnable form; truncating 
the tokens to 2-tuples forces the compat() path):
>>> import io
>>> from tokenize import tokenize, untokenize
>>> tokens = list(tokenize(io.BytesIO(b"1 + 2").readline))
>>> untokenize(tok[:2] for tok in tokens[1:])
'+2 '

The first significant token (the NUMBER for "1") is silently dropped from the 
output.

It also fails to adhere to another statement in the documentation:
"The iterable must return sequences with at least two elements. [...] Any 
additional sequence elements are ignored."

In the current implementation, sequences can be either 2 or 5 elements long, 
and in the 5-element variant the last 3 elements are not ignored, but are used 
to reconstruct the source code with the original whitespace (see the example 
below).
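
To illustrate: if the extra elements really were ignored, the original spacing 
could not come back, yet with full 5-tuples it does (the b"1 +  2\n" sample 
with the double space is my own; the result comes back as bytes because the 
ENCODING token is present):
>>> import io
>>> from tokenize import tokenize, untokenize
>>> untokenize(tokenize(io.BytesIO(b"1 +  2\n").readline))
b'1 +  2\n'

The start/end positions in elements 3 and 4 are what reproduce the double 
space; a stream of 2-tuples goes through compat() and gets its whitespace 
regenerated instead.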

I'm trying to prepare a patch for these issues.
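
For the skipped-first-token problem, one possible shape for a fix is to stop 
consuming the first token before the loop and chain it back into the stream. 
A simplified model with my own names, not the real Untokenizer.compat(), which 
also tracks ENCODING, INDENT/DEDENT and spacing between strings:

    from itertools import chain

    # Simplified stand-in for Untokenizer.compat(): route the explicit
    # first token through the same loop as the rest of the stream,
    # instead of unpacking it before the loop and never emitting it
    # (which is how the first token currently gets lost).
    def compat_fixed(token, iterable):
        out = []
        for toknum, tokval in chain([token], iterable):
            out.append(tokval)
        return "".join(out)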

----------
nosy: +kurazu

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16223>
_______________________________________