[issue35297] untokenize documentation is not correct

2019-10-16 Thread Zachary McCord


Zachary McCord added the comment:

I think anyone using the tokenize module to programmatically edit python source 
wants to use and probably does use the undocumented behavior, which should then 
be documented.

I ran into this issue because for me this manifested as a crash:

$ python3
>>> import tokenize
>>> tokenize.untokenize([(tokenize.STRING, "''", (1, 0), (1, 0), None)])
"''"
>>> tokenize.untokenize([(tokenize.STRING, "''", None, None, None)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "//virtualenv/lib/python3.6/tokenize.py", line 338, in untokenize
    out = ut.untokenize(iterable)
  File "//virtualenv/lib/python3.6/tokenize.py", line 272, in untokenize
    self.add_whitespace(start)
  File "//virtualenv/lib/python3.6/tokenize.py", line 231, in add_whitespace
    row, col = start
TypeError: 'NoneType' object is not iterable

The second call passes untokenize() input that the documentation says is 
valid, yet it crashes.
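For comparison, dropping the position fields altogether (rather than passing 
None) avoids the crash, because two-element tuples are routed to untokenize()'s 
compatibility path, which never reads positions. A minimal check:

```python
import tokenize

# A (type, string) pair takes the compatibility path in untokenize(),
# so no start/end positions are ever unpacked.
result = tokenize.untokenize([(tokenize.STRING, "''")])
print(result)  # ''
```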

--
nosy: +Zachary McCord

___
Python tracker <https://bugs.python.org/issue35297>

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35297] untokenize documentation is not correct

2019-04-18 Thread Utkarsh Gupta


Utkarsh Gupta added the comment:

I am not sure whether this is a documentation problem. If it is, I'll be happy 
to send a PR :)

--
nosy: +utkarsh2102


[issue35297] untokenize documentation is not correct

2019-04-16 Thread Caleb Donovick


Change by Caleb Donovick:


--
nosy: +donovick


[issue35297] untokenize documentation is not correct

2018-11-22 Thread Zsolt Cserna


Change by Zsolt Cserna:


--
versions: +Python 3.6


[issue35297] untokenize documentation is not correct

2018-11-22 Thread Zsolt Cserna


New submission from Zsolt Cserna:

untokenize documentation 
(https://docs.python.org/3/library/tokenize.html#tokenize.untokenize) states 
the following:

"""
Converts tokens back into Python source code. The iterable must return 
sequences with at least two elements, the token type and the token string. Any 
additional sequence elements are ignored.
"""

This last sentence is clearly not true because here:
https://github.com/python/cpython/blob/master/Lib/tokenize.py#L242

The code checks the length of each input token there, and it behaves 
differently with respect to whitespace depending on whether it is given an 
iterator of 2-tuples or tuples with more elements. When the tuples carry more 
elements (the position information), the function reproduces the whitespace 
exactly as it appeared in the original source.

So this code:
tokenize.untokenize(tokenize.tokenize(source.readline))

And this:
tokenize.untokenize([x[:2] for x in tokenize.tokenize(source.readline)])

Have different results.

I don't know whether this is a documentation issue or a bug in the module 
itself, so I created this bug report to ask for guidance.
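The divergence is easy to reproduce with a small round trip (the sample source 
and its extra space are illustrative; tokenize.tokenize() needs a bytes 
readline, hence io.BytesIO):

```python
import io
import tokenize

source = b"x = 1 +  2\n"   # note the extra space before 2
toks = list(tokenize.tokenize(io.BytesIO(source).readline))

# Full 5-tuples: the start/end positions let untokenize()
# reproduce the original spacing byte for byte.
exact = tokenize.untokenize(toks)

# 2-tuples: the compatibility path regenerates whitespace with
# its own heuristics, so the spacing changes.
approx = tokenize.untokenize([t[:2] for t in toks])

print(exact)   # b'x = 1 +  2\n'
print(approx)  # whitespace differs from the original
```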

--
assignee: docs@python
components: Documentation
messages: 330281
nosy: csernazs, docs@python
priority: normal
severity: normal
status: open
title: untokenize documentation is not correct
versions: Python 3.7
