[issue26369] doc for unicode.decode and str.encode is unnecessarily confusing

2016-02-19 Thread Terry J. Reedy
Terry J. Reedy added the comment: I was unaware of the seemingly useless behavior you quote. The intended use for str.encode is for same-type transcoding, like this:

>>> 'abc'.encode('base64')
'YWJj\n'
>>> 'YWJj\n'.decode('base64')
'abc'

Here is a similar use for unicode.decode. >>>
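[Editor's sketch, not taken from the original message: a hedged illustration of the same-type idea under Python 2. base64 is a bytes-to-bytes codec, so str.encode works on arbitrary byte values, and unicode.decode can be used the same way with, for example, the unicode_escape codec.]

>>> '\x00\xff'.encode('base64')        # bytes in, base64 text out; no ASCII step involved
'AP8=\n'
>>> 'AP8=\n'.decode('base64')          # and back again
'\x00\xff'
>>> u'\\u0041\\u0042'.decode('unicode_escape')   # escape sequences back into characters
u'AB'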

[issue26369] doc for unicode.decode and str.encode is unnecessarily confusing

2016-02-16 Thread Ezio Melotti
Changes by Ezio Melotti: -- nosy: +ezio.melotti

[issue26369] doc for unicode.decode and str.encode is unnecessarily confusing

2016-02-16 Thread Steven D'Aprano
Steven D'Aprano added the comment: Perhaps you could suggest a specific change to the docstrings for str.encode and unicode.decode? (BTW, I presume you are aware that the equivalents of (bytes) str.encode and unicode.decode are gone in Python 3?) -- nosy: +steven.daprano
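[Editor's sketch, for context on the Python 3 remark: the cross-type methods simply do not exist there, so the confusion is limited to the 2.x line. Output shown is from a typical Python 3 interpreter; exact wording may vary slightly between versions.]

>>> b'YWJj'.encode           # Python 3: bytes has decode() but no encode()
Traceback (most recent call last):
  ...
AttributeError: 'bytes' object has no attribute 'encode'
>>> 'abc'.decode             # and str has encode() but no decode()
Traceback (most recent call last):
  ...
AttributeError: 'str' object has no attribute 'decode'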

[issue26369] doc for unicode.decode and str.encode is unnecessarily confusing

2016-02-16 Thread Ben Spiller
New submission from Ben Spiller: It's well known that lots of people struggle to write correct programs using non-ASCII strings in Python 2.x, but I think one of the main reasons for this could be very easily fixed with a small addition to the documentation for str.encode and unicode.decode,
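[Editor's sketch, assuming a Python 2 interpreter; examples are illustrative, not from the original message. One likely source of the confusion the report describes: when given a text encoding such as utf-8, both methods route through the default ASCII codec, which is where the surprising errors come from.]

>>> '\xc3\xa9'.encode('utf-8')    # the byte string is first coerced to unicode via ASCII
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
>>> u'\xe9'.decode('utf-8')       # the unicode string is first coerced to bytes via ASCII
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 0: ordinal not in range(128)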