On Tue, Jan 14, 2014 at 10:16:17AM -0800, Guido van Rossum wrote:
> Hm. It is beginning to sound more and more flawed. I also worry that
> it will bring back the nightmare of data-dependent UnicodeErrors.
> E.g. this (from tests/basic.py):
>
>     def test_asciistr_will_not_accept_codepoints_above_127(self):
>         self.assertRaises(ValueError, asciistr, 'Schrödinger')
>
> looks reasonable enough when you assume asciistr() is always used with
> a literal as argument -- but I suspect that plenty of people would
> misunderstand its purpose and write asciistr(s) as a "clever" way to
> turn a string into something that's compatible with both bytes and
> strings... :-(
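The data-dependent failure mode being described looks roughly like the sketch
below. The asciistr() stand-in and the make_token() helper are hypothetical,
modelled only on the quoted test (reject code points above 127); they are not
the actual prototype's implementation.

    # Stand-in for illustration only: reject non-ASCII input.
    # UnicodeEncodeError is a subclass of ValueError, so this matches the
    # assertRaises(ValueError, ...) check in the quoted test.
    def asciistr(s):
        s.encode('ascii')
        return s

    # The "clever" misuse: wrap arbitrary runtime data so it can be used
    # with both bytes and str APIs.
    def make_token(value):
        return asciistr(value)

    make_token('Content-Length')   # fine: pure ASCII input
    make_token('Schrödinger')      # ValueError -- but only when non-ASCII
                                   # data happens to arrive at runtime

The error surfaces only for particular input data, not at the call site where
the mistake was made, which is the "nightmare" referred to above.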
I am one of those people.

I've been trying to keep on top of this enormous multiple-thread
discussion, and although I haven't read every single post in its
entirety, I thought I understood that the purpose of asciistr was
exactly that: to produce something compatible with both bytes and
strings.

-- 
Steven