On 07/15/18 16:55, Steven D'Aprano wrote:
On Sun, 15 Jul 2018 11:22:11 -0700, James Lee wrote:

On 7/15/2018 3:43 AM, Steven D'Aprano wrote:
No. The real ten billion dollar question is how people in 2018 can
stick their head in the sand and take seriously the position that
Latin-1 (let alone ASCII) is enough for text strings.



Easy - for many people, 90% of the Python code they write is not
intended for world-wide distribution, let alone world-wide use.
But they're not making claims about what works for *them*. If they did,
I'd say "Okay, that works for you. Sorry you got left behind by
progress." They're making grand sweeping claims about what works best for
a language intended to be used by *everyone*.

"Intended to be used by *everyone*" is also a grand sweeping claim - but I get your point.

If you define progress as the direction in which the majority moves, then progress is often wrong.


Marko isn't saying "I know my use-case is atypical, but I inherited a
code base where the bytes/pseudo-text duality of Python2 strings was
helpful to me, and Python3's strict division into byte strings and text
strings is less useful."

Rather, he is making the sweeping generalisation that having a text
string type *at all* is a mistake, because the Python 2 dual bytes+pseudo-text
approach is superior, *for everyone*.


I do agree that it was a step in the wrong direction, but I also realize that it works sufficiently well for many use cases (though not all).
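
For illustration, here is roughly what that strict division looks like in practice (a minimal Python 3 session; in Python 2 the plain 'abc' literal was a single str type doing double duty as bytes and ASCII-ish text):

>>> data = b"abc"     # byte string
>>> text = "abc"      # text (Unicode) string
>>> data == text      # the two types never compare equal
False
>>> data + text       # and mixing them is an error
Traceback (most recent call last):
  ...
TypeError: can't concat str to bytes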

The smart thing would be for a language to have a switch of some sort to
turn on/off all I18N features.
The Python language has no builtin I18N features.


I don't want to argue over the definition of I18N.

Unicode is an attempt to solve at least one I18N issue - therefore Python *does* have builtin (and unavoidable) I18N features.
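
To make that concrete (a quick Python 3 session; the sample words are arbitrary):

>>> s = "naïve"              # every Python 3 str is a Unicode string
>>> len(s)
5
>>> s.encode("latin-1")      # this one happens to fit in Latin-1
b'na\xefve'
>>> "Ελληνικά".encode("latin-1")   # this one does not
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 0-7: ordinal not in range(256)

You can't opt out of that machinery: the moment a string leaves the Latin-1 (or ASCII) range, the Unicode layer is doing the work whether you asked for it or not.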

-Jim


--
https://mail.python.org/mailman/listinfo/python-list
