On Mon, Oct 31, 2016 at 12:02:54AM +1000, Nick Coghlan wrote:

> What this means is that there aren't likely to be many practical gains
> in using the "right" symbol for something, even when it's already
> defined in Unicode, as we expect the number of people learning that
> symbology *before* learning Python to be dramatically smaller than the
> proportion learning Python first and the formal mathematical symbols
> later (if they learn them at all).

That depends on the symbol. Most people do study maths in secondary 
school, where they are introduced to symbols beyond ASCII's + - * / etc, 
for instance set union and intersection (∪ and ∩), although your point 
certainly applies to some of the more exotic symbols in Unicode 
(exotic even for mathematicians).
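
As an aside, Python 3 already accepts Unicode letters in identifiers 
(PEP 3131), while operators remain ASCII-only. A minimal demonstration:

    # Unicode identifiers are legal, but there is no ∪ or ∩ operator;
    # sets spell union and intersection with ASCII | and & instead.
    α = {1, 2}
    β = {2, 3}
    print(α | β)   # {1, 2, 3}
    print(α & β)   # {2}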


> This means that instead of placing more stringent requirements on
> editing environments for Python source code in order to use non-ASCII
> input symbols, we're still far more likely to look to define a
> suitable keyword, or assign a relatively arbitrary meaning to an ASCII
> punctuation symbol

Indeed. But there are only so many ASCII punctuation marks, and digraphs 
and trigraphs become tiresome. And once people have solved the 
keyboard-entry issue, memorising the "correct" symbol is no harder than 
memorising some arbitrary ASCII sequence.


> (and that's assuming we accept that a proposal will
> see sufficient use to be worthy of new syntax in the first place,
> which is far from being a given).

I see that Perl 6 is leading the way here, supporting a large number of 
Unicode symbols and operators:

https://docs.perl6.org/language/unicode_entry.html

I must say that it is kinda cute that Perl 6 does the right thing for 
x² (the superscript is read as raising x to the power of 2).
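
Here is a toy Python sketch of that kind of desugaring, just to make 
the idea concrete; the desugar helper is hypothetical, purely for 
illustration:

    import re

    # Map superscript digits to their ASCII counterparts.
    _SUP = str.maketrans("⁰¹²³⁴⁵⁶⁷⁸⁹", "0123456789")

    def desugar(expr):
        # Rewrite a run of superscript digits as '**<digits>'.
        return re.sub("[⁰¹²³⁴⁵⁶⁷⁸⁹]+",
                      lambda m: "**" + m.group().translate(_SUP),
                      expr)

    x = 5
    print(desugar("x²"))        # x**2
    print(eval(desugar("x²")))  # 25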



-- 
Steve
