Andrei Alexandrescu wrote:
Spacen Jasset wrote:
Bill Baxter wrote:
On Thu, Oct 23, 2008 at 7:27 AM, Andrei Alexandrescu
<[EMAIL PROTECTED]> wrote:
Please vote up before the haters take it down, and discuss:

http://www.reddit.com/r/programming/comments/78rjk/allowing_unicode_operators_in_d_similarly_to/


(My comment cross posted here from reddit)

I think the right way to do it is not to make everything Unicode. All
the pressure on the existing symbols would be dramatically relieved by
the addition of just a handful of new symbols.

The truth is that keyboards aren't very good for inputting Unicode, and that isn't likely to change. Yes, the problem has been addressed for Asian languages with IMEs, but in my opinion IMEs are horrible to use.

Some people seem to argue it's a waste to go to Unicode only for a few
symbols. If you're going to go Unicode, you should go whole hog. I'd
argue the exact opposite. If you're going to go Unicode, it should be
done in moderation. Use as little Unicode as necessary and no more.

As for how to input Unicode -- Microsoft Word solved that problem ages ago, assuming we're talking about a small number of special characters. It's called AutoCorrect. You just register "(X)" or something unique like that as a "misspelling" whose correction is your Unicode symbol, and then every time you type "(X)" the funky Unicode character instantly replaces those chars.

Yeah, not many editors support such a feature. But it's very easy to implement. And with that one generic mechanism, your editor is ready to support input of Unicode chars in any language just by adding the right definitions.
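
For what it's worth, the mechanism really is just a substitution table. A minimal sketch in D, where the triggers "(X)", "(in)", "(U)" and the function name expandAbbrevs are made-up examples rather than anything proposed in this thread:

import std.array : replace;
import std.stdio : writeln;

// Hypothetical abbreviation table: each ASCII trigger maps to the
// Unicode symbol it expands to. The triggers are illustrative only.
string expandAbbrevs(string line)
{
    auto abbrevs = [
        "(X)"  : "×",  // cross product
        "(in)" : "∈",  // set membership
        "(U)"  : "∪",  // set union
    ];
    foreach (trigger, sym; abbrevs)
        line = line.replace(trigger, sym);
    return line;
}

void main()
{
    // An editor would do this on the fly as you type.
    writeln(expandAbbrevs("m3 = m1 (X) m2;"));  // prints: m3 = m1 × m2;
}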

--bb
I am not entirely sure that 30 (or however many) new operators would be a good thing anyway. How hard is it to write m3 = m1.crossProduct(m2) instead of m3 = m1 X m2, and how often will that come up? It's also going to make the language more difficult to learn and understand.

I have noticed that in pretty much all scientific code, the f(a, b) and a.f(b) notations fall off a readability cliff once the number of operators grows to just a handful. Lured by simple examples like yours, people don't see that as a problem until they actually have to read or write such code. Adding temporaries and the like doesn't help much, because it takes the algorithm even further from its mathematical form just to serve the notation that was the problem in the first place.
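
To make the contrast concrete, compare the two notations on even a small expression like a × (b + c) · d. A rough sketch in present-day D syntax, where '*' merely stands in for the cross product and the names Vec3, add, cross, and dot are invented for illustration (not a proposal from this thread):

import std.stdio : writeln;

// Minimal 3-vector; '*' stands in for the cross product here.
struct Vec3
{
    double x, y, z;

    Vec3 opBinary(string op)(Vec3 r) const
        if (op == "+" || op == "*")
    {
        static if (op == "+")
            return Vec3(x + r.x, y + r.y, z + r.z);
        else // cross product
            return Vec3(y*r.z - z*r.y, z*r.x - x*r.z, x*r.y - y*r.x);
    }

    // the same operations under method names, for comparison
    Vec3 add(Vec3 r) const   { return this + r; }
    Vec3 cross(Vec3 r) const { return this * r; }
    double dot(Vec3 r) const { return x*r.x + y*r.y + z*r.z; }
}

void main()
{
    auto a = Vec3(1, 0, 0), b = Vec3(0, 1, 0),
         c = Vec3(0, 0, 1), d = Vec3(1, 2, 3);

    // Method-call form: the shape of the formula disappears.
    auto s1 = a.cross(b.add(c)).dot(d);

    // Operator form: much closer to the mathematics.
    auto s2 = (a * (b + c)).dot(d);

    writeln(s1 == s2);  // true
}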

Yes, that is indeed a fair point and I agree. D is a "systems programming language" [sic], though; so what will people use it for in the main? I suggest that communities that need scientific code already have options, and that they can and do choose languages for the purpose with better support for their needs than D is likely to achieve.


If a set membership test operator and a few others are introduced, then really, to be "complete", all the set operators must be added and implemented.

Furthermore, introducing set operators should really mean that you can use them on something by default. That means implementing sets that are usable, fast, and worth using; otherwise people will keep rolling their own in many different ways.
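
As a rough sketch of what "usable by default" might look like, here is a toy set type in present-day D syntax using the existing overloadable ASCII operators ('in' for membership, '|' for union) rather than any proposed Unicode ones; the name IntSet and its methods are made up for illustration:

import std.stdio : writeln;

// Toy set of ints backed by an associative array; illustration only.
struct IntSet
{
    bool[int] elems;

    void add(int v) { elems[v] = true; }

    // membership test: `x in s`
    bool opBinaryRight(string op : "in")(int v) const
    {
        return (v in elems) !is null;
    }

    // union: `s | t`
    IntSet opBinary(string op : "|")(IntSet r) const
    {
        IntSet u;
        foreach (k, _; elems)   u.add(k);
        foreach (k, _; r.elems) u.add(k);
        return u;
    }
}

void main()
{
    IntSet s, t;
    s.add(1); s.add(2); t.add(3);
    writeln(2 in s);        // true
    writeln(3 in (s | t));  // true
}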

The Unicode symbol 'x' may look better, but is it really more readable? I think it is -- a bit, and it may be cool, but I don't think it's one of the things that will make developing software significantly easier.

I think "cool" has not a lot to do with it. For scientific code, it's closer to a necessity.
On my use of "cool" I only brought it up as this thread has a few mentions of the word and it's a bit nebulous. I, personally, am more concerened with practicality than "cool".



Andrei

What I think of Unicode symbols therefore depends on whether D should be more scientifically oriented or not. If it should be, then Unicode symbols would undoubtedly be a benefit. My responses were guided by the assumption that D was more general-purpose in nature, though.
