Re: Problems of Symbol Congestion in Computer Languages

2011-03-14 Thread Jason Swails
 The metric system is defined to such a ridiculous level of precision
 because we have the technology, and the need, to measure things to that
 level of precision. Standards need to be based on something which is
 universal and unchanging.


Other systems of measure (for instance, atomic units and the light-year) are
based on physical constants instead of the size of a stick in France.  As
long as these don't change relative to one another, these approaches are
formally equivalent.  We can't be sure that's true, though; if these
constants do drift relative to one another, then no system of
measurement can remain truly unchanging.

Not to disagree with Steven, as these arguments are irrelevant in (almost
all) current scientific research; just to pose thoughts.

Food for thought,
Jason
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-13 Thread Robert Maas, http://tinyurl.com/uh3t
 From: rantingrick rantingr...@gmail.com
 Anyone with half a brain understands the metric system is far
 superior (on many levels) than any of the other units of
 measurement.

Anyone with a *whole* brain can see that you are mistaken. The
current metric system has two serious flaws:

It's based on powers of ten rather than powers of two, creating a
disconnect between our communication with computers (in decimal)
and how computers deal with numbers internally (in binary). Hence
the confusion newbies have as to why if you type into the REP loop
 (+ 1.1 2.2 3.3)
you get out
 6.6000000000000005

The fundamental units are absurd national history artifacts such as
the French metre stick when maintained at a particular
temperature, and the Greenwich Observatory second as 1/(24*60*60)
of the time it took the Earth to rotate once relative to a
line-of-sight to the Sun under some circumstance long ago.

And now these have been more precisely defined as *exactly* some
inscrutable multiples of the wavelength and time-period of some
particular emission from some particular isotope under certain
particular conditions:
http://en.wikipedia.org/wiki/Metre#Standard_wavelength_of_krypton-86_emission
 (that direct definition has since been replaced by the following:)
http://en.wikipedia.org/wiki/Metre#Speed_of_light
The metre is the length of the path travelled by light in vacuum
 during a time interval of 1/299,792,458 of a second.
http://en.wikipedia.org/wiki/Second#Modern_measurements
the duration of 9,192,631,770 periods of the radiation corresponding to
 the transition between the two hyperfine levels of the ground state of
 the caesium-133 atom
Exercise to the reader: Combine those nine-decimal-digit and
ten-decimal-digit numbers appropriately to express exactly how many
wavelengths of the hyperfine transition equals one meter.
Hint: You either multiply or divide, hence if you just guess you
have one chance out of 3 of being correct.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-13 Thread Steven D'Aprano
On Sun, 13 Mar 2011 00:52:24 -0800, Robert Maas, http://tinyurl.com/uh3t
wrote:

 Exercise to the reader: Combine those nine-decimal-digit and
 ten-decimal-digit numbers appropriately to express exactly how many
 wavelengths of the hyperfine transition equals one meter. Hint: You
 either multiply or divide, hence if you just guess you have one chance
 out of 3 of being correct.


Neither. The question is nonsense. The hyperfine transition doesn't have 
a wavelength. It is the radiation emitted that has a wavelength. To work 
out the wavelength of the radiation doesn't require guessing, and it's 
not that complicated; it needs nothing more than basic maths.

Speed of light = 1 metre travelled in 1/299792458 of a second
If 9192631770 periods of the radiation takes 1 second, 1 period takes 
1/9192631770 of a second.

Combine that with the formula for wavelength:
Wavelength = speed of light * period
  = 299792458 m/s * 1/9192631770 s
  = 0.03261225571749406 metre
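
The same arithmetic as a quick Python check (the variable names are
mine, not part of any standard):

```python
c = 299_792_458                        # speed of light, m/s (exact by definition)
cs_periods_per_second = 9_192_631_770  # Cs-133 hyperfine radiation periods per second

period = 1 / cs_periods_per_second     # duration of one period, in seconds
wavelength = c * period                # metres travelled in one period
print(wavelength)                      # about 0.0326 m, i.e. roughly 3.3 cm
```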


Your rant against the metric system is entertaining but silly. Any 
measuring system requires exact definitions of units, otherwise people 
will disagree on how many units a particular thing is. The imperial 
system is a good example of this: when you say something is 15 miles, 
do you mean UK statute miles, US miles, survey miles, international 
miles, nautical miles, or something else? The US and the UK agree that a 
mile is exactly 1,760 yards, but disagree on the size of a yard. And 
let's not get started on fluid ounces (a measurement of volume!) or 
gallons...

The metric system is defined to such a ridiculous level of precision 
because we have the technology, and the need, to measure things to that 
level of precision. Standards need to be based on something which is 
universal and unchanging. Anybody anywhere in the world can (in 
principle) determine their own standard one metre rule, or one second 
timepiece, without arguments about which Roman soldier's paces defines a 
yard, or which king's forearm is a cubit.


Follow-ups set to comp.lang.python.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-05 Thread Xah Lee
On Mar 1, 3:40 pm, Chris Jones cjns1...@gmail.com wrote:

 At first it looks like something MS (Morgan Stanley..) dumped into the
 OSS lap fifteen years ago and nobody ever used it or maintained it.. so
 it takes a bit of digging to make it.. sort of work in current GNU/linux
 distributions.. especially since it knows nothing about Unicode.

 Here's the X/A+ map I came up with:

 // A+ keyboard layout: /usr/share/X11/xkb/symbols/apl
 // Chris Jones - 18/12/2010

 // Enable via:
 //      $ setxkbmap -v 10 apl

 default
 partial alphanumeric_keys modifier_keys
 xkb_symbols "APL" {

     name[Group1]= "APL";

     // Alphanumeric section
     key <TLDE> { [ grave,        asciitilde,   0x01fe,   0x017e   ] };
     key <AE01> { [ 1,            exclam,       0x01a1,   0x01e0   ] };
     key <AE02> { [ 2,            at,           0x01a2,   0x01e6   ] };
     key <AE03> { [ 3,            numbersign,   0x013c,   0x01e7   ] };
     key <AE04> { [ 4,            dollar,       0x01a4,   0x01e8   ] };
     key <AE05> { [ 5,            percent,      0x013d,   0x01f7   ] };
     key <AE06> { [ 6,            asciicircum,  0x01a6,   0x01f4   ] };
     key <AE07> { [ 7,            ampersand,    0x013e,   0x01e1   ] };
     key <AE08> { [ 8,            asterisk,     0x01a8,   0x01f0   ] };
     key <AE09> { [ 9,            parenleft,    0x01a9,   0x01b9   ] };
     key <AE10> { [ 0,            parenright,   0x015e,   0x01b0   ] };
     key <AE11> { [ minus,        underscore,   0x01ab,   0x0121   ] };
     key <AE12> { [ equal,        plus,         0x01df,   0x01ad   ] };

     key <AD01> { [ q,            Q,            0x013f,   0x01bf   ] };
     key <AD02> { [ w,            W,            0x01d7,   NoSymbol ] };
     key <AD03> { [ e,            E,            0x01c5,   0x01e5   ] };
     key <AD04> { [ r,            R,            0x01d2,   NoSymbol ] };
     key <AD05> { [ t,            T,            0x017e,   NoSymbol ] };
     key <AD06> { [ y,            Y,            0x01d9,   0x01b4   ] };
     key <AD07> { [ u,            U,            0x01d5,   NoSymbol ] };
     key <AD08> { [ i,            I,            0x01c9,   0x01e9   ] };
     key <AD09> { [ o,            O,            0x01cf,   0x01ef   ] };
     key <AD10> { [ p,            P,            0x012a,   0x01b3   ] };
     key <AD11> { [ bracketleft,  braceleft,    0x01fb,   0x01dd   ] };
     key <AD12> { [ bracketright, braceright,   0x01fd,   0x01db   ] };

     key <AC01> { [ a,            A,            0x01c1,   NoSymbol ] };
     key <AC02> { [ s,            S,            0x01d3,   0x01be   ] };
     key <AC03> { [ d,            D,            0x01c4,   NoSymbol ] };
     key <AC04> { [ f,            F,            0x015f,   0x01bd   ] };
     key <AC05> { [ g,            G,            0x01c7,   0x01e7   ] };
     key <AC06> { [ h,            H,            0x01c8,   0x01e8   ] };
     key <AC07> { [ j,            J,            0x01ca,   0x01ea   ] };
     key <AC08> { [ k,            K,            0x0127,   NoSymbol ] };
     key <AC09> { [ l,            L,            0x01cc,   0x01ec   ] };
     key <AC10> { [ semicolon,    colon,        0x01db,   0x01bc   ] };
     key <AC11> { [ apostrophe,   quotedbl,     0x01dd,   0x01bb   ] };

     key <AB01> { [ z,            Z,            0x01da,   0x01fa   ] };
     key <AB02> { [ x,            X,            0x01d8,   NoSymbol ] };
     key <AB03> { [ c,            C,            0x01c3,   0x01e3   ] };
     key <AB04> { [ v,            V,            0x01d6,   NoSymbol ] };
     key <AB05> { [ b,            B,            0x01c2,   0x01e2   ] };
     key <AB06> { [ n,            N,            0x01ce,   0x01ee   ] };
     key <AB07> { [ m,            M,            0x017c,   0x01cd   ] };
     key <AB08> { [ comma,        less,         0x01ac,   0x013c   ] };
     key <AB09> { [ period,       greater,      0x01dc,   0x01ae   ] };
     key <AB10> { [ slash,        question,     0x01af,   0x013f   ] };

     key <BKSL> { [ backslash,

Re: Problems of Symbol Congestion in Computer Languages

2011-03-05 Thread Albert van der Horst
In article mailman.487.1298918378.1189.python-l...@python.org,
Dotan Cohen  dotanco...@gmail.com wrote:
You miss the canonical bad character reuse case: = vs ==.

Had there been more meta keys, it might be nice to have a symbol for
each key on the keyboard. I personally have experimented with putting
the symbols as regular keys and the numbers as the Shifted versions.
It's great for programming.

People might be interested in the colorforth solution:

This goes the other way: the character set is limited (lowercase
letters and a few special characters) to what is needed for
programming, so the fingers never need to leave the home position,
reaching about 30 characters at most.
Different uses (defining a function versus using a function)
are indicated by color, so they don't use up characters.

http://www.colorforth.com

I was forced to use it (a development environment required it)
and it is not as bad as it sounds.


Groetjes Albert

--
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-01 Thread Dotan Cohen
On Tue, Mar 1, 2011 at 05:30, rusi rustompm...@gmail.com wrote:
 Had there been more meta keys, it might be nice to have a symbol for
 each key on the keyboard. I personally have experimented with putting
 the symbols as regular keys and the numbers as the Shifted versions.
 It's great for programming.

 Hmmm... Clever!
 Is it X or Windows?
 Can I have your setup?


It's X, on Kubuntu. I've since destroyed that layout, but you can
easily play around in /usr/share/X11/xkb/symbols/us or whichever
layout you prefer. I am working on another one, though; actually I
just started working on it yesterday. It's currently broken (I'm in the
middle of troubleshooting it) but you can see what I currently have
here:
http://dotancohen.com/eng/keyboard_layout.html

 One problem we programmers face is that keyboards were made for
 typists not programmers.

Yes, I'm trying to solve that! Ideally in the end all the brackets
including {} won't need modifier keys. Give me some feedback, please,
on that layout.


-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-01 Thread Dotan Cohen
On Tue, Mar 1, 2011 at 07:04, Xah Lee xah...@gmail.com wrote:
 hi Russ,

 there's a programmer's dvorak layout i think is bundled with linux.

 or you can do it with xmodmap on X-11 or AutoHotKey on Windows, or
 within emacs... On the mac, you can use keyboardMaestro, Quickeys, or
 just write an os-wide config file yourself. You can see tutorials and
 sample files for all these here 
 http://xahlee.org/Periodic_dosage_dir/keyboarding.html

 i'd be interested to know what Dotan Cohen uses too.


You can see what I started working on yesterday, but it's far from finished:
http://dotancohen.com/eng/keyboard_layout.html

I tried reaching you on Skype yesterday, Xah, but I think that you
blocked me suspecting that I may be a bot. Try to Skype-chat with me
at user dotancohen, I think that we can help each other.


 i tried the swapping number row with symbols a few years back. didn't
 like it so much because numbers are frequently used as well,
 especially when you need to enter a series of numbers. e.g. heavy
 math, or dates 2010-02-28. One can use the number pad but i use that
 as extra programmable buttons.


I don't like the number pad so I'm looking for another solution. I
wired up a spring-off lightswitch to the Shift key and I operate it
with my foot. It's great but it only works when I'm home: it is too
ridiculous to take with me. I'm wiring up two more for Ctrl and Alt,
too bad it's too cumbersome to have ESC, Enter, and Backspace there as
well.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-01 Thread Mark Thomas
I know someone who was involved in creating a language called A+. It
was invented at Morgan Stanley where they used Sun keyboards and had
access to many symbols, so the language did have set symbols, math
symbols, logic symbols etc. Here's a keyboard map including the
language's symbols (the red characters). http://www.aplusdev.org/keyboard.html

I have no idea if this language is still in use.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-01 Thread rusi
On Mar 1, 6:01 pm, Mark Thomas m...@thomaszone.com wrote:
 I know someone who was involved in creating a language called A+. It
 was invented at Morgan Stanley where they used Sun keyboards and had
 access to many symbols, so the language did have set symbols, math
 symbols, logic symbols etc. Here's a keyboard map including the
 language's symbols (the red characters). http://www.aplusdev.org/keyboard.html

 I have no idea if this language is still in use.

Runs (ok limps) under debian/ubuntu -- see 
http://packages.debian.org/squeeze/aplus-fsf

My own attempts at improving the scene 
http://www.emacswiki.org/emacs/AplInDebian

If anyone has any further findings on this, I'd be happy to know.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-03-01 Thread Chris Jones
On Tue, Mar 01, 2011 at 09:46:19AM EST, rusi wrote:
 On Mar 1, 6:01 pm, Mark Thomas m...@thomaszone.com wrote:

  I know someone who was involved in creating a language called A+. It
  was invented at Morgan Stanley where they used Sun keyboards and had
  access to many symbols, so the language did have set symbols, math
  symbols, logic symbols etc. Here's a keyboard map including the
  language's symbols (the red characters).
  http://www.aplusdev.org/keyboard.html

  I have no idea if this language is still in use.
 
 Runs (ok limps) under debian/ubuntu -- see 
  http://packages.debian.org/squeeze/aplus-fsf

 My own attempts at improving the scene 
  http://www.emacswiki.org/emacs/AplInDebian

 If anyone has any further findings on this, I'd be happy to know.

Well.. a couple months back I got to the point where I'd really had it
with the anglo-centric verbosity of common programming languages (there
are days when even python makes me think of COBOL.. ugh..) and I took
a look at A+.

At first it looks like something MS (Morgan Stanley..) dumped into the
OSS lap fifteen years ago and nobody ever used it or maintained it.. so
it takes a bit of digging to make it.. sort of work in current GNU/linux
distributions.. especially since it knows nothing about Unicode.

Here's the X/A+ map I came up with:

// A+ keyboard layout: /usr/share/X11/xkb/symbols/apl
// Chris Jones - 18/12/2010

// Enable via:
//  $ setxkbmap -v 10 apl

default
partial alphanumeric_keys modifier_keys
xkb_symbols "APL" {

    name[Group1]= "APL";

    // Alphanumeric section
    key <TLDE> { [ grave,        asciitilde,   0x01fe,   0x017e   ] };
    key <AE01> { [ 1,            exclam,       0x01a1,   0x01e0   ] };
    key <AE02> { [ 2,            at,           0x01a2,   0x01e6   ] };
    key <AE03> { [ 3,            numbersign,   0x013c,   0x01e7   ] };
    key <AE04> { [ 4,            dollar,       0x01a4,   0x01e8   ] };
    key <AE05> { [ 5,            percent,      0x013d,   0x01f7   ] };
    key <AE06> { [ 6,            asciicircum,  0x01a6,   0x01f4   ] };
    key <AE07> { [ 7,            ampersand,    0x013e,   0x01e1   ] };
    key <AE08> { [ 8,            asterisk,     0x01a8,   0x01f0   ] };
    key <AE09> { [ 9,            parenleft,    0x01a9,   0x01b9   ] };
    key <AE10> { [ 0,            parenright,   0x015e,   0x01b0   ] };
    key <AE11> { [ minus,        underscore,   0x01ab,   0x0121   ] };
    key <AE12> { [ equal,        plus,         0x01df,   0x01ad   ] };

    key <AD01> { [ q,            Q,            0x013f,   0x01bf   ] };
    key <AD02> { [ w,            W,            0x01d7,   NoSymbol ] };
    key <AD03> { [ e,            E,            0x01c5,   0x01e5   ] };
    key <AD04> { [ r,            R,            0x01d2,   NoSymbol ] };
    key <AD05> { [ t,            T,            0x017e,   NoSymbol ] };
    key <AD06> { [ y,            Y,            0x01d9,   0x01b4   ] };
    key <AD07> { [ u,            U,            0x01d5,   NoSymbol ] };
    key <AD08> { [ i,            I,            0x01c9,   0x01e9   ] };
    key <AD09> { [ o,            O,            0x01cf,   0x01ef   ] };
    key <AD10> { [ p,            P,            0x012a,   0x01b3   ] };
    key <AD11> { [ bracketleft,  braceleft,    0x01fb,   0x01dd   ] };
    key <AD12> { [ bracketright, braceright,   0x01fd,   0x01db   ] };

    key <AC01> { [ a,            A,            0x01c1,   NoSymbol ] };
    key <AC02> { [ s,            S,            0x01d3,   0x01be   ] };
    key <AC03> { [ d,            D,            0x01c4,   NoSymbol ] };
    key <AC04> { [ f,            F,            0x015f,   0x01bd   ] };
    key <AC05> { [ g,            G,            0x01c7,   0x01e7   ] };
    key <AC06> { [ h,            H,            0x01c8,   0x01e8   ] };
    key <AC07> { [ j,            J,            0x01ca,   0x01ea   ] };
    key <AC08> { [ k,            K,            0x0127,   NoSymbol ] };
    key <AC09> { [ l,            L,            0x01cc,   0x01ec   ] };
    key <AC10> { [ semicolon,    colon,        0x01db,   0x01bc   ]

Re: Problems of Symbol Congestion in Computer Languages

2011-02-28 Thread rusi
On Feb 17, 3:07 am, Xah Lee xah...@gmail.com wrote:
 might be interesting.

 〈Problems of Symbol Congestion in Computer Languages (ASCII Jam;
 Unicode; Fortress)〉 http://xahlee.org/comp/comp_lang_unicode.html

Haskell is slowly moving this way; see, for example,
http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#unicode-syntax

But it's not so easy: the lambda does not work straight off -- see
http://hackage.haskell.org/trac/ghc/ticket/1102
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-28 Thread Dotan Cohen
You miss the canonical bad character reuse case: = vs ==.

Had there been more meta keys, it might be nice to have a symbol for
each key on the keyboard. I personally have experimented with putting
the symbols as regular keys and the numbers as the Shifted versions.
It's great for programming.
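
(For what it's worth, Python itself sidesteps the worst of the = vs ==
confusion by refusing assignment in expression position; a small sketch,
using the standard ast module to show the parser's reaction:)

```python
import ast

x = 1
print(x == 2)  # comparison: False

# Using "=" where "==" was meant is not a silent bug in Python,
# it is rejected outright at parse time:
try:
    ast.parse("if x = 2:\n    pass")
except SyntaxError:
    print("if x = 2: ...  ->  SyntaxError")
```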


-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-28 Thread rusi
On Feb 28, 11:39 pm, Dotan Cohen dotanco...@gmail.com wrote:
 You miss the canonical bad character reuse case: = vs ==.

 Had there been more meta keys, it might be nice to have a symbol for
 each key on the keyboard. I personally have experimented with putting
 the symbols as regular keys and the numbers as the Shifted versions.
 It's great for programming.

Hmmm... Clever!
Is it X or Windows?
Can I have your setup?

One problem we programmers face is that keyboards were made for
typists not programmers.
Another is that when we move from 'hi-level' questions eg code reuse
-- to lower and lower -- eg ergonomics of reading and writing code --
the focus goes from the center of consciousness to the periphery and
we miss how many inefficiencies there are in our semi-automatic
actions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-28 Thread Xah Lee
On Feb 28, 7:30 pm, rusi rustompm...@gmail.com wrote:
 On Feb 28, 11:39 pm, Dotan Cohen dotanco...@gmail.com wrote:

  You miss the canonical bad character reuse case: = vs ==.

  Had there been more meta keys, it might be nice to have a symbol for
  each key on the keyboard. I personally have experimented with putting
  the symbols as regular keys and the numbers as the Shifted versions.
  It's great for programming.

 Hmmm... Clever!
 Is it X or Windows?
 Can I have your setup?

hi Russ,

there's a programmer's dvorak layout i think is bundled with linux.

or you can do it with xmodmap on X-11 or AutoHotKey on Windows, or
within emacs... On the mac, you can use keyboardMaestro, Quickeys, or
just write an os-wide config file yourself. You can see tutorials and
sample files for all these here 
http://xahlee.org/Periodic_dosage_dir/keyboarding.html

i'd be interested to know what Dotan Cohen uses too.

i tried the swapping number row with symbols a few years back. didn't
like it so much because numbers are frequently used as well,
especially when you need to enter a series of numbers. e.g. heavy
math, or dates 2010-02-28. One can use the number pad but i use that
as extra programmable buttons.

 Xah

 One problem we programmers face is that keyboards were made for
 typists not programmers.
 Another is that when we move from 'hi-level' questions eg code reuse
 -- to lower and lower -- eg ergonomics of reading and writing code --
 the focus goes from the center of consciousness to the periphery and
 we miss how many inefficiencies there are in our semi-automatic
 actions.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Ian

On 18/02/2011 07:50, Chris Jones wrote:

  Always
struck me as odd that a country like Japan for instance, with all its
achievements in the industrial realm, never came up with one single
major piece of software.

I think there are two reasons for this.

1) Written Japanese is so hard that the effective illiteracy rate in 
Japan is astonishingly high.


Both the UK and Japan claim a 99% literacy rate, but the UK has 10-20%
who find reading so hard they don't read. In Japan, by some estimates,
the proportion is about half. Figures from memory.


Result - too many users struggle to read instructions and titles on 
screen. Help texts? Forgetaboutit.


2) Culture. In the West, a designer will decide the architecture of a
major system, and it is a basis for debate and progress. If he gets it
wrong, it is not a personal disgrace or career-limiting. If it is
nearly right, then that is a major success. In Japan, the architecture
has to be debated and agreed. This takes ages, costs lots, and
ultimately fails. The failure is because architecture is always a
trade-off - there is no perfect answer.

I was involved in one software project where a major Japanese company
had done the feasibility study. It was much, much too expensive. The UK
company I worked for was able not only to win the bid, but to complete
the job profitably for less than the Japanese giant had charged for the
feasibility study. We ignored their study - we did not have time to
read through the documentation, which took 10 feet of shelf space to
house.

Ian

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Tim Wintle
On Fri, 2011-02-18 at 12:27 +, Ian wrote:
 2) Culture. In the West, a designer will decide the architecture of a 
 major system, and it is a basis
 for debate and progress. If he gets it wrong, it is not a personal 
 disgrace or career limiting.  If it is
 nearly right, then that is a major success. In Japan, the architecture 
 has to be a debated and agreed.
 This takes ages, costs lots, and ultimately fails.  The failure is 
 because architecture is always a trade off -
 there is no perfect answer.

I find this really interesting - we spend quite a lot of time studying
the Toyota production system and seeing how we can do programming work
in a similar way, and it's worked fairly well for us (Kanban, Genchi
Genbutsu, eliminating Muda & Mura, etc.).

I would have expected Japanese software to have worked quite smoothly,
with continuous improvement taking in everybody's opinions etc -
although I suppose that if production never starts because the
improvements are done to a spec, rather than the product, it would be a
massive hindrance.

Tim Wintle

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Westley Martínez
On Mon, 2011-02-21 at 01:08 +, BartC wrote:
 
 Westley Martínez aniko...@gmail.com wrote in message 
 news:mailman.202.1298081685.1189.python-l...@python.org...
 
 
  You have provided me with some well thought out arguments and have
  stimulated my young programmer's mind, but I think we're coming from
  different angles. You seem to come from a more math-minded, idealist
  angle, while I come from a more practical angle. Being a person who has
  had to deal with the í in my last name
 
 What purpose does the í serve in your last name, and how is it different 
 from i?
 
 (I'd have guessed it indicated stress, but it looks Spanish and I thought 
 that syllable was stressed anyway.)
 
 -- 
 Bartc
  
 
I don't know. I don't speak Spanish, but to my knowledge it's not a
critical diacritic like in some languages.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread rantingrick
On Feb 20, 7:08 pm, BartC b...@freeuk.com wrote:
 Westley Martínez aniko...@gmail.com wrote in message

 news:mailman.202.1298081685.1189.python-l...@python.org...

  You have provided me with some well thought out arguments and have
  stimulated my young programmer's mind, but I think we're coming from
  different angles. You seem to come from a more math-minded, idealist
  angle, while I come from a more practical angle. Being a person who has
  had to deal with the í in my last name

 What purpose does the í serve in your last name, and how is it different
 from i?

Simple, it does not have a purpose. Well, that is,  except to give the
*impression* that a few pseudo intellectuals have a reason to keep
their worthless tenure at universities world wide. It's window
dressing, and nothing more.

 (I'd have guessed it indicated stress, but it looks Spanish and I thought
 that syllable was stressed anyway.)

The ascii char i would suffice. However some languages feel it
necessary to create an ongoing tutorial of the language. Sure French
and Latin can sound pretty, however if all you seek is pretty
music then listen to music. Language should be for communication and
nothing more.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Westley Martínez
On Mon, 2011-02-21 at 11:28 -0800, rantingrick wrote:
 On Feb 20, 7:08 pm, BartC b...@freeuk.com wrote:
  Westley Martínez aniko...@gmail.com wrote in message
 
  news:mailman.202.1298081685.1189.python-l...@python.org...
 
   You have provided me with some well thought out arguments and have
   stimulated my young programmer's mind, but I think we're coming from
   different angles. You seem to come from a more math-minded, idealist
   angle, while I come from a more practical angle. Being a person who has
   had to deal with the í in my last name
 
  What purpose does the í serve in your last name, and how is it different
  from i?
 
 Simple, it does not have a purpose. Well, that is,  except to give the
 *impression* that a few pseudo intellectuals have a reason to keep
 their worthless tenure at universities world wide. It's window
 dressing, and nothing more.
 
  (I'd have guessed it indicated stress, but it looks Spanish and I thought
  that syllable was stressed anyway.)
 
 The ascii char i would suffice. However some languages feel it
 necessary to create an ongoing tutorial of the language. Sure French
 and Latin can sound pretty, however if all you seek is pretty
 music then listen to music. Language should be for communication and
 nothing more.
Nicely said; you're absolutely right.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Alexander Kapps

On 22.02.2011 00:34, Westley Martínez wrote:

On Mon, 2011-02-21 at 11:28 -0800, rantingrick wrote:



The ascii char i would suffice. However some languages feel it
necessary to create an ongoing tutorial of the language. Sure French
and Latin can sound pretty, however if all you seek is pretty
music then listen to music. Language should be for communication and
nothing more.

Nicely said; you're absolutely right.


http://en.wikipedia.org/wiki/Newspeak


(Babbling Rick is just an Orwellian Nightmare, try to ignore him)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-21 Thread Westley Martínez
On Tue, 2011-02-22 at 00:48 +0100, Alexander Kapps wrote:
 On 22.02.2011 00:34, Westley Martínez wrote:
  On Mon, 2011-02-21 at 11:28 -0800, rantingrick wrote:
 
  The ascii char i would suffice. However some languages feel it
  necessary to create an ongoing tutorial of the language. Sure French
  and Latin can sound pretty, however if all you seek is pretty
  music then listen to music. Language should be for communication and
  nothing more.
  Nicely said; you're absolutely right.
 
 http://en.wikipedia.org/wiki/Newspeak
 
 
 (Babbling Rick is just an Orwellian Nightmare, try to ignore him)
 
I don't quite get what you mean, but I'm just loving the troll.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-20 Thread BartC



Westley Martínez aniko...@gmail.com wrote in message 
news:mailman.202.1298081685.1189.python-l...@python.org...




You have provided me with some well thought out arguments and have
stimulated my young programmer's mind, but I think we're coming from
different angles. You seem to come from a more math-minded, idealist
angle, while I come from a more practical angle. Being a person who has
had to deal with the í in my last name


What purpose does the í serve in your last name, and how is it different 
from i?


(I'd have guessed it indicated stress, but it looks Spanish and I thought 
that syllable was stressed anyway.)


--
Bartc


--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-20 Thread alex23
rantingrick rantingr...@gmail.com wrote:
 You lack vision.

And you lack education.

 Evolution is the pursuit of perfection at the expense of anything and
 everything!

Evolution is the process by which organisms change over time through
genetically shared traits. There is no 'perfection', there is only
'fitness', that is, survival long enough to reproduce. Fitness is not
something any of your ideas possess.

The rest of your conjecture about my opinions and beliefs is just pure
garbage. You'd get far fewer accusations of being a troll if you
stopped putting words into other people's mouths; then we'd just think
you're exuberantly crazy.

Also, Enough! With! The! Hyperbole! Already! Visionary is _never_ a
self-appointed title.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-20 Thread Steven D'Aprano
On Sun, 20 Feb 2011 20:52:36 -0800, alex23 wrote:

 Also, Enough! With! The! Hyperbole! Already! Visionary is _never_ a
 self-appointed title.

You only say that because you lack the vision to see just how visionary 
rantingrick's vision is1!11!



Followups set to c.l.p.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-19 Thread Nicholas Devenish

On 19/02/2011 07:41, Westley Martínez wrote:

Simply remove 'dvorak-' to get qwerty. It allows you to use the right
Alt key as AltGr. For example:
AltGr+' i = í
AltGr+c = ç
AltGr+s = ß

I don't work on Windows or Mac enough to have figured out how to do on
those platforms, but I'm sure there's a simple way.
Again, it's the funky symbols that would be difficult to input.


On mac, the acute accent is Alt-e + vowel, so Alt-e i.

This seems to work universally, regardless of gui application or 
terminal. I don't work with X applications enough to know if they work 
there, however.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-19 Thread Nicholas Devenish

On 18/02/2011 10:26, Steven D'Aprano wrote:


Agreed. I'd like Python to support proper mathematical symbols like ∞ for
float('inf'), ≠ for not-equal, ≤ for less-than-or-equal, and ≥ for
greater-than-or-equal.



This would be joyful! At least with the subset of operations that 
already exist/exist as operators, the possibility of using these 
wouldn't affect anyone not using them (like the set/intersection 
notation mentioned in another post).


I'm not very optimistic about anything like this ever being accepted 
into python main, however (I can't imagine it being terribly complicated 
to add to the accepted language, though).

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steven D'Aprano
On Thu, 17 Feb 2011 21:43:37 -0800, John Nagle wrote:

  I've used both the MIT Space Cadet keyboard on a Symbolics LISP
 machine, and the Stanford SAIL keyboard. There's something to be said
 for having more mathematical symbols.

Agreed. I'd like Python to support proper mathematical symbols like ∞ for 
float('inf'), ≠ for not-equal, ≤ for less-than-or-equal, and ≥ for 
greater-than-or-equal. 

They would have to be optional, because most editors still make it 
difficult to enter such characters, and many fonts are lacking in glyphs, 
but still, now that Python supports non-ASCII encodings in source files, 
it could be done.
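The claim about non-ASCII source support is easy to verify directly. The sketch below (Python 3 assumed) shows that non-ASCII text is already legal in literals and identifiers, while `≥` as an operator is still rejected by the grammar:

```python
# -*- coding: utf-8 -*-
# Non-ASCII is already legal in Python source: in string literals and,
# in Python 3, in identifiers. Operators, however, are fixed by the
# grammar, so '≥' cannot be used in place of '>='.
π = 3.14159            # a non-ASCII identifier (Python 3)
message = "x ≥ y"      # a non-ASCII character in a string literal

try:
    compile("1 ≥ 0", "<example>", "eval")
    result = "accepted"
except SyntaxError:
    result = "'≥' is not a Python operator"

print(message, "|", result)
```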

  Some programs use a bigger character set.  MathCAD,
 for example, has a broader range of mathematical symbols on the input
 side than ASCII offers.  They're not decorative; MathCAD has different
 = symbols for assignment, algebraic equivalence, identity, and
 comparison.

  I've previously mentioned that Python suffers in a few places
 from unwanted overloading.  Using + for concatenation of strings, then
 extending that to vectors, resulted in undesirable semantics.  + on
 arrays from numpy, and on built-in vectors behave quite differently. 
 A dedicated concatenation operator would have avoided that mess.

I don't quite agree that the mess is as large as you make out, but yes, 
more operators would be useful.
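The overloading mismatch John describes can be demonstrated without numpy itself; in this sketch a hypothetical `Vec` class stands in for an array type that defines `+` elementwise:

```python
# For built-in sequences, '+' means concatenation:
a = [1, 2, 3]
b = [4, 5, 6]
print(a + b)            # [1, 2, 3, 4, 5, 6]

# An array-like type (standing in for numpy.ndarray, which is not
# assumed to be installed here) gives '+' the other meaning:
class Vec(list):
    def __add__(self, other):
        return Vec(x + y for x, y in zip(self, other))

print(Vec(a) + Vec(b))  # [5, 7, 9]
```

The same spelling thus means two incompatible operations depending on operand type, which is the case for a dedicated concatenation operator.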


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steven D'Aprano
On Fri, 18 Feb 2011 02:50:11 -0500, Chris Jones wrote:

 Always
 struck me as odd that a country like Japan for instance, with all its
 achievements in the industrial realm, never came up with one single
 major piece of software.

I think you are badly misinformed.

The most widespread operating system in the world is not Windows. It's 
something you've probably never heard of, from Japan, called TRON.

http://www.linuxinsider.com/story/31855.html
http://web-japan.org/trends/science/sci030522.html

Japan had an ambitious, but sadly failed, Fifth Generation Computing 
project:

http://en.wikipedia.org/wiki/Fifth_generation_computer 
http://vanemden.wordpress.com/2010/08/21/who-killed-prolog/

They did good work, but unfortunately were ahead of their time and the 
project ended in failure.

Japan virtually *owns* the video game market. Yes, yes, Americans publish 
a few high-profile first-person shooters. For every one of them, there's 
about a thousand Japanese games that never leave the country.

There's no shortages of programming languages which have come out of 
Japan:

http://hopl.murdoch.edu.au/findlanguages.prx?id=jp&which=ByCountry
http://no-sword.jp/blog/2006/12/programming-without-ascii.html

The one you're most likely to have used or at least know of is Ruby.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Giacomo Boffi
Chris Jones cjns1...@gmail.com writes:

 [...] most any software of note appears to have come out of cultures
 where English is either the native language, or where the native
 language is either relatively close to English...

i do acknowledge your "most", but how do you spell "Moon" in Portuguese?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Giacomo Boffi
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes:

 A dedicated concatenation operator would have avoided that mess.

 I don't quite agree that the mess is as large as you make out, but yes, 
 more operators would be useful.

am i wrong, or is | still available?
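For what it's worth, `|` is already spoken for on several built-in types, which is quick to check:

```python
# '|' already means set union and bitwise-or, so reusing it for
# concatenation would add yet another overload:
print({1, 2} | {2, 3})   # set union
print(5 | 2)             # bitwise or: 0b101 | 0b010 == 0b111 == 7

# '+' is deliberately undefined for sets, which is why union needs '|':
try:
    {1, 2} + {2, 3}
except TypeError:
    print("sets do not support +")
```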
-- 
love is a one-way feeling. sometimes one road begins where
another ends, and vice versa   -- Caldana, in IFQ
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Xah Lee
On 2011-02-16, Xah Lee  wrote:
│ The vast majority of computer languages use ASCII as their character set.
│ This means they jam a multitude of operators into about 20 symbols.
│ Often, a symbol has multiple meanings depending on context.

On 2011-02-17, rantingrick wrote:
…

On 2011-02-17, Cthun wrote:
│ And you omitted the #1 most serious objection to Xah's proposal,
│ rantingrick, which is that to implement it would require unrealistic
│ things such as replacing every 101-key keyboard with 10001-key
keyboards
│ and training everyone to use them. Xah would have us all replace our
│ workstations with machines that resemble pipe organs, rantingrick,
or
│ perhaps the cockpits of the three surviving Space Shuttles. No doubt
│ they'd be enormously expensive, as well as much more difficult to
learn
│ to use, rantingrick.

keyboard shouldn't be a problem.

Look at APL users.
http://en.wikipedia.org/wiki/APL_(programming_language)
they are happy campers.

Look at Mathematica, which has supported a lot of math symbols since v3
(~1997), before unicode became popular.
see:
〈How Mathematica does Unicode?〉
http://xahlee.org/math/mathematica_unicode.html

Word processors also automatically do symbols such as “curly quotes”,
the trademark sign ™, copyright sign ©, arrow →, bullet •, ellipsis …,
etc., and the number of people who produce documents with these chars
is probably more than the number of programmers.

In emacs, I recently also wrote a mode that lets you easily input a few
hundred unicode chars.
〈Emacs Math Symbols Input Mode (xmsi-mode)〉
http://xahlee.org/emacs/xmsi-math-symbols-input.html

The essence is that you just need an input system.

Look at Chinese, Japanese, Korean, or Arabic. They happily type
without requiring that every symbol they use have a corresponding
key on the keyboard. For some languages, such as Chinese, that's
impossible or impractical.

When an input system is well designed, it can actually be more
efficient than keyboard combinations for typing special symbols (such
as Mac OS X's Opt key, or Windows's AltGr). That's because an input
system can be context based, looking at adjacent text to guess what
you want.

For example, when you type >= in python, the text editor can
automatically change it to ≥ (when it detects that it's appropriate,
e.g. there's an “if” nearby).

Chinese phonetic input systems use this extensively. The abbrev
systems in word processors and emacs are also a form of this. I wrote
some thoughts about this here:

〈Designing a Math Symbols Input System〉
http://xahlee.org/comp/design_math_symbol_input.html
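The context-based substitution described above can be sketched in a few lines. This is a toy illustration only; the regex and symbol table are invented for the example, not any real editor's input method:

```python
import re

# Map ASCII digraphs to the single Unicode symbols they abbreviate.
DIGRAPHS = {">=": "≥", "<=": "≤", "!=": "≠"}

def prettify(line):
    # Replace a digraph only when it isn't preceded by another operator
    # character (a real input method would inspect far more context).
    return re.sub(r"(?<![<>=!])(>=|<=|!=)",
                  lambda m: DIGRAPHS[m.group(1)], line)

print(prettify("if x >= y and y != z:"))  # if x ≥ y and y ≠ z:
```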

Xah Lee
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Xah Lee

Chris Jones wrote:
«.. from a quite different perspective it may be worth noting that
practically all programming languages (not to mention the attached
documentation) are based on the English language. And interestingly
enough, most any software of note appears to have come out of cultures
where English is either the native language, or where the native
language is either relatively close to English.. Northern Europe
mostly.. and not to some small extent, countries where English is well-
established as a universal second language, such as India. Always
struck me as odd that a country like Japan for instance, with all its
achievements in the industrial realm, never came up with one single
major piece of software.»

btw, English is one of India's two official languages. It's used
between Indians, and I think it's rare or non-existent for a college
in India to teach in a local language. (This is second-hand knowledge,
from Wikipedia and experience with Indian co-workers.)

I also wondered why Japan doesn't seem to have created major
software or OSes. Though, Ruby was invented in Japan. I do think they
have some OSes, just not that popular... for special-purpose
OSes, they have quite a lot... from Mitsubishi, NEC, etc., in their
huge robotics industry among others. (Again, this is all second-hand
knowledge.)

... I recall having read about non-English computer languages that
appeared recently...

Xah Lee
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steve Schafer
On Thu, 17 Feb 2011 20:47:57 -0800 (PST), rantingrick
rantingr...@gmail.com wrote:

What is evolution?

Evolution is the pursuit of perfection at the expense of anything and
everything!

No, evolution is the pursuit of something just barely better than what
the other guy has. Evolution is about gaining an edge, not gaining
perfection.

Perfect is the enemy of good.

-Steve Schafer
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread rantingrick
On Feb 18, 7:55 am, Steve Schafer st...@fenestra.com wrote:
 On Thu, 17 Feb 2011 20:47:57 -0800 (PST), rantingrick

 rantingr...@gmail.com wrote:
 What is evolution?

 Evolution is the pursuit of perfection at the expense of anything and
 everything!

 No, evolution is the pursuit of something just barely better than what
 the other guy has.

You fail to see from a larger perspective. You still see through the
eyes of part and not a whole. Each cog that is part of a machine is
just a means to an end. You are attempting to elevate one cog above
all other cogs, heck, you want to elevate one cog above the machine.
You are nothing, and until you accept your nothingness, only then will
you understand your place in the greater game.

 Evolution is about gaining an edge, not gaining
 perfection.

Evolution is about one cog gaining an edge over another, yes. However
the system itself moves toward perfection at the expense of any and
all cogs.

 Perfect is the enemy of good.

No. Perfect is the enemy of YOU. You are not able to be perfect
therefore you claim that perfection is evil to justify your meaningless
existence. And who are YOU to weigh good an evil? What do YOU know
about the Universe that gives you these powers of judgment? Do you
think with your measly 60-100 years of lifetime you can contemplate
the meaning of good and evil in a Universe that has existed for eons?
Get REAL! We only see and know a small fraction of what the Universe
really is. A huge portion of the Universe cannot even be accounted
for. And you parade around like some all-knowing being with all the
answers and full of selfishness and vanity. Ha!

If perfection is evil then what is the pursuit of perfection: AKA:
gaining an edge?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread rantingrick
On Feb 18, 8:22 am, rantingrick rantingr...@gmail.com wrote:

  Perfect is the enemy of good.

Do you think that just because something has a negative impact towards
you (or your existence) that the *something* is then evil? Take an
animal for instance: We kill animals for sustenance. The act of
killing the animal is negative from the viewpoint of the animal
however it does not make the killing evil.

Of course if the animal could speak i am sure it will tell you that
you are evil for ending its life. However the animal would be speaking
from a selfish viewpoint. The animal fails to see that the human is
more important than itself in the evolutionary chain. And by consuming
the flesh of the animal the human can continue to evolve more
knowledge. However the animal's life was not in vain, for its flesh
contributed to the life of an intelligent agent who then was able to
further the knowledge base of evolution far beyond what the animal
could have ever achieved!

Likewise *we* as intelligent agents are the tools of an intelligent
evolution. When the human becomes insufficient (from the rise of AI)
then the human will become the consumable. You are like the animal,
not understanding your place in the universe. You fail to see the
Universe from OUTSIDE your *own* selfish interpretation. You cannot
wield the meanings of good and evil from a selfish and naive
interpretation of the Universe. You must look much deeper.

You are not *something* unless you first realize you are *nothing*.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steve Schafer
On Fri, 18 Feb 2011 06:22:32 -0800 (PST), rantingrick
rantingr...@gmail.com wrote:

Evolution is about one cog gaining an edge over another, yes. However
the system itself moves toward perfection at the expense of any and
all cogs.

Um, do you actually know anything about (biological) evolution? There is
no evidence of an overall goal of any kind, perfect or not.

 * There are many examples of evolutionary arms races in nature; e.g.,
   the cheetah and the gazelle, each gaining incrementally on the other,
   and a thousand generations later, each in essentially the same place
   relative to the other that they started from, only with longer legs
   or a more supple spine.

 * There are many adaptations that confer a serious DISadvantage in one
   aspect of survivability, that survive because they confer an
   advantage in another (sickle-cell disease in humans, a peacock's
   tail, etc.).

If perfection is evil then what is the pursuit of perfection: AKA:
gaining an edge?

1) I never said that perfection is evil; those are entirely your words.

2) If you don't already see the obvious difference between pursuit of
perfection and gaining an edge, then I'm afraid there's nothing I can
do or say to help you.

-Steve Schafer
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Stephen Hansen
On 2/17/11 7:42 PM, Littlefield, Tyler wrote:
My intention was to educate him on the pitfalls of multiplicity.
 O. that's what you call that long-winded nonsense? Education? You must
 live in America. Can I hazard a guess that your universal language might
 be english? Has it not ever occured to you that people take pride in
 their language? It is part of their culture. And yet you rant on about
 selfishness?

This is an old-rant, there's nothing new to it. Rick's racist and
imperialistic anti-Unicode rants have all been fully hashed out months
if not years ago, Tyler. There's really nothing more to say about it.

He doesn't get it.

-- 

   Stephen Hansen
   ... Also: Ixokai
   ... Mail: me+list/python (AT) ixokai (DOT) io
   ... Blog: http://meh.ixokai.io/



signature.asc
Description: OpenPGP digital signature
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Cthun

On 18/02/2011 7:43 AM, Xah Lee wrote:

On 2011-02-17, Cthun wrote:
│ And you omitted the #1 most serious objection to Xah's proposal,
│ rantingrick, which is that to implement it would require unrealistic
│ things such as replacing every 101-key keyboard with 10001-key
keyboards


What does your classic unsubstantiated and erroneous claim have to do 
with Lisp, Lee?

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Chris Jones
On Fri, Feb 18, 2011 at 06:40:17AM EST, Steven D'Aprano wrote:
 On Fri, 18 Feb 2011 02:50:11 -0500, Chris Jones wrote:

  Always struck me as odd that a country like Japan for instance, with
  all its achievements in the industrial realm, never came up with one
  single major piece of software.

 I think you are badly misinformed.
 
 The most widespread operating system in the world is not Windows. It's 
 something you've probably never heard of, from Japan, called TRON.
 
 http://www.linuxinsider.com/story/31855.html
 http://web-japan.org/trends/science/sci030522.html
 
 Japan had an ambitious, but sadly failed, Fifth Generation Computing 
 project:
 
 http://en.wikipedia.org/wiki/Fifth_generation_computer 
 http://vanemden.wordpress.com/2010/08/21/who-killed-prolog/
 
 They did good work, but unfortunately were ahead of their time and the 
 project ended in failure.
 
 Japan virtually *owns* the video game market. Yes, yes, Americans publish 
 a few high-profile first-person shooters. For every one of them, there's 
 about a thousand Japanese games that never leave the country.
 
 There's no shortages of programming languages which have come out of 
 Japan:
 
 http://hopl.murdoch.edu.au/findlanguages.prx?id=jp&which=ByCountry
 http://no-sword.jp/blog/2006/12/programming-without-ascii.html
 
 The one you're most likely to have used or at least know of is Ruby.

Food for thought.. Thanks much for the links..!

cj
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Westley Martínez
On Fri, 2011-02-18 at 04:43 -0800, Xah Lee wrote:
 On 2011-02-16, Xah Lee  wrote:
 │ The vast majority of computer languages use ASCII as their character set.
 │ This means they jam a multitude of operators into about 20 symbols.
 │ Often, a symbol has multiple meanings depending on context.
 
 On 2011-02-17, rantingrick wrote:
 …
 
 On 2011-02-17, Cthun wrote:
 │ And you omitted the #1 most serious objection to Xah's proposal,
 │ rantingrick, which is that to implement it would require unrealistic
 │ things such as replacing every 101-key keyboard with 10001-key
 keyboards
 │ and training everyone to use them. Xah would have us all replace our
 │ workstations with machines that resemble pipe organs, rantingrick,
 or
 │ perhaps the cockpits of the three surviving Space Shuttles. No doubt
 │ they'd be enormously expensive, as well as much more difficult to
 learn
 │ to use, rantingrick.
 
 keyboard shouldn't be a problem.
 
 Look at APL users.
 http://en.wikipedia.org/wiki/APL_(programming_language)
 they are happy campers.
 
 Look at Mathematica, which has supported a lot of math symbols since v3
 (~1997), before unicode became popular.
 see:
 〈How Mathematica does Unicode?〉
 http://xahlee.org/math/mathematica_unicode.html
 
 Word processors also automatically do symbols such as “curly quotes”,
 the trademark sign ™, copyright sign ©, arrow →, bullet •, ellipsis …,
 etc., and the number of people who produce documents with these chars
 is probably more than the number of programmers.
 
 In emacs, I recently also wrote a mode that lets you easily input a few
 hundred unicode chars.
 〈Emacs Math Symbols Input Mode (xmsi-mode)〉
 http://xahlee.org/emacs/xmsi-math-symbols-input.html
 
 The essence is that you just need an input system.
 
 Look at Chinese, Japanese, Korean, or Arabic. They happily type
 without requiring that every symbol they use have a corresponding
 key on the keyboard. For some languages, such as Chinese, that's
 impossible or impractical.
 
 When an input system is well designed, it can actually be more
 efficient than keyboard combinations for typing special symbols (such
 as Mac OS X's Opt key, or Windows's AltGr). That's because an input
 system can be context based, looking at adjacent text to guess what
 you want.
 
 For example, when you type >= in python, the text editor can
 automatically change it to ≥ (when it detects that it's appropriate,
 e.g. there's an “if” nearby).
 
 Chinese phonetic input systems use this extensively. The abbrev
 systems in word processors and emacs are also a form of this. I wrote
 some thoughts about this here:
 
 〈Designing a Math Symbols Input System〉
 http://xahlee.org/comp/design_math_symbol_input.html
 
 Xah Lee
More people despise APL than like it.

Allowing non-ascii characters as operators is a silly idea simply
because if xorg breaks, which it's very likely to do with the current
video drivers, I'm screwed. Not only does the Linux virtual terminal not
support displaying these special characters, but there's also no way of
inputting them. On top of that, these special characters require more
bytes to display than ascii text, which would bloat source files
unnecessarily.

You say we have symbol congestion, but in reality we have our own symbol
bloat. Japanese has more or less than three punctuation marks, while
English has perhaps more than the alphabet! The fundamental point here
is using non-ascii operators violates the Zen of Python. It violates
Simple is better than complex, as well as There should be one-- and
preferably only one --obvious way to do it.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steven D'Aprano
On Fri, 18 Feb 2011 04:43:13 -0800, Xah Lee wrote:

 for example, when you type >= in python, the text editor can
 automatically change it to ≥ (when it detects that it's appropriate,
 e.g. there's an “if” nearby)

You can't rely on the presence of an `if`.

flag = x >= y
value = lookup[x >= y]
filter(lambda x, y: x >= y, sequence)

Not that you need to. There are no circumstances in Python where the 
meaning of >= is changed by an `if` statement.
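Adapted slightly so they run as a script (the two-argument lambda would not work with `filter`, which passes one item at a time, so pairs are used here), the point is checkable:

```python
x, y = 3, 2

flag = x >= y                      # ordinary expression, no 'if' needed
lookup = {True: "yes", False: "no"}
value = lookup[x >= y]             # comparison used as a dict key

# filter() passes one argument at a time, so compare within pairs:
pairs = [(3, 2), (1, 5)]
kept = list(filter(lambda p: p[0] >= p[1], pairs))

print(flag, value, kept)           # True yes [(3, 2)]
```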


Followups set to comp.lang.python.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steven D'Aprano
On Fri, 18 Feb 2011 11:16:30 -0800, Westley Martínez wrote:

 Allowing non-ascii characters as operators is a silly idea simply
 because if xorg breaks, which it's very likely to do with the current
 video drivers, I'm screwed. 

And if your hard drive crashes, you're screwed too. Why stop at "xorg 
breaks"?

Besides, Windows and MacOS users will be scratching their heads asking 
"xorg? Why should I care about xorg?"

Programming languages are perfectly allowed to rely on the presence of a 
working environment. I don't think general purpose programming languages 
should be designed with reliability in the presence of a broken 
environment in mind.

Given the use-cases people put Python to, it is important for the 
language to *run* without a GUI environment. It's also important (but 
less so) to allow people to read and/or write source code without a GUI, 
which means continuing to support the pure-ASCII syntax that Python 
already supports. But Python already supports non-ASCII source files, 
with an optional encoding line in the first two lines of the file, so it 
is already possible to write Python code that runs without X but can't be 
easily edited without a modern, Unicode-aware editor.


 Not only does the Linux virtual terminal not
 support displaying these special characters, but there's also no way of
 inputting them. 

That's a limitation of the Linux virtual terminal. In 1984 I used to use 
a Macintosh which was perfectly capable of displaying and inputting non-
ASCII characters with a couple of key presses. Now that we're nearly a 
quarter of the way into 2011, I'm using a Linux PC that makes entering a 
degree sign or a pound sign a major undertaking, if it's even possible at 
all. It's well past time for Linux to catch up with the 1980s.


 On top of that, these special characters require more
 bytes to display than ascii text, which would bloat source files
 unnecessarily.

Oh come on now, now you're just being silly. Bloat source files? From a 
handful of double-byte characters? Cry me a river!

This is truly a trivial worry:

>>> s = "if x >= y:\n"
>>> u = u"if x ≥ y:\n"
>>> len(s)
11
>>> len(u.encode('utf-8'))
12


The source code to the decimal module in Python 3.1 is 205470 bytes in 
size. It contains 63 instances of >= and 62 of <=. Let's suppose 
every one of those were changed to ≥ or ≤ characters. This would bloat 
the file by 0.06%.

Oh the humanity!!! How will my 2TB hard drive cope?!?!
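The arithmetic behind that 0.06% figure checks out; a quick sketch (file size and operator counts taken from the post above):

```python
# '≥' is three bytes in UTF-8; the ASCII digraph '>=' is two, so each
# substitution grows the file by exactly one byte.
per_symbol = len("≥".encode("utf-8")) - len(">=".encode("ascii"))

file_size = 205470          # decimal.py in Python 3.1, per the post
count = 63 + 62             # instances of '>=' and '<='

growth_pct = 100 * count * per_symbol / file_size
print(per_symbol, round(growth_pct, 2))  # 1 0.06
```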


  You say we have symbol congestion, but in reality we have our own symbol
  bloat. Japanese has more or less three punctuation marks, while
  English has perhaps more than the alphabet! The fundamental point here
  is that using non-ascii operators violates the Zen of Python. It violates
  "Simple is better than complex", as well as "There should be one-- and
  preferably only one --obvious way to do it".

Define simple and complex in this context.

It seems to me that single character symbols such as ≥ are simpler than 
digraphs such as >=, simply because the parser knows what the symbol is 
after reading a single character. It doesn't have to read on to tell 
whether you meant > or >=. 
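The lookahead point can be made concrete with a toy scanner (purely illustrative; not how CPython's tokenizer is actually implemented):

```python
def tokenize(src):
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c in "<>":
            # Digraph operators force one character of lookahead:
            if i + 1 < len(src) and src[i + 1] == "=":
                tokens.append(c + "=")
                i += 2
                continue
            tokens.append(c)
        elif c in "≤≥":
            tokens.append(c)     # decided immediately, no peeking
        elif not c.isspace():
            tokens.append(c)
        i += 1
    return tokens

print(tokenize("a >= b"))  # ['a', '>=', 'b']
print(tokenize("a ≥ b"))   # ['a', '≥', 'b']
```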

You can add complexity to one part of the language (hash tables are more 
complex than arrays) in order to simplify another part (dict lookup is 
simpler and faster than managing your own data structure in a list).

And as for one obvious way, there's nothing obvious about using a | b for 
set union. Why not a + b? The mathematician in me wants to spell set 
union and intersection as a ⋃ b ⋂ c, which is the obvious way to me (even 
if my lousy editor makes it a PITA to *enter* the symbols).

The lack of good symbols for operators in ASCII is a real problem. Other 
languages have solved it in various ways, sometimes by using digraphs (or 
higher-order symbols), and sometimes by using Unicode (or some platform/
language specific equivalent). I think that given the poor support for 
Unicode in the general tools we use, the use of non-ASCII symbols will 
have to wait until Python4. Hopefully by 2020 input methods will have 
improved, and maybe even xorg be replaced by something less sucky. 

I think that the push for better and more operators will have to come 
from the Numpy community. Further reading:


http://mail.python.org/pipermail/python-dev/2008-November/083493.html


-- 
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Westley Martínez
On Sat, 2011-02-19 at 01:01 +, Steven D'Aprano wrote:
 On Fri, 18 Feb 2011 11:16:30 -0800, Westley Martínez wrote:
 
  Allowing non-ascii characters as operators is a silly idea simply
  because if xorg breaks, which it's very likely to do with the current
  video drivers, I'm screwed. 
 
 And if your hard drive crashes, you're screwed too. Why stop at xorg 
 breaks?
Because I can still edit text files in the terminal.

I guess you could manually control the magnet in the hard-drive if it
failed but that'd be horrifically tedious.


 Besides, Windows and MacOS users will be scratching their heads asking 
 "xorg? Why should I care about xorg?"
Why should I care if my programs run on Windows and Mac? Because I'm a
nice guy, I guess.


 Programming languages are perfectly allowed to rely on the presence of a 
 working environment. I don't think general purpose programming languages 
 should be designed with reliability in the presence of a broken 
 environment in mind.
 
 Given the use-cases people put Python to, it is important for the 
 language to *run* without a GUI environment. It's also important (but 
 less so) to allow people to read and/or write source code without a GUI, 
 which means continuing to support the pure-ASCII syntax that Python 
 already supports. But Python already supports non-ASCII source files, 
 with an optional encoding line in the first two lines of the file, so it 
 is already possible to write Python code that runs without X but can't be 
 easily edited without a modern, Unicode-aware editor.
 
  Not only does the Linux virtual terminal not
  support displaying these special characters, but there's also no way of
  inputting them. 
 
 That's a limitation of the Linux virtual terminal. In 1984 I used to use 
 a Macintosh which was perfectly capable of displaying and inputting non-
 ASCII characters with a couple of key presses. Now that we're nearly a 
 quarter of the way into 2011, I'm using a Linux PC that makes entering a 
 degree sign or a pound sign a major undertaking, if it's even possible at 
 all. It's well past time for Linux to catch up with the 1980s.

I feel it's unnecessary for Linux to catch up simply because we have
no need for these special characters! When I read Python code, I only
see text from Latin-1, which is easy to input and every *decent* font
supports it. When I read C code, I only see text from Latin-1. When I
read code from just about everything else that's plain text, I only see
text from Latin-1. Even Latex, which is designed for typesetting
mathematical formulas, only allows ASCII in its input. Languages that
accept non-ASCII input have always been somewhat esoteric. There's
nothing wrong with being different, but there is something wrong in
being so different that you're causing problems or at least speed bumps
for particular users.


  On top of that, these special characters require more
  bytes to display than ascii text, which would bloat source files
  unnecessarily.
 
 Oh come on now, now you're just being silly. Bloat source files? From a 
 handful of double-byte characters? Cry me a river!
 
 This is truly a trivial worry:
 
  >>> s = "if x >= y:\n"
  >>> u = u"if x ≥ y:\n"
  >>> len(s)
 11
  >>> len(u.encode('utf-8'))
 12
 
 
 The source code to the decimal module in Python 3.1 is 205470 bytes in 
 size. It contains 63 instances of >= and 62 of <=. Let's suppose 
 every one of those were changed to ≥ or ≤ characters. This would bloat 
 the file by 0.06%.
 
 Oh the humanity!!! How will my 2TB hard drive cope?!?!
A byte saved is a byte earned. What about embedded systems trying to
conserve as many resources as possible?


  You say we have symbol congestion, but in reality we have our own symbol
  bloat. Japanese has more or less three punctuation marks, while
  English has perhaps more than the alphabet! The fundamental point here
  is that using non-ascii operators violates the Zen of Python. It violates
  "Simple is better than complex", as well as "There should be one-- and
  preferably only one --obvious way to do it".
 
 Define simple and complex in this context.
 
 It seems to me that single character symbols such as ≥ are simpler than 
 digraphs such as >=, simply because the parser knows what the symbol is 
 after reading a single character. It doesn't have to read on to tell 
 whether you meant > or >=. 
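Steven's lookahead point can be made concrete with a toy scanner; the token names ("GE", "GT") are hypothetical, and this is only a sketch of the one-character-lookahead difference:

```python
# A single-character operator is identified as soon as it is read;
# a digraph forces the scanner to peek at the next character.
def scan_op(src, i):
    ch = src[i]
    if ch == "≥":                              # decided immediately
        return "GE", i + 1
    if ch == ">":                              # must peek: > or >= ?
        if i + 1 < len(src) and src[i + 1] == "=":
            return "GE", i + 2
        return "GT", i + 1
    raise SyntaxError("unexpected %r" % ch)

assert scan_op("x ≥ y", 2) == ("GE", 3)
assert scan_op("x >= y", 2) == ("GE", 4)
assert scan_op("x > y", 2) == ("GT", 3)
```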
 
 You can add complexity to one part of the language (hash tables are more 
 complex than arrays) in order to simplify another part (dict lookup is 
 simpler and faster than managing your own data structure in a list).
I believe dealing with ASCII is simpler than dealing with Unicode, for
reasons on both the developer's and user's side.


 And as for one obvious way, there's nothing obvious about using a | b for 
 set union. Why not a + b? The mathematician in me wants to spell set 
 union and intersection as a ⋃ b ⋂ c, which is the obvious way to me (even 
 if my lousy editor makes it a PITA to *enter* the symbols).
Not all programmers are mathematicians 

Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Paul Rubin
Westley Martínez aniko...@gmail.com writes:
 When I read Python code, I only
 see text from Latin-1, which is easy to input and every *decent* font
 supports it. When I read C code, I only see text from Latin-1. When I
 read code from just about everything else that's plain text, I only see
 text from Latin-1. Even Latex, which is designed for typesetting
 mathematical formulas, only allows ASCII in its input. Languages that
 accept non-ASCII input have always been somewhat esoteric.

Maybe we'll see more of them as time goes by.  C, Python, and Latex all
predate Unicode by a number of years.  If Latex were written today it
would probably accept Unicode for math symbols, accented and non-Latin
characters, etc.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Steven D'Aprano
On Fri, 18 Feb 2011 18:14:32 -0800, Westley Martínez wrote:

 Besides, Windows and MacOS users will be scratching their head asking
 "xorg? Why should I care about xorg?"
 Why should I care if my programs run on Windows and Mac? Because I'm a
 nice guy I guess

Python is a programming language that is operating system independent, 
and not just a Linux tool. So you might not care about your Python 
programs running on Windows, but believe me, the Python core developers 
care about Python running on Windows and Mac OS. (Even if sometimes their 
lack of resources make Windows and Mac somewhat second-class citizens.)


 That's a limitation of the Linux virtual terminal. In 1984 I used to
 use a Macintosh which was perfectly capable of displaying and inputting
 non- ASCII characters with a couple of key presses. Now that we're
 nearly a quarter of the way into 2011, I'm using a Linux PC that makes
 entering a degree sign or a pound sign a major undertaking, if it's
 even possible at all. It's well past time for Linux to catch up with
 the 1980s.
 
 I feel it's unnecessary for Linux to catch up simply because we have
 no need for these special characters!

Given that your name is Westley Martínez, that's an astonishing claim! 
How do you even write your name in your own source code???

Besides, speak for yourself, not for we. I have need for them.


 When I read Python code, I only
 see text from Latin-1, which is easy to input 

Hmmm. I wish I knew an easy way to input it. All the solutions I've come 
across are rubbish. How do you enter (say) í at the command line of an 
xterm?

But in any case, ASCII != Latin-1, so you're already using more than 
ASCII characters.


 Languages that
 accept non-ASCII input have always been somewhat esoteric.

Then I guess Python is esoteric, because with source code encodings it 
supports non-ASCII literals and even variables:

[steve@sylar ~]$ cat encoded.py
# -*- coding: utf-8 -*-
résumé = "Some text here..."
print(résumé)

[steve@sylar ~]$ python3.1 encoded.py
Some text here...

 
[...]
 A byte saved is a byte earned. What about embedded systems trying to
 conserve as much resources as possible?

Then they don't have to use multi-byte characters, just like they can 
leave out comments, and .pyo files, and use `ed` for their standard text 
editor instead of something bloated like vi or emacs.

[...]
 I believe dealing with ASCII is simpler than dealing with Unicode, for
 reasons on both the developer's and user's side.

Really? Well, I suppose if you want to define "you can't do this AT ALL" 
as "simpler", then, yes, ASCII is simpler.

Using pure-ASCII means I am forced to write extra code because there 
aren't enough operators to be useful, e.g. element-wise addition versus 
concatenation. It means I'm forced to spell out symbols in full, like 
"British pound" instead of £, and use legally dubious work-arounds like 
(c) instead of ©, and misspell words (including people's names) because 
I can't use the correct characters, and am forced to use unnecessarily 
long and clumsy English longhand for standard mathematical notation.

If by "simple" you mean "I can't do what I want to do", then I agree 
completely that ASCII is simple.


 And as for one obvious way, there's nothing obvious about using a | b
 for set union. Why not a + b? The mathematician in me wants to spell
 set union and intersection as a ⋃ b ⋂ c, which is the obvious way to me
 (even if my lousy editor makes it a PITA to *enter* the symbols).
 
 Not all programmers are mathematicians (in fact I'd say most aren't). I
 know what those symbols mean, but some people might think a u b n c ...
 what? | actually makes sense because it relates to bitwise OR in which
 bits are turned on.

Not all programmers are C programmers who have learned that | represents 
bitwise OR. Some will say a | b ... what?. I know I did, when I was 
first learning Python, and I *still* need to look them up to be sure I 
get them right.

In other languages, | might be spelled as any of

bitor() OR .OR. || ∨
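For readers following along, the overloading under discussion is easy to see in plain Python (nothing hypothetical here):

```python
# | is bitwise OR on ints and union on sets; + is deliberately
# undefined for sets, so the "obvious" arithmetic spelling fails.
assert 0b1010 | 0b0110 == 0b1110
assert {1, 2} | {2, 3} == {1, 2, 3} == {1, 2}.union({2, 3})
try:
    {1, 2} + {2, 3}
except TypeError:
    pass   # sets do not support +
```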


[...]
 Being a person who has
 had to deal with the í in my last name and Japanese text on a variety of
 platforms, I've found the current methods of non-ascii input to be
 largely platform-dependent and---for lack of a better word---crappy,

Agreed one hundred percent! Until there are better input methods for non-
ASCII characters, without the need for huge keyboards, Unicode is hard 
and ASCII easy, and Python can't *rely* on Unicode tokens.

That doesn't mean that languages like Python can't support Unicode 
tokens, only that they shouldn't be the only way to do things. For a long 
time Pascal included (* *) as a synonym for { } because not all keyboards 
included the { } characters, and C has support for trigraphs:

http://publications.gbdirect.co.uk/c_book/chapter2/alphabet_of_c.html

Eventually, perhaps in another 20 years, digraphs like != and >= will go 
the same way as trigraphs. Just as people today find it hard to remember 
a time when 

Re: Problems of Symbol Congestion in Computer Languages

2011-02-18 Thread Westley Martínez
On Sat, 2011-02-19 at 06:29 +, Steven D'Aprano wrote:
 On Fri, 18 Feb 2011 18:14:32 -0800, Westley Martínez wrote:
 
  Besides, Windows and MacOS users will be scratching their head asking
  "xorg? Why should I care about xorg?"
  Why should I care if my programs run on Windows and Mac? Because I'm a
  nice guy I guess
 
 Python is a programming language that is operating system independent, 
 and not just a Linux tool. So you might not care about your Python 
 programs running on Windows, but believe me, the Python core developers 
 care about Python running on Windows and Mac OS. (Even if sometimes their 
 lack of resources make Windows and Mac somewhat second-class citizens.)

You didn't seem to get my humor. It's ok; most people don't.


  That's a limitation of the Linux virtual terminal. In 1984 I used to
  use a Macintosh which was perfectly capable of displaying and inputting
  non- ASCII characters with a couple of key presses. Now that we're
  nearly a quarter of the way into 2011, I'm using a Linux PC that makes
  entering a degree sign or a pound sign a major undertaking, if it's
  even possible at all. It's well past time for Linux to catch up with
  the 1980s.
  
  I feel it's unnecessary for Linux to catch up simply because we have
  no need for these special characters!
 
 Given that your name is Westley Martínez, that's an astonishing claim! 
 How do you even write your name in your own source code???
 
 Besides, speak for yourself, not for we. I have need for them.

The í is easy to input. (Vim has a diacritic feature) It's the funky
mathematical symbols that are difficult.


  When I read Python code, I only
  see text from Latin-1, which is easy to input 
 
 Hmmm. I wish I knew an easy way to input it. All the solutions I've come 
 across are rubbish. How do you enter (say) í at the command line of an 
 xterm?

I use this in my xorg.conf:

Section "InputDevice"
    Identifier  "Keyboard0"
    Driver      "kbd"
    Option      "XkbLayout"  "us"
    Option      "XkbVariant" "dvorak-alt-intl"
EndSection

Simply remove 'dvorak-' to get qwerty. It allows you to use the right
Alt key as AltGr. For example:
AltGr+' i = í
AltGr+c = ç
AltGr+s = ß

I don't work on Windows or Mac enough to have figured out how to do on
those platforms, but I'm sure there's a simple way.
Again, it's the funky symbols that would be difficult to input.
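As an aside, when the keyboard won't cooperate, Python itself can name any symbol via \N{...} escapes (standard string-escape syntax, not an input method):

```python
# "Funky" symbols are reachable by Unicode name or code point,
# regardless of keyboard layout.
assert "\N{GREATER-THAN OR EQUAL TO}" == "≥"
assert "\N{DEGREE SIGN}" == "°"
assert "\u2265" == "≥"     # same character, by code point
```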

 But in any case, ASCII != Latin-1, so you're already using more than 
 ASCII characters.
 
 
  Languages that
  accept non-ASCII input have always been somewhat esoteric.
 
 Then I guess Python is esoteric, because with source code encodings it 
 supports non-ASCII literals and even variables:
 
 [steve@sylar ~]$ cat encoded.py
 # -*- coding: utf-8 -*-
 résumé = "Some text here..."
 print(résumé)
 
 [steve@sylar ~]$ python3.1 encoded.py
 Some text here...

I should reword that to "Languages that require non-ASCII input have
always been somewhat esoteric", i.e. APL.


 [...]
  A byte saved is a byte earned. What about embedded systems trying to
  conserve as much resources as possible?
 
 Then they don't have to use multi-byte characters, just like they can 
 leave out comments, and .pyo files, and use `ed` for their standard text 
 editor instead of something bloated like vi or emacs.

Hey, I've heard of jobs where all you do is remove comments from source
code, believe it or not!


 [...]
  I believe dealing with ASCII is simpler than dealing with Unicode, for
  reasons on both the developer's and user's side.
 
  Really? Well, I suppose if you want to define "you can't do this AT ALL" 
  as "simpler", then, yes, ASCII is simpler.
 
 Using pure-ASCII means I am forced to write extra code because there 
 aren't enough operators to be useful, e.g. element-wise addition versus 
  concatenation. It means I'm forced to spell out symbols in full, like 
  "British pound" instead of £, and use legally dubious work-arounds like 
  (c) instead of ©, and misspell words (including people's names) because 
 I can't use the correct characters, and am forced to use unnecessarily 
 long and clumsy English longhand for standard mathematical notation.
 
  If by "simple" you mean "I can't do what I want to do", then I agree 
  completely that ASCII is simple.

I guess it's a matter of taste. I don't mind seeing my name as
westley_martinez and am so used to seeing **, sqrt(), and / that seeing
the original symbols is a bit foreign!


  And as for one obvious way, there's nothing obvious about using a | b
  for set union. Why not a + b? The mathematician in me wants to spell
  set union and intersection as a ⋃ b ⋂ c, which is the obvious way to me
  (even if my lousy editor makes it a PITA to *enter* the symbols).
  
  Not all programmers are mathematicians (in fact I'd say most aren't). I
  know what those symbols mean, but some people might think a u b n c ...
  what? | actually makes sense because it relates to bitwise OR in which
  bits are turned on.
 
 Not all programmers are C programmers who have 

Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread rantingrick
On Feb 16, 4:07 pm, Xah Lee xah...@gmail.com wrote:
 Vast majority of computer languages use ASCII as its character set.
 This means, it jams multitude of operators into about 20 symbols.
 Often, a symbol has multiple meanings depending on context.

I think in theory the idea of using Unicode chars is good, however in
reality the implementation would be a nightmare!  A wise man once
said: The road to hell is paved in good intentions. ;-)

If we consider all the boundaries that exist between current
(programming) languages (syntax, IDE's, paradigms, etc) then we will
realize that adding *more* symbols does not help, no, it actually
hinders! And since Unicode is just a hodgepodge encoding of many
regional (natural) languages --of which we have too many already in
this world!-- proliferating Unicode symbols in source code only serves
to further complicate our lives with even *more* multiplicity!

Those of us on the *inside* know that Unicode is nothing more than a
poor attempt to monkey patch multiplicity. And that statement barely
scratches the surface of an underlying disease that plagues all of
human civilization. The root cause is selfishness, which *then*
propagates up and manifests itself as multiplicity in our everyday
lives. It starts as the simple selfish notion of "me against other"
and then extrapolates exponentially into the collective of "we
against others".

This type of grouping --or selfish typecasting if you will-- is
impeding the further evolution of homo sapiens. Actually we are
moving at a snail's pace when we could be moving at the speed of light!
We *should* be evolving as a genetic algorithm but instead we are the
ignorant slaves of our own collective selfishness reduced to naive and
completely random implementations of bozosort!

Now don't misunderstand all of this as meaning multiplicity is bad,
because i am not suggesting any such thing! On the contrary,
multiplicity is VERY important in emerging problem domains. Before
such a domain is understood by the collective unconscience we need
options (multiplicity!) from which to choose. However, once a
collective understanding is reached we must rein in the
multiplicity or it will become yet another millstone around our
evolutionary necks, slowing our evolution.

But multiplicity is just the very beginning of a downward spiral of
devolution. Once you allow multiplicity to become the sport of
Entropy, it may be too late for recovery! Entropy leads to shock
(logical disorder) which then leads to stagnation (no logical order at
all!). At this point we lose all forward momentum in our evolution.
And why? Because of nothing more than self gratifying SELFISHNESS.

Anyone with half a brain understands the metric system is far superior
(on many levels) than any of the other units of measurement. However
again we have failed to rein in the multiplicity and so entropy has
run amok, and we are like a deer caught-in-the-headlights of the
shock of our self-induced devolution and simultaneously entirely
incapable of seeing the speeding mass that is about to plow over us
with a tremendous kinetic energy -- evolutionary stagnation!

Sadly this disease of selfishness infects many aspects of the human
species to the very detriment of our collective evolution. Maybe one
day we will see the light of logic and choose to unite in a collective
evolution. Even after thousands of years we are but infants on the
evolutionary scale because we continue to feed the primal urges of
selfishness.


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Cthun

On 17/02/2011 9:11 PM, rantingrick wrote:
. On Feb 16, 4:07 pm, Xah Leexah...@gmail.com  wrote:
. Vast majority of computer languages use ASCII as its character set.
. This means, it jams multitude of operators into about 20 symbols.
. Often, a symbol has multiple meanings depending on contex.
.
. I think in theory the idea of using Unicode chars is good, however in
. reality the implementation would be a nightmare!  A wise man once
. said: The road to hell is paved in good intentions. ;-)
.
. If we consider all the boundaries that exist between current
. (programming) languages (syntax, IDE's, paradigms, etc) then we will
. realize that adding *more* symbols does not help, no, it actually
. hinders! And Since Unicode is just a hodgepodge encoding of many
. regional (natural) languages --of which we have too many already in
. this world!

What does your aversion to cultural diversity have to do with Lisp, 
rantingrick? Gee, I do hope you're not a racist, rantingrick.


. -- proliferating Unicode symbols in source code only serves
. to further complicate our lives with even *more* multiplicity!
.
. Those of us on the *inside* know that Unicode is nothing more than an
. poor attempt to monkey patch multiplicity. And that statement barely
. scratches the surface of an underlying disease that plagues all of
. human civilization. The root case is selfishness, which *then*
. propagates up and manifests itself as multiplicity in our everyday
. lives. It starts as the simple selfish notion of me against other
. and then extrapolates exponentially into the collective of we
. against others.
.
. This type of grouping --or selfish typecasting if you will-- is
. impeding the furtherer evolution of homo sapiens. Actually we are
. moving at a snails pace when we could be moving at the speed of light!
. We *should* be evolving as a genetic algorithm but instead we are the
. ignorant slaves of our own collective selfishness reduced to naive and
. completely random implementations of bozosort!

What does that have to do with Lisp, rantingrick?

. Now don't misunderstand all of this as meaning multiplicity is bad,
. because i am not suggesting any such thing! On the contrary,
. multiplicity is VERY important in emerging problem domains. Before
. such a domain is understood by the collective unconscience we need
. options (multiplicity!) from which to choose from. However, once a
. collective understanding is reached we must reign in the
. multiplicity or it will become yet another millstone around our
. evolutionary necks, slowing our evolution.

Classic illogic. Evolution depends upon diversity as grist for the mill 
of selection, rantingrick. A genetically homogeneous population cannot 
undergo allele frequency shifts, rantingrick.


. But multiplicity is just the very beginning of a downward spiral of
. devolution. Once you allow multiplicity to become the sport of
. Entropy, it may be too late for recovery! Entropy leads to shock
. (logical disorder) which then leads to stagnation (no logical order at
. all!). At this point we loose all forward momentum in our evolution.
. And why? Because of nothing more than self gratifying SELFISHNESS.
.
. Anyone with half a brain understands the metric system is far superior
. (on many levels) then any of the other units of measurement. However
. again we have failed to reign in the multiplicity and so entropy has
. run a muck, and we are like a deer caught-in-the-headlights of the
. shock of our self induced devolution and simultaneously entirely
. incapable of seeing the speeding mass that is about to plow over us
. with a tremendous kinetic energy -- evolutionary stagnation!
.
. Sadly this disease of selfishness infects many aspects of the human
. species to the very detriment of our collective evolution. Maybe one
. day we will see the light of logic and choose to unite in a collective
. evolution. Even after thousands of years we are but infants on the
. evolutionary scale because we continue to feed the primal urges of
. selfishness.

What does any of that have to do with Lisp, rantingrick?

And you omitted the #1 most serious objection to Xah's proposal, 
rantingrick, which is that to implement it would require unrealistic 
things such as replacing every 101-key keyboard with 10001-key keyboards 
and training everyone to use them. Xah would have us all replace our 
workstations with machines that resemble pipe organs, rantingrick, or 
perhaps the cockpits of the three surviving Space Shuttles. No doubt 
they'd be enormously expensive, as well as much more difficult to learn 
to use, rantingrick.



Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Dan Stromberg
I prefer special symbol congestion to special symbol proliferation.  A
lot.  A language with few special symbols looks less like line noise, is
easier to read and write, and is easier to google for answers about.

I guess nothing's perfect.

On Wed, Feb 16, 2011 at 2:07 PM, Xah Lee xah...@gmail.com wrote:

 might be interesting.

 〈Problems of Symbol Congestion in Computer Languages (ASCII Jam;
 Unicode; Fortress)〉
 http://xahlee.org/comp/comp_lang_unicode.html

 --
 Problems of Symbol Congestion in Computer Languages (ASCII Jam;
 Unicode; Fortress)

 Xah Lee, 2011-02-05, 2011-02-15

 Vast majority of computer languages use ASCII as its character set.
 This means, it jams multitude of operators into about 20 symbols.
 Often, a symbol has multiple meanings depending on context. Also, a
 sequence of chars are used as a single symbol as a workaround for lack
 of symbols. Even languages that use Unicode as their char set (e.g.
 Java, XML) often still use the ~20 ASCII symbols for all their
 operators. The only exceptions i know of are Mathematica, Fortress,
 APL. This page gives some examples of problems created by symbol
 congestion.

 ---
 Symbol Congestion Workarounds

 
 Multiple Meanings of a Symbol

 Here are some common examples of a symbol that has multiple meanings
 depending on context:

 In Java, [ ] is a delimiter for array, also a delimiter for getting an
 element of array, also as part of the syntax for declaring an array
 type.

 In Java and many other langs, ( ) is used for expression grouping,
 also as delimiter for arguments of a function call, also as delimiters
 for parameters of a function's declaration.

 In Perl and many other langs, : is used as a separator in a ternary
 expression e.g. (test ? "yes" : "no"), also as a namespace separator
 (e.g. use Data::Dumper;).

 In URL, / is used as path separators, but also as indicator of
 protocol. e.g. http://example.org/comp/unicode.html

 In Python and many others, < is used for “less than” boolean operator,
 but also as an alignment flag in its “format” method, also as a
 delimiter of named group in regex, and also as part of char in other
 operators that are made of 2 chars, e.g.: <= << <<=.
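The three Python roles of < listed above can be checked directly:

```python
import re

assert (1 < 2) is True                      # comparison operator
assert "{:<6}".format("ab") == "ab    "     # alignment flag in format specs
m = re.match(r"(?P<word>\w+)", "hello")     # named-group delimiter in regex
assert m.group("word") == "hello"
```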

 
 Examples of Multi-Char Operators

 Here are some common examples of operators that are made of multiple
 characters: || && == <= >= != ** =+ =* := ++ -- :: // /* (* …

 ---
 Fortress & Unicode

 The language designer Guy Steele recently gave a very interesting
 talk. See: Guy Steele on Parallel Programing. In it, he showed code
 snippets of his language Fortress, which freely uses Unicode as
 operators.

 For example, list delimiters are not the typical curly bracket {1,2,3}
 or square bracket [1,2,3], but the unicode angle bracket ⟨1,2,3⟩.
 (See: Matching Brackets in Unicode.) It also uses the circle plus ⊕ as
 operator. (See: Math Symbols in Unicode.)

 ---
 Problems of Symbol Congestion

 I really appreciate such use of unicode. The tradition of sticking to
 the 95 chars in ASCII of 1960s is extremely limiting. It creates
 complex problems manifested in:

* String Escape mechanism (C's backslash \n, \/, …, widely
 adopted.)
* Complex delimiters for strings. (Python's triple quotes and
 perl's variable delimiters q() q[] q{} m//, and heredoc. (See: Strings
 in Perl and Python ◇ Heredoc mechanism in PHP and Perl.)
* Crazy leaning toothpicks syndrome, especially bad in emacs
 regex.
* Complexities in character representation (See: Emacs's Key
 Notations Explained (\r, ^M, C-m, RET, return, M-, meta) ◇ HTML
 entities problems. See: HTML Entities, Ampersand, Unicode, Semantics.)
* URL Percent Encoding problems and complexities: Javascript
 Encode URL, Escape String

 All these problems occur because we are jamming so many meanings into
 about 20 symbols in ASCII.

 See also:

* Computer Language Design: Strings Syntax
* HTML6: Your JSON and SXML Simplified

 Most of today's languages do not support unicode in function or
 variable names, so you can forget about using unicode in variable
 names (e.g. α=3) or function names (e.g. “lambda” as “λ” or “function”
 as “ƒ”), or defining your own operators (e.g. “⊕”).

 However, there are a few languages i know that do support unicode in
 function or variable names. Some of these allow you to define your own
 operators. However, they may not allow unicode for the operator
 symbol. See: Unicode Support in Ruby, Perl, Python, javascript, Java,
 Emacs Lisp, Mathematica.
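One caveat to the paragraph above: Python 3 does accept Unicode letters in identifiers (though still not user-defined operator symbols like ⊕); a quick sketch:

```python
# Unicode letters are valid Python 3 identifiers;
# operator symbols such as ⊕ remain a SyntaxError.
α = 3

def ƒ(x):
    return x * α

assert ƒ(2) == 6
```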

  Xah


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Cor Gest
Some entity, AKA Cthun cthun_...@qmail.net.au,
wrote this mindboggling stuff: 
 (selectively-snipped-or-not-p)


 And you omitted the #1 most serious objection to Xah's proposal,
 rantingrick, which is that to implement it would require unrealistic
 things such as replacing every 101-key keyboard with 10001-key
 keyboards and training everyone to use them. Xah would have us all
 replace our workstations with machines that resemble pipe organs,
 rantingrick, or perhaps the cockpits of the three surviving Space
 Shuttles. No doubt they'd be enormously expensive, as well as much
 more difficult to learn to use, rantingrick.

At least it should try to mimic a space-cadet keyboard, shouldn't it? 

Cor

-- 
Answering in monosyllables is easier; I can even afford milk
 Advanced political correctness is indistinguishable from sarcasm
First rule of engaging in a gunfight:  HAVE A GUN
SPAM DELENDA EST http://www.spammesenseless.nl


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread rantingrick
On Feb 17, 8:40 pm, Cthun cthun_...@qmail.net.au wrote:

 What does your aversion to cultural diversity have to do with Lisp,
 rantingrick? Gee, I do hope you're not a racist, rantingrick.

Why must language be constantly connected-at-the-hip to cultural
diversity? People have this irrational fear that if we create a single
universal language then *somehow* freedom has been violated.

You *do* understand that language is just a means of communication,
correct? And i would say a very inefficient means. However, until
telekinesis becomes common-place the only way us humans have to
communicate is through a fancy set of grunts and groans. Since that is
the current state of our communication thus far, would it not be
beneficial that at least we share a common world wide mapping of this
noise making?

<sarcasm> Hey, wait, i have an idea... maybe some of us should drive
on the right side of the road and some on the left. This way we can be
unique (psst: SELFISH) from one geographic location on the earth to
another geographic location on the earth. Surely this multiplicity
would not cause any problems? Because, heck, selfishness is so much
more important than anyone's personal safety anyway </sarcasm>

Do you see how this morphs into a foolish consistency?

 . Now don't misunderstand all of this as meaning multiplicity is bad,
 . because i am not suggesting any such thing! On the contrary,
 . multiplicity is VERY important in emerging problem domains. Before
 . such a domain is understood by the collective unconscience we need
 . options (multiplicity!) from which to choose from. However, once a
 . collective understanding is reached we must reign in the
 . multiplicity or it will become yet another millstone around our
 . evolutionary necks, slowing our evolution.

 Classic illogic. Evolution depends upon diversity as grist for the mill
 of selection, rantingrick. A genetically homogeneous population cannot
 undergo allele frequency shifts, rantingrock.

Oh, maybe you missed this paragraph:

 . Now don't misunderstand all of this as meaning multiplicity is
bad,
 . because i am not suggesting any such thing! On the contrary,
 . multiplicity is VERY important in emerging problem domains. Before
 . such a domain is understood by the collective unconscience we need
 . options (multiplicity!) from which to choose from. However, once a
 . collective understanding is reached we must reign in the
 . multiplicity or it will become yet another millstone around our
 . evolutionary necks, slowing our evolution.

Or maybe this one:

. I think in theory the idea of using Unicode chars is good, however
in
. reality the implementation would be a nightmare!  A wise man once
. said: The road to hell is paved in good intentions. ;-)

Or this one:

. If we consider all the boundaries that exist between current
. (programming) languages (syntax, IDE's, paradigms, etc) then we
will
. realize that adding *more* symbols does not help, no, it actually
. hinders! And Since Unicode is just a hodgepodge encoding of many
. regional (natural) languages --of which we have too many already in
. this world!

 What does any of that have to do with Lisp, rantingrick?

The topic is *ahem*... Problems of Symbol Congestion in Computer
Languages... of which i think is not only a lisp issue but an issue
of any language. (see my comments about selfishness for insight)

 And you omitted the #1 most serious objection to Xah's proposal,
 rantingrick, which is that to implement it would require unrealistic
 things such as replacing every 101-key keyboard with 10001-key keyboards
 and training everyone to use them. Xah would have us all replace our
 workstations with machines that resemble pipe organs, rantingrick, or
 perhaps the cockpits of the three surviving Space Shuttles. No doubt
 they'd be enormously expensive, as well as much more difficult to learn
 to use, rantingrick.

Yes, if you'll read my entire post then you'll clearly see that i
disagree with Mr Lee on using Unicode chars in source code. My
intention was to educate him on the pitfalls of multiplicity.




Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Littlefield, Tyler

My intention was to educate him on the pitfalls of multiplicity.
Oh, that's what you call that long-winded nonsense? Education? You must 
live in America. Can I hazard a guess that your universal language might 
be English? Has it not ever occurred to you that people take pride in 
their language? It is part of their culture. And yet you rant on about 
selfishness?

On 2/17/2011 8:29 PM, rantingrick wrote:

On Feb 17, 8:40 pm, Cthuncthun_...@qmail.net.au  wrote:


What does your aversion to cultural diversity have to do with Lisp,
rantingrick? Gee, I do hope you're not a racist, rantingrick.

Why must language be constantly connected-at-the-hip to cultural
diversity? People have this irrational fear that if we create a single
universal language then *somehow* freedom have been violated.

You *do* understand that language is just a means of communication,
correct? And i would say a very inefficient means. However, until
telekinesis becomes common-place the only way us humans have to
communicate is through a fancy set of grunts and groans. Since that is
the current state of our communication thus far, would it not be
beneficial that at least we share a common world wide mapping of this
noise making?

<sarcasm>  Hey, wait, i have an idea... maybe some of us should drive
on the right side of the road and some on the left. This way we can be
unique (psst: SELFISH) from one geographic location on the earth to
another geographic location on the earth. Surely this multiplicity
would not cause any problems? Because, heck, selfishness is so much
more important than anyones personal safety anyway</sarcasm>

Do you see how this morphs into a foolish consistency?


. Now don't misunderstand all of this as meaning multiplicity is bad,
. because i am not suggesting any such thing! On the contrary,
. multiplicity is VERY important in emerging problem domains. Before
. such a domain is understood by the collective unconscious we need
. options (multiplicity!) from which to choose. However, once a
. collective understanding is reached we must rein in the
. multiplicity or it will become yet another millstone around our
. evolutionary necks, slowing our evolution.

Classic illogic. Evolution depends upon diversity as grist for the mill
of selection, rantingrick. A genetically homogeneous population cannot
undergo allele frequency shifts, rantingrick.

Oh, maybe you missed this paragraph:

  . Now don't misunderstand all of this as meaning multiplicity is bad,
  . because i am not suggesting any such thing! On the contrary,
  . multiplicity is VERY important in emerging problem domains. Before
  . such a domain is understood by the collective unconscious we need
  . options (multiplicity!) from which to choose. However, once a
  . collective understanding is reached we must rein in the
  . multiplicity or it will become yet another millstone around our
  . evolutionary necks, slowing our evolution.

Or maybe this one:

. I think in theory the idea of using Unicode chars is good, however in
. reality the implementation would be a nightmare!  A wise man once
. said: The road to hell is paved in good intentions. ;-)

Or this one:

. If we consider all the boundaries that exist between current
. (programming) languages (syntax, IDE's, paradigms, etc) then we will
. realize that adding *more* symbols does not help, no, it actually
. hinders! And since Unicode is just a hodgepodge encoding of many
. regional (natural) languages --of which we have too many already in
. this world!


What does any of that have to do with Lisp, rantingrick?

The topic is *ahem*... Problems of Symbol Congestion in Computer
Languages... which i think is not only a Lisp issue but an issue
for any language. (see my comments about selfishness for insight)


And you omitted the #1 most serious objection to Xah's proposal,
rantingrick, which is that to implement it would require unrealistic
things such as replacing every 101-key keyboard with 10001-key keyboards
and training everyone to use them. Xah would have us all replace our
workstations with machines that resemble pipe organs, rantingrick, or
perhaps the cockpits of the three surviving Space Shuttles. No doubt
they'd be enormously expensive, as well as much more difficult to learn
to use, rantingrick.

Yes, if you'll read my entire post then you'll clearly see that i
disagree with Mr Lee on using Unicode chars in source code. My
intention was to educate him on the pitfalls of multiplicity.





--

Thanks,
Ty



Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Cthun

On 17/02/2011 10:29 PM, rantingrick wrote:

On Feb 17, 8:40 pm, Cthun <cthun_...@qmail.net.au> wrote:


What does your aversion to cultural diversity have to do with Lisp,
rantingrick? Gee, I do hope you're not a racist, rantingrick.


Why must language be constantly connected-at-the-hip to cultural
diversity?


Language is a part of culture, rantingrick.


People have this irrational fear that if we create a single
universal language then *somehow* freedom has been violated.


No, it is that if we stop using the others, or forcibly wipe them out, 
that something irreplaceable will have been lost, rantingrick.



You *do* understand that language is just a means of communication,
correct?


Classic unsubstantiated and erroneous claim. A language is also a 
cultural artifact, rantingrick. If we lose, say, the French language, we 
lose one of several almost-interchangeable means of communication, 
rantingrick. But we also lose something as unique and irreplaceable as 
the original canvas of the Mona Lisa, rantingrick.



And i would say a very inefficient means. However, until
telekinesis becomes common-place the only way us humans have to
communicate is through a fancy set of grunts and groans. Since that is
the current state of our communication thus far, would it not be
beneficial that at least we share a common world wide mapping of this
noise making?


What does your question have to do with Lisp, rantingrick?


<sarcasm> Hey, wait, i have an idea... maybe some of us should drive
on the right side of the road and some on the left. This way we can be
unique (psst: SELFISH) from one geographic location on the earth to
another geographic location on the earth.


Classic illogic. Comparing, say, the loss of the French language to 
standardizing on this is like comparing the loss of the Mona Lisa to 
zeroing one single bit in a computer somewhere, rantingrick.



Surely this multiplicity
would not cause any problems? Because, heck, selfishness is so much
more important than anyone's personal safety anyway</sarcasm>


Non sequitur.


Do you see how this morphs into a foolish consistency?


What does your classic erroneous presupposition have to do with Lisp, 
rantingrick?



Classic illogic. Evolution depends upon diversity as grist for the mill
of selection, rantingrick. A genetically homogeneous population cannot
undergo allele frequency shifts, rantingrick.


Oh, maybe you missed this paragraph


What does your classic erroneous presupposition have to do with Lisp, 
rantingrick?



  . Now don't misunderstand all of this as meaning multiplicity is bad,
  . because i am not suggesting any such thing! On the contrary,
  . multiplicity is VERY important in emerging problem domains. Before
  . such a domain is understood by the collective unconscious we need
  . options (multiplicity!) from which to choose. However, once a
  . collective understanding is reached we must rein in the
  . multiplicity or it will become yet another millstone around our
  . evolutionary necks, slowing our evolution.


Classic erroneous presupposition that evolution is supposed to reach a 
certain point and then stop and stagnate on a single universal standard, 
rantingrick.



Or maybe this one:

. I think in theory the idea of using Unicode chars is good, however in
. reality the implementation would be a nightmare!  A wise man once
. said: The road to hell is paved in good intentions. ;-)


Classic unsubstantiated and erroneous claim. I read that one, rantingrick.


Or this one:

. If we consider all the boundaries that exist between current
. (programming) languages (syntax, IDE's, paradigms, etc) then we will
. realize that adding *more* symbols does not help, no, it actually
. hinders! And since Unicode is just a hodgepodge encoding of many
. regional (natural) languages --of which we have too many already in
. this world!


Classic unsubstantiated and erroneous claim. I read that one, too, 
rantingrick.



What does any of that have to do with Lisp, rantingrick?


The topic is *ahem*... Problems of Symbol Congestion in Computer
Languages... which i think is not only a Lisp issue but an issue
for any language.


Classic illogic. The topic of the *thread* is *computer* languages, yet 
you attacked non-computer languages in the majority of your rant, 
rantingrick. Furthermore, the topic of the *newsgroup* is the *Lisp 
subset* of computer languages.



(see my comments about selfishness for insight)


What does that have to do with Lisp, rantingrick?


And you omitted the #1 most serious objection to Xah's proposal,
rantingrick, which is that to implement it would require unrealistic
things such as replacing every 101-key keyboard with 10001-key keyboards
and training everyone to use them. Xah would have us all replace our
workstations with machines that resemble pipe organs, rantingrick, or
perhaps the cockpits of the three surviving Space Shuttles. No doubt
they'd be enormously expensive, as well as much more difficult

Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread alex23
rantingrick rantingr...@gmail.com wrote:
 Cthun cthun_...@qmail.net.au wrote:

  What does your aversion to cultural diversity have to do with Lisp,
  rantingrick? Gee, I do hope you're not a racist, rantingrick.

 Why must language be constantly connected-at-the-hip to cultural
 diversity? People have this irrational fear that if we create a single
 universal language then *somehow* freedom has been violated.

Because monocultures _die_ and no amount of fascist-like rick-ranting
about a One True Way will ever change that.


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread rantingrick
On Feb 17, 10:04 pm, alex23 wuwe...@gmail.com wrote:
 rantingrick rantingr...@gmail.com wrote:
  Cthun cthun_...@qmail.net.au wrote:

   What does your aversion to cultural diversity have to do with Lisp,
   rantingrick? Gee, I do hope you're not a racist, rantingrick.

  Why must language be constantly connected-at-the-hip to cultural
  diversity? People have this irrational fear that if we create a single
  universal language then *somehow* freedom has been violated.

 Because monocultures _die_

You lack vision.

What is evolution?

Evolution is the pursuit of perfection at the expense of anything and
everything! But what evolution lacks is self awareness. Evolution is
really a bumbling idiot. Intelligent agents should not emulate
classical evolution, no, we should mold it and shape it ourselves!
Darwinian evolution is an archaic and slow process which takes eons to
bring about even the smallest changes. However evolution guided by
intelligent agents is exponential.

Do you think the earth is anything special? Do you think some talking
monkeys are anything special? Do you think tens of thousands of years
are anything but the blink of an eye? All of what you see and know is
but teeny tiny baby steps in the short life of evolution. The system
has not even made it from the couch to the coffee table yet without
losing balance and falling on its proverbial buttocks! And you are
concerned with something as small as human cultures and language
diversity! Open your eyes man!

NEWSFLASH: Humans are NOT the prize of the Universe. We are merely a
bump along the way. Stop self aggrandizing for a moment so you can see
reality. That does not mean we cannot contribute great things! But we
are here for a very short time on a galactic scale. Stop being selfish
and realize what you are, who you are, and where you should be going.
Next time you pass an ant hill look down at one of the ants and imagine
that the ant is you... then you will *get* it.


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Chris Jones
On Thu, Feb 17, 2011 at 09:55:47PM EST, Cor Gest wrote:
 Some entity, AKA Cthun cthun_...@qmail.net.au,

[..]

  And you omitted the #1 most serious objection to Xah's proposal,
  rantingrick, which is that to implement it would require unrealistic
  things such as replacing every 101-key keyboard with 10001-key
  keyboards and training everyone to use them. Xah would have us all
  replace our workstations with machines that resemble pipe organs,
  rantingrick, or perhaps the cockpits of the three surviving Space
  Shuttles. No doubt they'd be enormously expensive, as well as much
  more difficult to learn to use, rantingrick.

 At least it should try to mimic a space-cadet keyboard, shouldn't it?

Implementation details, and not very accurate at that.. the APL keyboard
has no additional keys and yet it has the potential to add up to 100
additional symbols to the US-ASCII keyboard, half of which are produced
via a single modifier.. same as upper-case letters. So unless more than
50+20 = 70 symbols are needed the keyboard conversion would cost about..
what.. $2.00 in stickers and maybe ten minutes to place them.

Maybe the problem lies elsewhere..?

cj



Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread rantingrick

When the sun expands into a red giant and destroys the remaining life
left on this pathetic rock and the entire puny human species have been
transcended by the *next* evolution of intelligent agents (hopefully
before going extinct!) do you really think any advanced lifeforms are
going to look back and give a flying fig about the selfishness of the
human species?

Well they did not give much to evolution, but hey, they had TV's,
Elvis, microwave pizza, and multi-culturalism, woo-hoo!

*Your* thought processes are an evolutionary dead end, my friend! All of
what you see and know is nothing in the greater scheme of things. What
do you think, some future anthropologist is going to dig up human
bones like we do with dinosaur bones? Are you insane? Where will you
dig when the earth is non-existent? Where will you start your research
of human civilization when every piece of matter that proved our
existence has been reformed into energy?

All we have at the end of this gigantic game is that which we can
contribute to the *next step* of evolution. The best we can hope for
is to successfully pass the baton of knowledge to the next runner.
Nothing we create is going to survive time... NOTHING! Not any
structure, not any system of government, not any selfish human sub-
culture, not any piece of art, not even the entire human culture,
nothing!

Only one *thing* is immortal, Evolution. And the only way for us to be
immortal is by offering a substantial building block to that system.
Assimilation is the way of the Universe, like it or not.


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread John Nagle

On 2/17/2011 6:55 PM, Cor Gest wrote:

Some entity, AKA Cthun <cthun_...@qmail.net.au>,
wrote this mindboggling stuff:
  (selectively-snipped-or-not-p)



And you omitted the #1 most serious objection to Xah's proposal,
rantingrick, which is that to implement it would require unrealistic
things such as replacing every 101-key keyboard with 10001-key
keyboards and training everyone to use them. Xah would have us all
replace our workstations with machines that resemble pipe organs,
rantingrick, or perhaps the cockpits of the three surviving Space
Shuttles. No doubt they'd be enormously expensive, as well as much
more difficult to learn to use, rantingrick.


At least it should try to mimic a space-cadet keyboard, shouldn't it?


I've used both the MIT Space Cadet keyboard on a Symbolics LISP
machine, and the Stanford SAIL keyboard. There's something to be said
for having more mathematical symbols.

Some programs use a bigger character set.  MathCAD,
for example, has a broader range of mathematical symbols on
the input side than ASCII offers.  They're not decorative;
MathCAD has different = symbols for assignment, algebraic
equivalence, identity, and comparison.
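Incidentally, Python spells some of those same meanings with distinct ASCII operators rather than distinct symbols; a minimal sketch (the walrus form needs Python 3.8+):

```python
x = [1, 2]          # assignment
y = [1, 2]
assert x == y       # comparison: the two lists are equal in value
assert x is not y   # identity: they are nonetheless different objects
z = (n := 3)        # binding used inside an expression (walrus operator)
assert z == 3 and n == 3
```

Same idea as MathCAD's separate symbols, just encoded as multi-character ASCII operators instead.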

I've previously mentioned that Python suffers in a few places
from unwanted overloading.  Using + for concatenation of
strings, then extending that to vectors, resulted in undesirable
semantics.  + on arrays from numpy, and on built-in vectors
behave quite differently.  A dedicated concatenation operator
would have avoided that mess.
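The point is easy to reproduce; a small sketch, assuming numpy is installed:

```python
import numpy as np

a, b = [1, 2, 3], [4, 5, 6]
concat = a + b                      # built-in lists: + concatenates
vector = np.array(a) + np.array(b)  # numpy arrays: the same + adds element-wise
assert concat == [1, 2, 3, 4, 5, 6]
assert vector.tolist() == [5, 7, 9]
```

One spelling, two quite different semantics, depending on the operand type.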

C++ has worse problems, because it uses < and > as both
brackets and operators.  This does horrible things to the syntax.

However, adding a large number of new operators or new
bracket types is probably undesirable.

John Nagle


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Westley Martínez
On Thu, 2011-02-17 at 21:38 -0800, rantingrick wrote:
 <rant />
That was a very insightful point-of-view. Perhaps you should write a
book or blog, as I'd be very interested in reading more about what you
have to say.



Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Westley Martínez
On Thu, 2011-02-17 at 22:28 -0500, Chris Jones wrote:
 On Thu, Feb 17, 2011 at 09:55:47PM EST, Cor Gest wrote:
  Some entity, AKA Cthun cthun_...@qmail.net.au,
 
 [..]
 
   And you omitted the #1 most serious objection to Xah's proposal,
   rantingrick, which is that to implement it would require unrealistic
   things such as replacing every 101-key keyboard with 10001-key
   keyboards and training everyone to use them. Xah would have us all
   replace our workstations with machines that resemble pipe organs,
   rantingrick, or perhaps the cockpits of the three surviving Space
   Shuttles. No doubt they'd be enormously expensive, as well as much
   more difficult to learn to use, rantingrick.
 
  At least it should try to mimic a space-cadet keyboard, shouldn't it?
 
 Implementation details, and not very accurate at that.. the APL keyboard
 has no additional keys and yet it has the potential to add up to 100
 additional symbols to the US-ASCII keyboard, half of which are produced
 via a single modifier.. same as upper-case letters. So unless more than
 50+20 = 70 symbols are needed the keyboard conversion would cost about..
 what.. $2.00 in stickers and maybe ten minutes to place them.
 
 Maybe the problem lies elsewhere..?
 
 cj
 
$2.00 * thousands of programmers -> thousands of dollars + thousands of
lost training time; not to mention code conversion.



Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Chris Jones
On Fri, Feb 18, 2011 at 01:30:04AM EST, Westley Martínez wrote:
 On Thu, 2011-02-17 at 22:28 -0500, Chris Jones wrote:
  On Thu, Feb 17, 2011 at 09:55:47PM EST, Cor Gest wrote:
   Some entity, AKA Cthun cthun_...@qmail.net.au,
  
  [..]
  
And you omitted the #1 most serious objection to Xah's proposal,
rantingrick, which is that to implement it would require unrealistic
things such as replacing every 101-key keyboard with 10001-key
keyboards and training everyone to use them. Xah would have us all
replace our workstations with machines that resemble pipe organs,
rantingrick, or perhaps the cockpits of the three surviving Space
Shuttles. No doubt they'd be enormously expensive, as well as much
more difficult to learn to use, rantingrick.
  
   At least it should try to mimic a space-cadet keyboard, shouldn't it?
  
  Implementation details, and not very accurate at that.. the APL keyboard
  has no additional keys and yet it has the potential to add up to 100
  additional symbols to the US-ASCII keyboard, half of which are produced
  via a single modifier.. same as upper-case letters. So unless more than
  50+20 = 70 symbols are needed the keyboard conversion would cost about..
  what.. $2.00 in stickers and maybe ten minutes to place them.
  
  Maybe the problem lies elsewhere..?
  
  cj
  

 $2.00 * thousands of programmers -> thousands of dollars + thousands
 of lost training time; not to mention code conversion.

Yeah.. yeah.. one coffee break missed.. get real.. 

:-)

cj


Re: Problems of Symbol Congestion in Computer Languages

2011-02-17 Thread Chris Jones
On Fri, Feb 18, 2011 at 12:43:37AM EST, John Nagle wrote:
 On 2/17/2011 6:55 PM, Cor Gest wrote:

[..]

 At least it should try to mimic a space-cadet keyboard, shouldn't
 it?

  I've used both the MIT Space Cadet keyboard on a Symbolics LISP
  machine, and the Stanford SAIL keyboard. There's something to be
  said for having more mathematical symbols.

Really..? Wow..! I only ever saw pictures of the beast and I was never
really convinced it was for real.. :-)

 Some programs use a bigger character set.  MathCAD, for example,
 has a broader range of mathematical symbols on the input side than
 ASCII offers.  They're not decorative; MathCAD has different =
 symbols for assignment, algebraic equivalence, identity, and
 comparison.

Out of curiosity, I played a bit of APL lately, and I was amazed at how
quickly you get to learn the extra symbols and their location on the
keyboard. Had integrating the basic concepts of the language been that
easy, I would have been comfortably coding within a couple of hours.
I was also rather enchanted by the fact that the coding closely matched
my intentions. No overloading in this respect. Not that I'm an APL
advocate, but who knows what programming languages will look like in the
not-so-distant future.

 I've previously mentioned that Python suffers in a few places
 from unwanted overloading.  Using + for concatenation of
 strings, then extending that to vectors, resulted in undesirable
 semantics.  + on arrays from numpy, and on built-in vectors
 behave quite differently.  A dedicated concatenation operator
 would have avoided that mess.

And the worst part of it is that you get so used to it that you take
such matters for granted. Thanks for the eye-opener.

 C++ has worse problems, because it uses < and > as both brackets
 and operators.  This does horrible things to the syntax.

.. from a quite different perspective it may be worth noting that
practically all programming languages (not to mention the attached
documentation) are based on the English language. And interestingly
enough, most any software of note appears to have come out of cultures
where English is either the native language, or where the native
language is relatively close to English.. Northern Europe
mostly.. and, to no small extent, countries where English is
well-established as a universal second language, such as India. Always
struck me as odd that a country like Japan for instance, with all its
achievements in the industrial realm, never came up with one single
major piece of software.

cj


Problems of Symbol Congestion in Computer Languages

2011-02-16 Thread Xah Lee
might be interesting.

〈Problems of Symbol Congestion in Computer Languages (ASCII Jam;
Unicode; Fortress)〉
http://xahlee.org/comp/comp_lang_unicode.html

--
Problems of Symbol Congestion in Computer Languages (ASCII Jam;
Unicode; Fortress)

Xah Lee, 2011-02-05, 2011-02-15

The vast majority of computer languages use ASCII as their character
set. This means they jam a multitude of operators into about 20
symbols. Often, a symbol has multiple meanings depending on context.
Also, sequences of chars are used as single symbols as a workaround
for the lack of symbols. Even languages that use Unicode as their char
set (e.g. Java, XML) often still use the ~20 ASCII symbols for all
their operators. The only exceptions i know of are Mathematica,
Fortress, APL. This page gives some examples of problems created by
symbol congestion.

---
Symbol Congestion Workarounds


Multiple Meanings of a Symbol

Here are some common examples of a symbol that has multiple meanings
depending on context:

In Java, [ ] is a delimiter for arrays, also a delimiter for getting
an element of an array, also part of the syntax for declaring an
array type.

In Java and many other langs, ( ) is used for expression grouping,
also as delimiter for arguments of a function call, also as delimiters
for parameters of a function's declaration.

In Perl and many other langs, : is used as a separator in a ternary
expression e.g. (test ? yes : no), also as a namespace separator
(e.g. use Data::Dumper;).

In URL, / is used as path separators, but also as indicator of
protocol. e.g. http://example.org/comp/unicode.html

In Python and many others, < is used for the “less than” boolean
operator, but also as an alignment flag in its “format” method, also
as a delimiter of a named group in regex, and also as part of other
operators that are made of 2 chars, e.g.: <= << <<=.
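Those overlapping roles of < can all be checked in a few lines of Python; a quick sketch:

```python
import re

assert 1 < 2                                      # "less than" comparison
assert "{:<5}".format("ab") == "ab   "            # alignment flag in format()
match = re.match(r"(?P<word>\w+)", "hello world")
assert match and match.group("word") == "hello"   # named-group delimiter in regex
assert (1 << 3) == 8 and 2 <= 2                   # part of 2-char operators
```

Four unrelated meanings, all hanging off the same character.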


Examples of Multi-Char Operators

Here are some common examples of operators that are made of multiple
characters: || << == <= != ** =+ =* := ++ -- :: // /* (* …

---
Fortress & Unicode

The language designer Guy Steele recently gave a very interesting
talk. See: Guy Steele on Parallel Programming. In it, he showed code
snippets of his language Fortress, which freely uses Unicode as
operators.

For example, list delimiters are not the typical curly bracket {1,2,3}
or square bracket [1,2,3], but the unicode angle bracket ⟨1,2,3⟩.
(See: Matching Brackets in Unicode.) It also uses the circle plus ⊕ as
operator. (See: Math Symbols in Unicode.)

---
Problems of Symbol Congestion

I really appreciate such use of unicode. The tradition of sticking to
the 95 chars of 1960s ASCII is extremely limiting. It creates
complex problems manifested in:

* String Escape mechanism (C's backslash \n, \/, …, widely
adopted.)
* Complex delimiters for strings. (Python's triple quotes and
perl's variable delimiters q() q[] q{} m//, and heredoc. (See: Strings
in Perl and Python ◇ Heredoc mechanism in PHP and Perl.)
* Crazy leaning toothpicks syndrome, especially bad in emacs
regex.
* Complexities in character representation (See: Emacs's Key
Notations Explained (\r, ^M, C-m, RET, <return>, M-, meta) ◇ HTML
entities problems. See: HTML Entities, Ampersand, Unicode, Semantics.)
* URL Percent Encoding problems and complexities: Javascript
Encode URL, Escape String

All these problems occur because we are jamming so many meanings into
about 20 symbols in ASCII.
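The string-escape complexity is visible even in a tiny Python sample; a sketch contrasting doubled backslashes with a raw-string delimiter:

```python
import re

escaped = "\\d+\\.\\d+"   # ordinary string: every regex backslash doubled
raw = r"\d+\.\d+"         # raw-string delimiter avoids the doubling
assert escaped == raw     # both denote the same pattern text
assert re.fullmatch(raw, "3.14") is not None
```

The raw-string prefix is exactly the kind of extra delimiter mechanism the list above describes.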

See also:

* Computer Language Design: Strings Syntax
* HTML6: Your JSON and SXML Simplified

Most of today's languages do not support unicode in function or
variable names, so you can forget about using unicode in variable
names (e.g. α=3) or function names (e.g. “lambda” as “λ” or “function”
as “ƒ”), or defining your own operators (e.g. “⊕”).

However, there are a few languages i know that do support unicode in
function or variable names. Some of these allow you to define your own
operators. However, they may not allow unicode for the operator
symbol. See: Unicode Support in Ruby, Perl, Python, javascript, Java,
Emacs Lisp, Mathematica.
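For example, Python 3 accepts Unicode letters in identifiers, though it still offers no way to define a new operator symbol such as ⊕; a quick sketch:

```python
# Unicode letters are legal in Python 3 variable and function names...
α = 3
def ƒ(x):
    return x * α

assert ƒ(2) == 6
# ...but a user-defined operator symbol like ⊕ remains impossible.
```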

 Xah