On Apr 23, 2007, at 10:52 AM, Arnaud Nicolet wrote:
> On Apr 23, 2007, at 6:43 PM, [EMAIL PROTECTED] wrote:
>
>> No, legacy encodings were defined all over the world.  Unicode was
>> defined by an international consortium.
>
> Thank you.
> I wonder, then, why those encodings also include the ASCII part.
> Should not the ASCII be an independent encoding?


There is a good overview of the evolution of ASCII and ASCII-based  
encodings at Wikipedia under "ASCII".

As I remember (and I am getting old, so take it with a grain of salt):

In the beginning, everyone who decided to build a machine (computer
or terminal) created their own encoding.  And countries with other
languages created their own encodings to fit their needs.  Being
old and from the U.S., the only other codes that immediately come to
mind are EBCDIC and ANSEL.  But there were others.

As computers began to talk to each other, it became clear that if IBM  
used one encoding and DEC used another and Univac still another,  
things could get complicated.  A standards committee was called, and  
ASCII was born (in the U.S.).

Like it or not, with big businesses pushing and government
incentives, the U.S. became the center of the computing world.
Computing spread from the U.S. outward.  Since the U.S. already had a
standard encoding, it was easy to use it as a base for encodings in
other countries, adding needed characters in the unused (and
non-existent) "high-ASCII" range.

The birth of microcomputers (as they were called back in the day)
created more problems with ASCII.  Most agreed on ASCII as a base,
but used the undefined higher characters for whatever seemed
appropriate to their intended audience.  "High-ASCII" got filled with
space ships, smiley faces, Greek letters, graphic borders, etc.
WordPerfect got very creative, allowing you to switch between sets of
extended ASCII encodings depending on your need.

But just as U.S. communications necessitated the creation of ASCII,
worldwide communications and file sharing between platforms soon
demanded standards.  Some advocated dumping all existing standards
and creating a new universal standard -- a 16- or 32-bit encoding
capable of representing all characters from all languages, with room
to grow.  In the end, it was decided to build on existing standards
rather than replace them, and a scalable encoding scheme was created
that lets us keep our good old 7-bit ASCII as a base and build on
it.  Unicode was born.
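
The practical payoff of that scalable scheme (UTF-8) is easy to see.
A rough Python sketch, with made-up sample strings, just to illustrate:

    # Pure ASCII text is byte-for-byte identical in ASCII and UTF-8.
    assert 'Hello'.encode('ascii') == 'Hello'.encode('utf-8')

    # Everything beyond 7-bit ASCII becomes a multi-byte sequence in
    # which every byte has the high bit set, so old ASCII data passes
    # through untouched.
    print('é'.encode('utf-8'))   # b'\xc3\xa9'      (2 bytes)
    print('日'.encode('utf-8'))  # b'\xe6\x97\xa5'  (3 bytes, CJK)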

Sadly, the Windows, Macintosh, and PostScript character sets were all
created (independently) before the international standard, so they
can still give us problems.  Sometimes I wonder if the people who
wanted to dump everything and start fresh were right.  The transition
would have been rocky, but encoding problems would by now be a thing
of the past.

Kirk

-----------------------------------------------
REALbasic Professional 2007r1
MacBook Core Duo, Mac OS X 10.4.9


