The error may arise from a misunderstanding of the reference, on the first
page of chapter 1 of the book, to a 16-bit form and an 8-bit form and to
"using a 16-bit encoding." It's also hard to wrap one's head around the
idea that Unicode isn't just an encoding until one does extensive reading
on the website (or in the book).
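
To make that concrete: the same abstract code point takes different shapes
in the 16-bit and 8-bit encoding forms. A minimal C++ sketch (the code-unit
values below are the standard UTF-16 and UTF-8 encodings of U+00E9; the
program itself is only illustrative):

    #include <cstdio>

    int main() {
        // One abstract character, U+00E9 (LATIN SMALL LETTER E WITH ACUTE),
        // in two of Unicode's encoding forms.
        unsigned short utf16[] = { 0x00E9 };     // 16-bit form: one code unit
        unsigned char  utf8[]  = { 0xC3, 0xA9 }; // 8-bit form: two code units

        std::printf("UTF-16: %04X\n", utf16[0]);
        std::printf("UTF-8:  %02X %02X\n", utf8[0], utf8[1]);
        return 0;
    }

Both arrays denote the same character; "Unicode" names the repertoire and
its properties, not either byte layout.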
Patrick Rourke
----- Original Message -----
From: <[EMAIL PROTECTED]>
To: "Unicode List" <[EMAIL PROTECTED]>
Sent: Tuesday, February 20, 2001 8:37 AM
Subject: Re: Perception that Unicode is 16-bit (was: Re: Surrogate space in Unicode)
>
> On 02/19/2001 08:05:49 PM David Starner wrote:
>
> >With the Unicode-related functions outgrowing Prague, I moved them into a
> >new library called 'Babylon'. It will provide all the functionality defined
> >in the Unicode standard (it is not Unicode- but ISO 10646-compliant, as it
> >uses 32-bit wide characters internally) and is written in C++.
>
> Eh? Unicode has no aversion to either a 32-bit encoding form (UTF-32 - see
> UTR#19 or PDUTR#27) or to C++.
>
> - Peter
>
>
> ---------------------------------------------------------------------------
> Peter Constable
>
> Non-Roman Script Initiative, SIL International
> 7500 W. Camp Wisdom Rd., Dallas, TX 75236, USA
> Tel: +1 972 708 7485
> E-mail: <[EMAIL PROTECTED]>
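
To round out the UTF-32 point in the reply above, a minimal C++ sketch (the
surrogate arithmetic is fixed by the standard; the rest is illustrative) of
why a 32-bit encoding form needs no surrogate pairs:

    #include <cstdio>

    int main() {
        // U+10000, the first supplementary-plane code point.
        unsigned long cp = 0x10000UL;

        // UTF-32: every code point is a single 32-bit code unit.
        unsigned long utf32 = cp;

        // UTF-16: supplementary code points need a high/low surrogate pair.
        unsigned long  v  = cp - 0x10000UL;
        unsigned short hi = (unsigned short)(0xD800u + (v >> 10));
        unsigned short lo = (unsigned short)(0xDC00u + (v & 0x3FFu));

        std::printf("UTF-32 unit: %08lX\n", utf32);
        std::printf("UTF-16 pair: %04X %04X\n", hi, lo);
        return 0;
    }

For U+10000 this prints the pair D800 DC00, the "surrogate space" the
thread's subject line refers to; in UTF-32 the same character is simply
00010000.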