In a message dated 2001-09-24 11:16:29 Pacific Daylight Time, 
[EMAIL PROTECTED] writes:
 
> For many applications UTF-16 is a good compromise between a large code size
> and processing efficiency.  As this industry changes, the decision points
> change.  Then there is always the great argument that many applications that
> were written for UCS-2 are much easier to convert to UTF-16.

This last argument is especially compelling when you consider the large 
number of people (programmers and others) who still think "Unicode" means 
"16-bit" and whose entire concept of supporting Unicode is using WCHARs and 
letting library string functions do the conversions automagically.
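To make the pitfall concrete, here is a small illustration (my own sketch, not part of the original message): code that assumes "one 16-bit unit = one character," as UCS-2 allowed, miscounts any character outside the Basic Multilingual Plane, because UTF-16 encodes it as a surrogate pair of two 16-bit code units.

```python
# U+1D11E MUSICAL SYMBOL G CLEF lies outside the BMP, so UCS-2
# cannot represent it; UTF-16 uses a surrogate pair (two 16-bit units).
ch = "\U0001D11E"
utf16 = ch.encode("utf-16-be")

code_points = len(ch)          # 1 character (code point)
code_units = len(utf16) // 2   # 2 UTF-16 code units (4 bytes)

# A UCS-2-era length function counting 16-bit WCHARs would report 2
# "characters" here, not 1.
print(code_points)  # 1
print(code_units)   # 2
```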

-Doug Ewell
 Fullerton, California
