If you need to roundtrip 8859-1 through ASCII, you need to use some kind
of escape mechanism inside the ASCII to represent characters that have
their high bit equal to one. A common simple escape is to use the
backslash. So you could represent the codes as \'xx, where xx is the
hexadecimal code.
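As a rough sketch of that escape scheme (assuming the backslash itself is also escaped so the mapping stays reversible — that detail is my addition, not stated in the mail):

```python
def to_ascii(text):
    """Escape a Latin-1 string into pure ASCII using \\'xx escapes."""
    out = []
    for b in text.encode('latin-1'):
        # Escape high-bit bytes, and the backslash itself so decoding is unambiguous.
        if b >= 0x80 or b == 0x5C:
            out.append("\\'%02x" % b)
        else:
            out.append(chr(b))
    return ''.join(out)

def from_ascii(s):
    """Reverse the escaping, recovering the original Latin-1 string."""
    buf = bytearray()
    i = 0
    while i < len(s):
        if s[i] == '\\' and i + 3 < len(s) and s[i + 1] == "'":
            buf.append(int(s[i + 2:i + 4], 16))
            i += 4
        else:
            buf.append(ord(s[i]))
            i += 1
    return buf.decode('latin-1')
```

With this, a string like "café" becomes "caf\'e9", which survives any ASCII-only channel and round-trips back losslessly.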
cls raj wrote:
> We have a specific requirement of converting Latin-1 character set (ISO
> 8859-1) text to ASCII character set (a set of only 128 characters).
8859-1 is a superset of ASCII (of US-ASCII, to be precise, but you seem to be using
that).
US-ASCII uses byte values 0..127 (7 bits),
Well, the good news is that ASCII is a proper subset of Latin-1. By that,
I mean that every ASCII character is also a Latin-1 character, with the
exact same bit encoding in an 8-bit byte (an "octet"). Of course, ASCII is
a 7-bit encoding (coded character set), but it is very frequently
repre
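The subset claim above is easy to verify directly; a quick check (my illustration, not part of the original mail):

```python
# Every byte value 0..127 decodes to the same character under
# US-ASCII and Latin-1, i.e. ASCII is a proper subset of Latin-1.
for b in range(128):
    assert bytes([b]).decode('ascii') == bytes([b]).decode('latin-1')

# Latin-1 additionally assigns a character to every byte 128..255,
# so all 256 byte values are valid Latin-1, but only the low half is ASCII.
high = bytes(range(128, 256)).decode('latin-1')
assert all(ord(c) >= 128 for c in high)
```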
> We have a specific requirement of converting Latin-1 character set (ISO
> 8859-1) text to ASCII character set (a set of only 128 characters). Is
> there any special set of utilities available or service providers who can do
> that type of job.
Well, if you only want exact character conve
> We have a specific requirement of converting Latin-1 character set (ISO
> 8859-1) text to ASCII character set (a set of only 128 characters). Is
> there any special set of utilities available or service providers who can do
> that type of job.
[I am assuming that your "ascii" ta
We have a specific requirement of converting Latin-1 character set (ISO
8859-1) text to ASCII character set (a set of only 128 characters). Is
there any special set of utilities available or service providers who can do
that type of job.
It is kind of critical for my current project, I wou
See:
XML Blueberry Requirements
W3C Working Draft 20 June 2001
http://www.w3.org/TR/xml-blueberry-req
| 1. Introduction
|
| The W3C's XML 1.0 Recommendation [XML] was first issued in 1998, and
| despite the issuance of many errata culminating in a Second Edition of
| 2001, has remained (
From: "Jianping Yang" <[EMAIL PROTECTED]>
> "Carl W. Brown" wrote:
> > If there are no surrogates in the database, is there any reason that I can
> > not change the database from UTF8 to AL32UTF8?
>
> You can change the database from UTF8 to AL32UTF8 in this case. Also you can
> use Oracle databa
At 10:50 AM 6/19/2001, Jianping Yang wrote:
>"Carl W. Brown" wrote:
>
> >
> >
> > If there are no surrogates in the database, is there any reason that I can
> > not change the database from UTF8 to AL32UTF8?
>
>You can change the database from UTF8 to AL32UTF8 in this case. Also you can
>use Oracl
Hello Geetika,
> I have to parse the UTF8 characters, so that they can be accepted by
> the C++ code.
I wonder a little what you mean exactly by "parse" them so they can be
"accepted" by the code. In any case, if you want sample code for UTF
conversions, please try:
http://www.unicode