I think Jill has a point.

Kenneth Whistler wrote:

> Basically, thousands of implementations, for decades now,
> have been using ASCII 0x30..0x39, 0x41..0x46, 0x61..0x66 to
> implement hexadecimal numbers. That is also specified in
> more than a few programming language standards and other
> standards. Those characters map to Unicode U+0030..U+0039,
> U+0041..U+0046, U+0061..U+0066.

That's not a good reason for deciding not to implement something
in the future.
If everybody thought like that, there would never have been a
Unicode.

Besides, your example is proof that the implementation can change;
has to change. Where applications could use 8-bit characters to
store hex digits in the old days, they now have to use 16-bit
characters to keep up with Unicode...
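
For what it's worth, here is a minimal Python sketch (the function
name is my own, purely for illustration) of the mapping Kenneth
describes: a hex digit is identified by nothing more than the code
point ranges he lists, and those ranges are the same whether the
text is stored as 8-bit ASCII or as Unicode code points.

    def hex_digit_value(ch):
        """Value (0..15) of one hex digit, using only the code
        point ranges quoted above; anything else is rejected."""
        cp = ord(ch)
        if 0x30 <= cp <= 0x39:        # U+0030..U+0039, '0'..'9'
            return cp - 0x30
        if 0x41 <= cp <= 0x46:        # U+0041..U+0046, 'A'..'F'
            return cp - 0x41 + 10
        if 0x61 <= cp <= 0x66:        # U+0061..U+0066, 'a'..'f'
            return cp - 0x61 + 10
        raise ValueError("not a hexadecimal digit: %r" % ch)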

And Jim Allen wrote:

> > What I mean is, it seems (to me) that there is a HUGE semantic
> > difference between the hexadecimal digit thirteen, and the
> > letter D.
>
> There is also a HUGE semantic difference between D meaning the
> letter D and Roman numeral D meaning 500.

and those have different code points! So you're saying Jill is
right, right?

You seem to define "meaning" differently from what we're talking
about here.
In the abbreviation "mm", the two m's have different meanings: the
first is "milli" and the second is "meter". No one is asking to
encode those two letters with different code points!
What we're talking about is different general categories, different
numeric values and even, oddly enough, different BiDi categories.
Doesn't that justify creating new characters?
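
To make that concrete, here is a throwaway check of exactly those
three properties with Python's unicodedata module, using ROMAN
NUMERAL FIVE HUNDRED (U+216E) alongside an ordinary digit and
letter, since the proposed hex digits obviously don't exist yet to
query:

    import unicodedata as ud

    # DIGIT FIVE, LATIN CAPITAL LETTER D, ROMAN NUMERAL FIVE HUNDRED
    for ch in ("5", "D", "\u216E"):
        print(ud.name(ch),
              ud.category(ch),        # general category: Nd, Lu, Nl
              ud.numeric(ch, None),   # numeric value:    5.0, None, 500.0
              ud.bidirectional(ch))   # BiDi class:       EN, L, L

D and Roman numeral D look identical, yet they differ in general
category and numeric value; that is the kind of distinction I mean,
not "milli" versus "meter".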

On a related note, can anybody tell me why U+212A, the Kelvin
sign, was put in the Unicode character set?
I have never seen any acknowledgement of this symbol anywhere in
the real world. (That is, using U+212A instead of U+004B.)
And even the UCD calls it a letter rather than a symbol. I'd have
expected that if it was put in for completeness, to complement the
degree Fahrenheit and degree Celsius signs, it would have had the
same category as those two?
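
Just to spell out what puzzles me, a quick sketch with the same
unicodedata module:

    import unicodedata as ud

    # KELVIN SIGN, DEGREE CELSIUS, DEGREE FAHRENHEIT
    for cp in (0x212A, 0x2103, 0x2109):
        ch = chr(cp)
        print("U+%04X %s  %s" % (cp, ud.category(ch), ud.name(ch)))

    # U+212A is category Lu (uppercase letter); the degree signs are
    # So (symbol). NFKC even folds the Kelvin sign into plain K:
    print(ud.normalize("NFKC", "\u212A") == "K")   # True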

Pim Blokland

