On 6 Apr 2017, at 16:05, Mark Davis ☕️ <m...@macchiato.com> wrote:

>> I just get frustrated when everyone including the veterans seems to forget 
>> every bit of precedent that we have for the useful encoding of characters.
> 
> Nobody's forgetting anything. Simply because people disagree with you 
> doesn't mean they are forgetful or stupid. One could just as well respond 
> that you are forgetting that Unicode is not a glyph standard. Merely because 
> a character has multiple shapes is not grounds for disunifying it.

Ignoring reasonable precedent does not make the UTC seem reasonable. In the 
case of Deseret, the suggestion that the characters 𐐅/𐐋/𐐃/𐐉 with a stroke 
derived from 𐐆 are glyph variants of one another simply makes no sense at all. 
We have honed our understanding of writing systems over many years, and as for 
saying “Oh, 𐐉-with-stroke and 𐐃-with-stroke are variant shapes of the same 
thing”: anyone can see that this is not true.

The vexing thing is that one can never rely on consistency in the UTC’s 
approach to proposals. I have discussed this with other successful and 
prolific proposal writers. It’s always a coin toss as to how a proposal will 
be received.

The recent proposal to add attested capital letters for ʂ and ʐ is a perfect 
example. We have seen before some desire for evidence of casing pairs (though 
often it has not been sought). We have never before seen evidence for casing 
pairs thrown out. Case, of course, is a function of the Latin script, just as 
it is of Greek and Cyrillic and Armenian and Cherokee and both Georgian 
scripts and others. The UTC’s refusal to encode attested capitals for ʂ and ʐ 
simply makes no sense.
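
A minimal Python sketch makes the gap concrete. Assuming an interpreter whose 
unicodedata module carries the current character tables (which predate any 
such additions), upper() is a no-op for both letters:

    import unicodedata

    # U+0282 LATIN SMALL LETTER S WITH HOOK and U+0290 LATIN SMALL
    # LETTER Z WITH RETROFLEX HOOK have no uppercase mappings in
    # UnicodeData.txt, so upper() returns each letter unchanged.
    for ch in ("\u0282", "\u0290"):
        print(unicodedata.name(ch), "upper() ->", repr(ch.upper()))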

Your statement “Merely because a character has multiple shapes is not grounds 
for disunifying it” suggests an underlying view that “everything is already 
encoded and additions are disunifications”. I do not subscribe to this view.

Michael Everson
