On 8/14/2019 2:05 AM, James Kass via Unicode wrote:
> This presumes that the premise of user communities feeling strongly about the unacceptable aspect of the variants is valid.  Since it has been reported and nothing seems to be happening, perhaps the casual users aren't terribly concerned.  It's also possible that the various user communities have already set up their systems to handle things acceptably by installing appropriate fonts.

This is always a good question.

Empirically, it has been observed that some distinctions claimed by users, standards developers, or implementers are de facto not honored by type designers (or by users selecting fonts) as long as the native text contains no minimal pairs.

For example, some Latin fonts drop the dot on the lowercase i for stylistic reasons (or designers use a dotless i in highly designed texts, like book covers, logos, etc.). That's usually not a problem for ordinary users of monolingual texts in, say, English; even though everyone agrees that the lowercase i is normally dotted, the absence isn't noticed by most, and is tolerated even by those who do notice it.

However, as soon as a user community sees a particular variant as signalling their group identity, they will be very vocal about it - even, interestingly enough, in cases where de-facto use (e.g. via font selection, and not forced by implementation defaults) doesn't match that preference. As I said, we've seen this in the past for some features in some languages.

Now, which features become strongly identified with group identity is something that is subject to change over time; this makes it impossible to guarantee both absolute stability and perfect compatibility, especially if a combining mark that is used in decompositions needs to be disunified because the range of its shapes changes from being stylistic to normative.
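To make the stability constraint concrete, here is a minimal Python sketch using the standard unicodedata module: canonical decompositions like u-umlaut = u + combining diaeresis are frozen by Unicode's normalization stability policy, which is exactly why disunifying a combining mark after the fact is so painful.

```python
import unicodedata

# U+00FC (ü) canonically decomposes to u + U+0308 COMBINING DIAERESIS.
# The normalization stability policy guarantees this mapping never
# changes, so normalized text round-trips forever:
assert unicodedata.normalize("NFD", "\u00fc") == "u\u0308"
assert unicodedata.normalize("NFC", "u\u0308") == "\u00fc"

# The frozen decomposition data for the precomposed character:
print(unicodedata.decomposition("\u00fc"))  # "0075 0308"
```

If U+0308 were ever split into two marks with different normative shapes, every decomposition that mentions it would be ambiguous, which is the compatibility cost alluded to above.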

Before Unicode, with character sets limited to local use, you couldn't create minimal pairs (except where the variation was part of your language, like Turkish i with/without dot). So, if a font deviated and pushed the stylistic envelope, the non-preferred form, if used, would still necessarily refer to the local character; there was no way it could mean anything else. With Unicode, that has changed: instead of user communities treating this as a typographic issue (exclusive use of a preferred font), which is decentralized to document authors (and perhaps font vendors), it becomes a character coding issue that is highly visible and centralized.
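The Turkish case is the canonical minimal pair, and it is visible directly in the standard's data. A small Python sketch (again using the stdlib unicodedata module) shows the four distinct characters and why the default case mappings aren't enough:

```python
import unicodedata

# Turkish treats dotted and dotless i as distinct letters, so Unicode
# encodes four separate characters rather than one stylistic variant:
assert unicodedata.name("\u0069") == "LATIN SMALL LETTER I"
assert unicodedata.name("\u0131") == "LATIN SMALL LETTER DOTLESS I"
assert unicodedata.name("\u0049") == "LATIN CAPITAL LETTER I"
assert unicodedata.name("\u0130") == "LATIN CAPITAL LETTER I WITH DOT ABOVE"

# Default (non-Turkish) case mappings pair i <-> I; the Turkish pairs
# i <-> İ and ı <-> I need locale-aware tailoring, which Python's
# built-in str methods do not apply:
assert "\u0131".upper() == "I"        # ı uppercases to plain I
assert "\u0130".lower() == "i\u0307"  # İ lowercases to i + combining dot
```

Because the distinction is phonemic for Turkish users, no font trick can paper over it; the encoding itself has to carry it.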

That in turn can lead to the issue becoming politicized, not unlike some grammar debates, where the supposedly "correct" form is far from universally agreed on in practice.

A./
