30.9.2016, 19:11, Leonardo Boiko wrote:

> The Unicode codepoints are not intended as a place to store
> typographically variant glyphs (much like the Unicode "italic"
> characters aren't designed as a way of encoding italic faces).

There is no disagreement on this. What I was pointing at was that when using rich text or markup, it is complicated or impossible to get typographically correct glyphs used (even when they exist in the font), whereas using Unicode codepoints for subscript or superscript characters may achieve that in a much simpler way.
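
As a rough illustration of the difference (the HTML-style markup below is just one possible example, not anything prescribed):

  # Plain text: the superscript is carried by the character itself.
  plain = "x\u00B2"            # "x²" using U+00B2 SUPERSCRIPT TWO

  # Markup: an ordinary DIGIT TWO with formatting around it; whether the
  # result matches a real superscript glyph is up to the rendering system.
  marked_up = "x<sup>2</sup>"

  print(plain)
  print(marked_up)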

> The correct thing here is that the markup and the font-rendering
> systems *should* automatically work together to choose the proper
> face—as they already do with italics or optical sizes, and as they
> should do with true small-caps etc.

While waiting for this, we may need interim solutions (for a few decades, for example). By the way, font-rendering systems don’t even do italics the right way in all cases. They may silently use “fake italics” (algorithmically slanted letters). (I’m not suggesting the use of Unicode codepoints to deal with this.)

> I agree that our current systems are typographically atrocious and an
> abomination before the God of good taste, and I don't blame anyone for
> resorting to Unicode tricks to work around that.

I don’t think it’s a trick to use characters like SUPERSCRIPT TWO and SUPERSCRIPT THREE. The practical problem is that once you need other superscripts that cannot be (reliably) produced with similar codepoints, you will have to consider replacing SUPERSCRIPT TWO and SUPERSCRIPT THREE with DIGIT TWO and DIGIT THREE plus suitable markup or formatting, to avoid a stylistic mismatch. This isn’t as serious as it sounds. When that day comes, you can probably do a suitable global replace operation on your texts.
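
To make that concrete, here is a minimal sketch of such a replace operation in Python, assuming HTML-style <sup> markup as the target (which is of course just one possibility; the details depend on your documents):

  # Minimal sketch: replace the superscript characters with ordinary
  # digits wrapped in markup. The <sup> markup is an assumed target.
  SUPERSCRIPT_TO_MARKUP = {
      "\u00B2": "<sup>2</sup>",   # SUPERSCRIPT TWO   -> DIGIT TWO + markup
      "\u00B3": "<sup>3</sup>",   # SUPERSCRIPT THREE -> DIGIT THREE + markup
  }

  def replace_superscripts(text):
      for char, markup in SUPERSCRIPT_TO_MARKUP.items():
          text = text.replace(char, markup)
      return text

  print(replace_superscripts("E = mc\u00B2"))   # E = mc<sup>2</sup>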

Yucca
