On 4/3/2017 5:42 AM, Michael Everson wrote:

Read to the end.
On 2 Apr 2017, at 19:43, Asmus Freytag <[email protected]> wrote:

It's a matter of perspective.

Higher-level semantic constructs are encoded in writing (or graphic notation), 
and you can see the individual marks, signs, letters and symbols as the element 
of this encoding. However, how strongly any of these marks, signs, letters and 
symbols are associated with a specific semantic, and how fixed that association 
is, depends on convention.
Asmus, I don’t follow this abstraction of yours. The proposal is simple. The 
proposal works when OpenType substitutions of “piece” plus “VS” are in the font 
and when an app can display such a substitution.
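To illustrate the mechanism at issue (a sketch only — which selector would pair with which background is hypothetical here, not taken from the proposal), a board cell in plain text would simply be a chess character followed by a variation selector, which a capable font maps to a square-sized cell glyph via an OpenType substitution:

```python
# Hypothetical sketch of the piece-plus-VS encoding; the selector
# assignments below are illustrative, not the proposal's actual ones.
VS1 = "\uFE00"  # VARIATION SELECTOR-1: say, piece on a white square
VS2 = "\uFE01"  # VARIATION SELECTOR-2: say, piece on a black square

WHITE_KING = "\u2654"   # WHITE CHESS KING
BLACK_QUEEN = "\u265B"  # BLACK CHESS QUEEN

# A board cell is just <piece><selector> in plain text; a font carrying
# the substitution renders it as a board-cell glyph, while any other
# renderer falls back to the bare piece.
cell = WHITE_KING + VS1
print(len(cell))  # 2 code points: base character plus selector
```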
For example, "left arrow" has a very loose association with a broad range of 
concepts that somehow relate to direction.
In contrast, "integral sign" is rarely associated with any concept outside 
calculus.
And chess piece characters are symbols which mean chess pieces.

It's tempting, then, to assume that the character for "integral sign" somehow directly 
represents the semantic of "integration" --- except it doesn't.

The same indirection is at play here.
This is pure rhetoric, Asmus. It addresses the problem in no way.
Actually it does. I'm amazed that you don't see the connection.

My dislike for using variation sequences in the way Michael appears to advocate 
is based on a different reason:
This is almost funny. Ordinarily I dislike variation sequences because I 
consider them pseudo-encoding.

the oft-stated fact that variation selectors may be ignored.
I’m aware of this. I may be wrong, but I believe you advocated for the encoding 
of variation sequences for mathematics purposes.

Yes, for those cases where the differences are known to not carry meaning, but where duplicating all fonts or duplicating the characters would have been the wrong solution to allow support for both conventions (e.g. upright vs. slanted integral signs, details of relational operator design, etc.).


If they are, any plain text that depends on the contrasting use of white and 
black chess backgrounds will become meaningless gibberish.
This is untrue. Did you not read the proposal? Look again at Figure 3. In the 
left-hand column, the top example is only one of the several ASCII-based 
ways that chess fonts represent chessboards today (without any Unicode chess 
characters at all); it is legible only if that particular font is loaded. The 
middle example in the same column is not very good looking. But it is stable, 
parseable, exchangeable data which gives unique tokens for the empty squares in 
two colours and which contains the chess characters. It’s not “meaningless 
gibberish” and it’s not even very difficult to read. Same for the bottom 
example, which has been force-justified to facilitate legibility; while that 
font has visible glyphs for the variation selectors, it needn’t.

In these cases, explicit encoding would better cover what is desired: a 
reliable way to mark a distinction between different symbols (the two bishops 
are separate symbols, that also happen to express distinct, though related 
concepts -- it is not a single symbol with some ignorable attributes).
Well, Asmus, if by “explicit encoding” you mean “add more chess characters”, 
this would require the trebling of the number of basic chess characters from 12 to 
36. You couldn’t get away with adding just six chesspieces-on-black because then 
fonts would be forced to draw all the chesspieces-on-white with the same em-square 
metrics needed to produce chessboards. But that would mean that nobody could use the 
ordinary chess pieces as just symbols in plain text (as seen in Figures 6 and 8). I 
do not believe that burdening chess users with having to use different fonts for 
in-text characters on the one hand and board-layout on the other is a good idea, 
particularly when both forms of presentation are the norm in chess-problem 
publishing.

Further, it would delay implementation of a chessboard solution till the summer 
of 2019 for no benefit, since the proposal here is simple to implement with 
nothing more than care on the part of the font designer.

And when in the past encoding pieces-on-black has been suggested, the answer 
has been: no, use a higher-level protocol.

This proposal is a robust and simple higher-level protocol. It enables the 
preparation of parseable chessboards without having to add characters, and 
without the problem of having pieces-for-use-in-text looking nearly identical 
to pieces-for-use-on-white-squares.

Now, for the case of selecting the chess-board cell dimensions, I do not have 
the same objection to the use of variation selectors. If the variation 
selectors get stripped, the text may require manual formatting to look correct, 
but it will still contain the correct symbols (and applying the chosen 
convention, you will be able to know which bishop is meant).
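The fallback behaviour described here can be sketched in a few lines: stripping the variation selectors (U+FE00..U+FE0F) from such a string loses the layout hint but keeps every chess symbol intact. The pairing of selectors with pieces below is hypothetical.

```python
# Sketch: removing variation selectors leaves the chess symbols themselves,
# which is what a selector-ignorant process effectively does.
VARIATION_SELECTORS = {chr(cp) for cp in range(0xFE00, 0xFE10)}  # VS1..VS16

def strip_selectors(text: str) -> str:
    """Drop VS1..VS16 from a string, keeping all other characters."""
    return "".join(ch for ch in text if ch not in VARIATION_SELECTORS)

# Hypothetical fragment: king and queen, each tagged with a selector.
fragment = "\u2654\uFE00\u265B\uFE01"
print(strip_selectors(fragment))  # "\u2654\u265B": the pieces survive
```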

That's much closer to the way variation selectors are intended to be used.
What? You are very unclear here. Are you saying that the empty white and black 
squares should use a VS but the chess pieces should not? That makes no sense to 
me at all.

I'm saying that perhaps it would be appropriate to select em-square glyph variants via a variation selector. That seems a clear-cut glyph *variation* to me. (If this variation is ignored, then the text looks bad, but in a way that is similar to selecting the wrong font - which is a rule-of-thumb way of evaluating whether variation selectors are appropriate.)

The distinction between white/black background might be of a different nature. If you have arranged everything in a grid with the correct metrics, then the color of the background is perhaps redundant, given that there is a uniform convention for it.

If the characters are ever used outside a full grid, however, that assumption fails, and it will not be possible to restore the intended meaning if the variation selectors are missing. That's a warning flag that they may not be appropriate for that use.

That's all.
A./

Michael Everson

