Michael (michka) Kaplan wrote:
> Is not
> http://www.hclrss.demon.co.uk/unicode/braille_patterns.html
> or alternately
> http://charts.unicode.org/Web/U2800.html
> already covering this?

No. These are at most the building blocks of braille. A better parallel
would be to think of them as "presentation glyphs" for braille. (But I think
the main reason these patterns are in Unicode is to encode runs of
braille-looking characters in didactic texts for *sighted* people.)

AFAIK, braille conversion is a relatively complex process that involves
"escape sequences" (e.g., for distinguishing case, or numbers from letters),
converting punctuation (e.g., braille has a single parenthesis sign for both
opening and closing), and even major spelling changes (e.g., Chinese uses a
phonetic script, English is heavily abbreviated, Japanese uses a single
kana series, Hebrew braille runs left to right, etc.).
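
To make the "escape sequences" point concrete, here is a minimal sketch of
uncontracted (Grade 1) English braille, limited to the letters a-j and the
digits; the capital and number indicators are the escapes, and contractions,
punctuation, and the rest of the alphabet are left out. Real transcribers
are table-driven and far more involved.

    # Minimal sketch: uncontracted English braille for a-j and digits only.
    BASE = 0x2800                       # Unicode "Braille Patterns" block
    LETTERS = {                         # dot patterns for a-j
        'a': 0x01, 'b': 0x03, 'c': 0x09, 'd': 0x19, 'e': 0x11,
        'f': 0x0B, 'g': 0x1B, 'h': 0x13, 'i': 0x0A, 'j': 0x1A,
    }
    DIGITS = dict(zip('1234567890', 'abcdefghij'))  # digits reuse the a-j cells
    CAPITAL_SIGN = chr(BASE + 0x20)     # dot 6: "next letter is uppercase"
    NUMBER_SIGN = chr(BASE + 0x3C)      # dots 3-4-5-6: "following cells are digits"

    def transcribe(text):
        out, in_number = [], False
        for ch in text:
            if ch.isdigit():
                if not in_number:       # escape: switch into number mode
                    out.append(NUMBER_SIGN)
                    in_number = True
                out.append(chr(BASE + LETTERS[DIGITS[ch]]))
            else:
                in_number = False
                if ch.isupper():        # escape: capitalize the next cell
                    out.append(CAPITAL_SIGN)
                out.append(chr(BASE + LETTERS.get(ch.lower(), 0)))
        return ''.join(out)

    print(transcribe("Cab 42"))         # -> ⠠⠉⠁⠃⠀⠼⠙⠃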

> It does not make sense to encode every existing code point twice (which is
> what I think you were implying in calling this topic fascinating? <g>).

No, no. I was not talking about making any change to Unicode. Blind computer
users normally work with standard character encodings; the conversion to
braille dots is done on the fly by the computer (just like the conversion
to colored pixels for sighted people).

The background of the question is that current braille software can already
display many national text encodings as dots, using the corresponding
national braille conventions. So I was wondering whether anybody is thinking
of stitching all these local conventions together to cover Unicode
(presumably using "escape sequences" when the script or language changes).

(Probably it was just a silly question. F8-)

_ Marco
