> Well, there are no 'unclear goals': the general solution is *exactly*
> what I was talking about all the time, and what you need for any entry
> in the adjustment database: A mapping from input Unicode characters to
> glyph indices based on the GSUB + cmap tables and not on cmap alone.

Right now, the only cases I am aware of where the GSUB table is helpful
for the purposes of this project are handling glyph alternates and
combining characters.  Those would be one-to-one mappings and
many-to-one mappings, respectively.  Would this general solution involve
other kinds of GSUB mappings?  If so, it opens up edge cases such as: if
a glyph on the "many" side of a many-to-one mapping needs a vertical
separation adjustment, does the resulting glyph need it too?  This could
be answered quickly by looking at the specific characters involved, but
how would I answer this question in general?
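
Just so we're talking about the same thing, here is a rough sketch of
the kind of entry I understand such a GSUB + cmap mapping would contain
(the type names and the fixed-size input array are placeholders for
illustration, not anything from the actual code):

#include <stdint.h>
#include <stddef.h>

typedef uint32_t  Codepoint;
typedef uint16_t  GlyphIndex;

/* One entry of the mapping: a sequence of input codepoints (length 1
   for plain cmap lookups and one-to-one alternates, length > 1 for
   many-to-one substitutions such as base + combining mark), mapped to
   the glyph index the substitutions finally produce.                 */
typedef struct
{
  Codepoint   input[4];   /* input codepoint sequence   */
  size_t      input_len;  /* number of codepoints used  */
  GlyphIndex  output;     /* resulting glyph index      */
} Mapping_Entry;
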
Even sticking to just many-to-one and one-to-one mappings, the
adjustment database must make assumptions specific to the characters it
is dealing with.  In the case of combining characters, a separate
database table is required, because the existing table is a list of
Unicode characters and the actions that should be applied to them, while
a glyph resulting from a combining character might not correspond to a
Unicode character at all.  Even if it did, listing every character that
could result from a combining character would be inefficient.  Instead,
only a table with a few entries is needed: the combining character's
codepoint and the action that should be applied.  This is something I
started on before this conversation, and it is an example of how the use
case affects the structure of the database.
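
Concretely, such a table could look roughly like this (a minimal sketch;
the enum values and field names are hypothetical, not the project's
actual identifiers):

#include <stdint.h>

/* Placeholder action identifiers; the real database would reuse
   whatever actions are already defined for the existing table.  */
typedef enum
{
  ACTION_NONE,
  ACTION_ADJUST_VERTICAL_SEPARATION
} Adjustment_Action;

/* One entry per combining character rather than per resulting glyph. */
typedef struct
{
  uint32_t           combining_codepoint;  /* e.g. 0x0303, combining tilde */
  Adjustment_Action  action;               /* applied to glyphs composed
                                              with this mark               */
} Combining_Entry;

static const Combining_Entry  combining_table[] =
{
  { 0x0303, ACTION_ADJUST_VERTICAL_SEPARATION },  /* combining tilde */
  { 0x0301, ACTION_ADJUST_VERTICAL_SEPARATION },  /* combining acute */
};

A handful of entries like these would cover every precomposed glyph
reachable from those combining characters, instead of listing each
resulting character individually.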

Without knowing which future use cases a generic solution is supposed to
make easier to implement, I don't know what flavor of generic is
required.

As for the tilde correction, I'll try doing it after the grid fitting,
as you recommended.
