You need the metrics of the original fonts as well as basic style info
(serif/sans, monospaced/proportional, weight, et cetera), and you need
to create a mapping from each glyph of each original face to a
substitute glyph -- the map can be a global, pre-computed const table.
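A minimal sketch of what such a table might look like in C -- the
struct, field names, and sample entries are all hypothetical; in
practice a tool would generate the entries from the measured metrics:

    /* Hypothetical pre-computed substitution table; a real one
     * would be generated by a tool from the measured metrics. */
    typedef struct {
        unsigned  orig_face;   /* index of the original face    */
        unsigned  orig_glyph;  /* glyph index in that face      */
        unsigned  sub_face;    /* index of the substitute face  */
        unsigned  sub_glyph;   /* glyph index in the substitute */
        double    scale_x;     /* horizontal scale to apply     */
        double    scale_y;     /* vertical scale to apply       */
    } GlyphSubst;

    static const GlyphSubst subst_table[] = {
        /* face, glyph, face, glyph,  sx,    sy   */
        {  0,    36,    2,    41,     0.98,  1.02 },
        {  0,    37,    2,    42,     0.98,  1.02 },
        /* ... generated entries ... */
    };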

When creating that map, you may find that many substitute glyphs can
share a single scaling of a single substitute face, depending on how
precisely you want to match.
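One way to realize that sharing is to round each glyph's ideal scale
to a fixed step, so that near-identical scalings collapse to one
instance of the substitute face; a hypothetical helper (the 0.01 step
is an arbitrary choice):

    /* Quantize a scale factor to a fixed step so that glyphs whose
     * ideal scales differ by less than the step share one scaled
     * instance of the substitute face.  The 0.01 step is an
     * arbitrary example; shrink it to match more precisely. */
    static double quantize_scale(double s)
    {
        const double step = 0.01;
        return step * (double)(long)(s / step + 0.5);
    }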

You will need a machine with access to each original font to create
the map, but once it is created it can be used without the original
files.
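Gathering that data is straightforward with FreeType itself; a sketch
that dumps the unscaled, font-unit metrics of every glyph in a face
(error handling kept minimal):

    #include <stdio.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H

    /* Dump the unscaled (font-unit) metrics of every glyph in a
     * face; this is the raw data the mapping tool needs from each
     * original and each candidate substitute font. */
    static int dump_metrics(const char *path)
    {
        FT_Library  lib;
        FT_Face     face;
        FT_UInt     gid;

        if (FT_Init_FreeType(&lib))
            return -1;
        if (FT_New_Face(lib, path, 0, &face)) {
            FT_Done_FreeType(lib);
            return -1;
        }

        for (gid = 0; gid < (FT_UInt)face->num_glyphs; gid++) {
            if (FT_Load_Glyph(face, gid, FT_LOAD_NO_SCALE))
                continue;
            printf("%u %ld %ld %ld %d\n", gid,
                   face->glyph->metrics.width,
                   face->glyph->metrics.height,
                   face->glyph->metrics.horiAdvance,
                   face->units_per_EM);
        }

        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return 0;
    }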

When rendering the files -- or creating PDF, PostScript, et al. from
them -- you'll need an instance of each substitute font scaled
suitably for each glyph.  You will need to test to determine whether
it is better to have several sized faces so that each word can be set
with a per-word matrix, or to set each glyph with a per-glyph
font+matrix.
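For the per-glyph approach, FreeType lets you install a transform on
the face before loading each glyph; a sketch, assuming the face has
already been sized with FT_Set_Char_Size and (sx, sy) is the scale the
map computed for this particular glyph:

    /* Load one substituted glyph with a per-glyph scale.  Assumes
     * `face' is the substitute face, already sized, and (sx, sy)
     * is the scale the map computed for this glyph. */
    static void load_subst_glyph(FT_Face face, FT_UInt gid,
                                 double sx, double sy)
    {
        FT_Matrix  m;

        m.xx = (FT_Fixed)(sx * 0x10000L);  /* 16.16 fixed point */
        m.xy = 0;
        m.yx = 0;
        m.yy = (FT_Fixed)(sy * 0x10000L);

        FT_Set_Transform(face, &m, NULL);  /* no translation */
        if (FT_Load_Glyph(face, gid, FT_LOAD_RENDER) == 0) {
            /* face->glyph->bitmap now holds the scaled image */
        }
    }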

Creating that mapping requires all of the metrics from each original
face and each possible substitute face and, once you have that data,
some simple statistics.
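The statistics can be as simple as a least-squares distance over
metrics normalized to the EM; a hypothetical scoring function (the
struct and field names are invented for illustration), where a lower
score means a better match:

    /* Hypothetical score for one candidate substitution: the sum
     * of squared differences of a few metrics, each normalized by
     * its face's units-per-EM so fonts on different unit grids
     * compare fairly.  Pick the candidate with the minimum score. */
    typedef struct {
        long      width, height, advance;  /* font units  */
        unsigned  upem;                    /* units per EM */
    } Metrics;

    static double subst_score(const Metrics *o, const Metrics *s)
    {
        double dw = (double)o->width   / o->upem
                  - (double)s->width   / s->upem;
        double dh = (double)o->height  / o->upem
                  - (double)s->height  / s->upem;
        double da = (double)o->advance / o->upem
                  - (double)s->advance / s->upem;
        return dw*dw + dh*dh + da*da;
    }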

I suspect that optimizing that map will look familiar to those who
specialize in non-linear optimization/programming.

-JimC
-- 
James Cloos <[email protected]>         OpenPGP: 1024D/ED7DAEA6


