https://issues.apache.org/bugzilla/show_bug.cgi?id=52477

--- Comment #6 from Mehdi Houshmand <med1...@gmail.com> 2012-01-18 10:51:13 UTC ---
(In reply to comment #5)
> An alternative approach that will also make it easier for applications to
> extract or de-duplicate font resources when merging multiple PDFs is to allow
> FOP to fully embed the font resources in the PDF, rather than creating a
> subset. I believe this is possible today for a limited use-case, by specifying
> encoding-mode="single-byte" on the font element within the fop.xconf file. I
> say "limited" because that only works if no characters outside the ASCII range
> are required.
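(For reference, the single-byte embedding described above would be configured in fop.xconf along these lines. This is an illustrative sketch following FOP's documented font-configuration layout; the font path and triplet values are placeholders, not taken from this bug.)

```xml
<fop version="1.0">
  <renderers>
    <renderer mime="application/pdf">
      <fonts>
        <!-- encoding-mode="single-byte" forces full single-byte embedding
             instead of a CID subset; only usable if the document needs no
             characters outside the ASCII range. -->
        <font embed-url="/path/to/SomeFont.ttf" encoding-mode="single-byte">
          <font-triplet name="SomeFont" style="normal" weight="normal"/>
        </font>
      </fonts>
    </renderer>
  </renderers>
</fop>
```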

That wouldn't necessarily fix the issue here. Fully embedding a font means the
pseudo-unique prefix isn't used, but that isn't necessarily a good thing. A
parser like Ghostscript could, and apparently does, assume that if two fonts
have the same name (prefixed or not) then they are the same font. That is an
assumption I've made myself in the past, and it has proved manifestly naive.
Also, any implementation must at least avoid clashes within the same document.
With a prefix derived from the glyph subset, there is a scenario in which two
different fonts with the same glyph subsets produce the same prefix.
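To make the clash scenario concrete, here is a minimal sketch (this is not FOP's actual implementation; the class name, hash, and glyph indices are all hypothetical) of deriving a six-uppercase-letter subset tag from the set of glyph indices a document uses. Two different fonts that happen to use the same glyph indices get the same prefix:

```java
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch, NOT FOP's real code: derive a six-letter
// subset tag (as in "ABCDEF+SomeFont") from the glyph subset alone.
public class SubsetTag {
    static String tagFor(Set<Integer> glyphs) {
        long h = 1125899906842597L; // arbitrary seed for the toy hash
        for (int g : new TreeSet<>(glyphs)) { // sort so order is irrelevant
            h = 31 * h + g;
        }
        StringBuilder sb = new StringBuilder(6);
        for (int i = 0; i < 6; i++) {
            // map the hash onto six uppercase letters A-Z
            sb.append((char) ('A' + Math.floorMod(h, 26)));
            h /= 26;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two *different* fonts using the same glyph indices collide:
        Set<Integer> fontA = Set.of(36, 72, 105);
        Set<Integer> fontB = Set.of(36, 72, 105);
        System.out.println(tagFor(fontA).equals(tagFor(fontB))); // true
    }
}
```

The point of the sketch is only that any prefix derived purely from the glyph subset cannot distinguish two fonts whose subsets coincide; some per-document (or per-font) state is needed to guarantee in-document uniqueness.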

We have to be careful about what we're supporting here. There is no
standardised method of identifying a font, since anyone can give any font any
name. I don't agree that making the prefix "more unique" would help here (I'm
not sure uniqueness is a scale on which something can be measured; it's
binary, it either is or it isn't), because given enough time you will
inevitably get a clash. Then what?

The prefixes are six characters long (six uppercase letters followed by a
"+"); the people at Adobe gave no indication that they wanted them to be
unique in a global sense, only within a document.
