Jeremias Maerki wrote:

> Looks like you can't even be 100% sure that the base 14 fonts 
> are available.

Right.

> Let's look at it from another side. If someone writes some 
> kind of FO editor or a configuration tool for FOray/FOP, a 
> method that reports all available fonts will certainly be useful. :-)

OK. That makes sense. To avoid wasteful parsing, it means that at least
three new classes need to be exposed through interfaces (RegisteredFont,
RegisteredFontFamily, and RegisteredFontDesc), which may be a good thing
anyway.
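To make the idea concrete, here is a minimal sketch of how those three types and an enumeration method might look. Only the three interface names come from the discussion above; every method name, field, and the `FontServerSketch` class itself are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch -- method and field names are assumptions; only the
// interface names RegisteredFont, RegisteredFontFamily, and
// RegisteredFontDesc come from the discussion.
interface RegisteredFontDesc {
    String getFontName();           // e.g. "Helvetica-Oblique"
}

interface RegisteredFont extends RegisteredFontDesc {
    boolean isEmbeddable();
}

interface RegisteredFontFamily {
    String getFamilyName();         // e.g. "Helvetica"
    List<RegisteredFont> getFonts();
}

public class FontServerSketch {
    private final List<RegisteredFontFamily> families = new ArrayList<>();

    public void register(RegisteredFontFamily family) {
        families.add(family);
    }

    // The enumeration method an FO editor or configuration tool
    // would call to list all available fonts.
    public List<RegisteredFontFamily> getRegisteredFamilies() {
        return Collections.unmodifiableList(families);
    }
}
```

Exposing the registry through read-only interfaces like this lets a tool enumerate fonts without re-parsing any font files.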

> > Since I gather that FOP will not be supporting the reuse of a 
> > FontServer instance (i.e. each document will have its own 
> > instance of FontServer), perhaps it works fine to just have the 
> > user provide a separate font-configuration file that contains 
> > only the fonts needed for the document.
> 
> No, I think there will definitely come a point in time where 
> I will want some kind of object holding on to global font 
> configuration but which is not a static mechanism. Although 
> it's possible to reuse the FOUserAgent, the user agent IMO is 
> something that is bound to a rendering run. We simply haven't 
> finalized the API, yet, or I'm at least not ready to call it 
> finalized. :-)

Very good. It sounds like you and I may end up with API visions that match
better than I might have thought at one time.

> > Actually, you are no longer tied to WinAnsi. We have a lot more 
> > flexibility on encodings than before:
> > 1. All of the predefined encodings for both PostScript and PDF are 
> > available to either platform -- of course, if they are not 
> > predefined for the platform used, they must be written into the output.
> > 2. Both platforms have access to the font's internal encoding.
> > 3. The user can specify custom encodings through the 
> > font-configuration file.
> > 
> > So, if a PostScript document can use the font's internal 
> > encoding, and if the font is known to already be available to 
> > the interpreter, I think it could safely be used by name. But 
> > perhaps I have forgotten something.
> 
> No, that's true. I simply haven't looked yet into how glyphs 
> that are not accessible through the current encoding can be 
> accessed on the fly in PS. Rewriting the encoding seemed easier.

I am very sure that for Type 1 fonts, specifying another encoding is the
only way to get it done: an 8-bit character code allows at most 256
combinations, and there is no way to address more than 8 bits per code.
However, the good news is that I am 99% sure that in both PDF and
PostScript you can register the same underlying font with two (or more)
different encodings. Each pairing will actually show up as a separate font
"object" in the document and must, of course, be referred to that way as
well. I'll let you know how that turns out.
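The bookkeeping this implies can be sketched as a small registry that keys font objects on the (base font, encoding) pair, so that the same Type 1 program registered under two encodings yields two distinct document-level font names. The class and the "F1"/"F2" naming scheme are assumptions for illustration, not FOray/FOP code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: each (base font, encoding) pair becomes its own
// font "object" with its own document-level name, even when the
// underlying Type 1 program is the same.
public class FontObjectRegistry {
    private final Map<String, String> objects = new LinkedHashMap<>();
    private int counter = 0;

    // Returns the document-level name (e.g. "F1") for this pairing,
    // creating a new font object the first time the pair is seen.
    public String fontObjectFor(String baseFont, String encoding) {
        String key = baseFont + "/" + encoding;
        return objects.computeIfAbsent(key, k -> "F" + (++counter));
    }
}
```

Asking for `("Times-Roman", "WinAnsiEncoding")` twice returns the same name, while `("Times-Roman", "MacRomanEncoding")` gets a fresh one, which is exactly how the two encodings of one font would appear as two objects in the output.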

> > This may require a new font-configuration item for the font element 
> > that allows it to tell whether it is known to be available to the 
> > PostScript interpreter. There are some other possibilities here as well.
> 
> I bet. Sounds good.

The more I think about it, the more I believe that encapsulating the
characteristics of a specific PostScript interpreter is probably the
"right" way to go. The rendering run can then use that information to
decide whether a font needs to be embedded or not. I'll have to ponder
that for a while.
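As a rough illustration of that encapsulation, a profile object could record which fonts are known to be resident in the target interpreter, and the rendering run would consult it per font. The class and method names here are assumptions, not an existing API.

```java
import java.util.Set;

// Hypothetical sketch: one object per target PostScript interpreter,
// describing its known characteristics.
public class PSInterpreterProfile {
    private final Set<String> residentFonts;

    public PSInterpreterProfile(Set<String> residentFonts) {
        this.residentFonts = residentFonts;
    }

    // Fonts already resident in the interpreter can be referenced by
    // name; everything else must be embedded in the output.
    public boolean needsEmbedding(String fontName) {
        return !residentFonts.contains(fontName);
    }
}
```

A run targeting a printer with the base fonts resident would construct one profile; a run producing a portable file would use an empty profile and embed everything.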

Victor Mote
