Re: [Fonts] Problem of Xft2

2003-08-08 Thread Pablo Saratxaga
Kaixo!

On Fri, Aug 08, 2003 at 06:59:43PM +0900, Chisato Yamauchi wrote:

   But Gtk2 does not have a complete font-substitution mechanism.
 Therefore, Gtk2 is insufficient in a CJK environment.

Gtk2, using Pango, has a built-in fontset mechanism
(it is always enabled and automatically built, depending on the language
and on the language coverage of the available fonts).
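
Just to illustrate, here is a minimal GTK2/C sketch (my own toy example,
nothing that ships anywhere; the mixed-script label text is arbitrary, and
the build line in the comment assumes the gtk+-2.0 development files and
pkg-config):

/* Minimal GTK2 example: Pango builds the fontset automatically, so a label
 * mixing Latin, Cyrillic and CJK text renders without the application
 * naming any font.
 * Build (assumption): gcc demo.c `pkg-config --cflags --libs gtk+-2.0` */
#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    /* UTF-8 text mixing scripts; each run is drawn with whatever
     * installed font covers it. */
    GtkWidget *label = gtk_label_new("abc / кириллица / 日本語 / 한국어");
    gtk_container_add(GTK_CONTAINER(window), label);

    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}

Every character of the label ends up readable as long as *some* installed
font covers it, which is the behaviour I describe further down.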

 So I *NEVER* use Gtk2-mozilla.  It has no flexibility in
 font settings.

Mozilla doesn't use the Gtk2/Pango text rendering mechanisms to render
HTML pages, so you cannot judge the font abilities of the Gtk2 toolkit
by looking at Mozilla.

   The strengths and weaknesses of a toolkit become clear when using
 Xft2.  For me, Qt is the only choice when using Xft2. So I do

I feel exactly the opposite: as Qt doesn't have an automatic fontset
mechanism, I very often end up with characters displayed as empty white
squares, giving unreadable text.
Gtk may automatically choose a font that looks funny, but at least every
character is always displayed in a readable way; I prefer it that way.

That being said, it would be nice to have the ability to do user
configuration of glyph substitutions in Gtk2; e.g. telling it that when a
given font is chosen, characters in the range 0x00-0xff should be ignored
and taken from another font instead. The ASCII range of some CJK fonts is
simply too ugly... or even buggy in some cases.
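
As a rough illustration of the per-character fallback that the
fontconfig-based matching underneath Gtk2/Pango provides, here is a sketch
that prints which family in the sorted fallback list would actually supply
a given character. It is my own example, it assumes the fontconfig C API as
it exists today, and "Kochi Gothic" is only a placeholder family name to be
replaced by whatever CJK font you have installed:

/* Sketch: ask fontconfig for the sorted fallback list of a pattern and
 * report which family actually covers a given character.
 * Build (assumption): gcc sort.c `pkg-config --cflags --libs fontconfig` */
#include <stdio.h>
#include <fontconfig/fontconfig.h>

static void who_covers(FcFontSet *sorted, FcChar32 ucs4)
{
    int i;
    for (i = 0; i < sorted->nfont; i++) {
        FcCharSet *cs = NULL;
        FcChar8 *family = NULL;
        if (FcPatternGetCharSet(sorted->fonts[i], FC_CHARSET, 0, &cs) == FcResultMatch
            && FcCharSetHasChar(cs, ucs4)) {
            FcPatternGetString(sorted->fonts[i], FC_FAMILY, 0, &family);
            printf("U+%04X -> %s\n", ucs4, family ? (char *)family : "?");
            return;
        }
    }
    printf("U+%04X -> (no coverage)\n", ucs4);
}

int main(void)
{
    FcInit();
    FcPattern *pat = FcPatternCreate();
    FcPatternAddString(pat, FC_FAMILY, (const FcChar8 *)"Kochi Gothic");
    FcConfigSubstitute(NULL, pat, FcMatchPattern);
    FcDefaultSubstitute(pat);

    FcResult result;
    FcFontSet *sorted = FcFontSort(NULL, pat, FcTrue, NULL, &result);
    if (sorted) {
        who_covers(sorted, 0x0041);   /* 'A': may come from a Latin fallback */
        who_covers(sorted, 0x65E5);   /* a Han character: from the CJK font  */
        FcFontSetDestroy(sorted);
    }
    FcPatternDestroy(pat);
    FcFini();
    return 0;
}

What is missing is a way to *force* that for the 0x00-0xff range even when
the CJK font does cover it, which is the configuration ability I am asking
for.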


-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.walon.org/pablo/  PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]




Re: [Fonts] Re: mkfontscale and family names which contain '-'

2003-02-06 Thread Pablo Saratxaga
Kaixo!

On Thu, Feb 06, 2003 at 05:55:37PM +0100, Mike FABIAN wrote:

  In the ttmkfdir version we use, we handle such cases by using the
  PostScript name of the font instead when writing the fonts.dir file;
  I haven't had any problem so far (I've been doing that for several years).
 
 Sometimes the PostScript name can also have '-' characters:
 
 mfabian@magellan:~$ ftdump /usr/X11R6/lib/X11/fonts/truetype/kochi-mincho.ttf | grep postscript
postscript: Kochi-Mincho

Mmh, indeed.

Note however that the normal name has a space rather than a '-', so the
PostScript name isn't used by my version of ttmkfdir; the default name is
fine to use.

Also, it seems that the part after the '-' in PostScript font names is there
to specify a different style; Kochi-Mincho, Kochi-Gothic, MS-Mincho and
MS-Gothic are the only 4 exceptions among all the fonts I have.

Ok, I looked at the sources, and what our version of ttmkfdir does is
(a rough sketch in code follows the list):

- look at the English name of the font, for the 'Macintosh' platform;
- if that is not ok, look at the English name for the 'Windows' platform;
- if that is not ok, take the PostScript name;
- if still not ok, use "unknown" (this could be improved, but until now
  this case has never been reached).
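
In FreeType terms that order looks roughly like the sketch below. This is
my own illustration, not the actual ttmkfdir code: the English-language
check on the name records is omitted, and it does a naive byte copy where
real code would convert the UTF-16BE strings of the Windows platform:

/* Sketch of the name-picking order described above:
 * Macintosh family name, then Windows family name, then the PostScript
 * name, then "unknown".  Not the real ttmkfdir code. */
#include <stdio.h>
#include <string.h>
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_SFNT_NAMES_H
#include FT_TRUETYPE_IDS_H

static int get_family(FT_Face face, FT_UShort platform, char *buf, size_t len)
{
    FT_UInt i, n = FT_Get_Sfnt_Name_Count(face);
    for (i = 0; i < n; i++) {
        FT_SfntName name;
        if (FT_Get_Sfnt_Name(face, i, &name))
            continue;
        if (name.platform_id != platform || name.name_id != TT_NAME_ID_FONT_FAMILY)
            continue;
        /* naive copy; real code must convert the platform's encoding */
        size_t l = name.string_len < len - 1 ? name.string_len : len - 1;
        memcpy(buf, name.string, l);
        buf[l] = '\0';
        return 1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    FT_Library lib;
    FT_Face face;
    char family[256];

    if (argc < 2 || FT_Init_FreeType(&lib) || FT_New_Face(lib, argv[1], 0, &face))
        return 1;

    const char *ps = FT_Get_Postscript_Name(face);
    if (get_family(face, TT_PLATFORM_MACINTOSH, family, sizeof family))
        printf("family (Mac name): %s\n", family);
    else if (get_family(face, TT_PLATFORM_MICROSOFT, family, sizeof family))
        printf("family (Windows name): %s\n", family);
    else if (ps)
        printf("family (PostScript name): %s\n", ps);
    else
        printf("family: unknown\n");

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}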

IIRC the reason I changed the default algorithm is that it didn't use the
PostScript name when the font name was bad, but instead changed the
characters of the string to valid ASCII; in some cases the result was quite
unreadable, while using the PostScript name would have given a much better
name.

Well, all this is becoming obsolescent anyway (and rightly so! font
handling through Xft2 is so much easier).

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.walon.org/pablo/  PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts] Unsafe chars in Mkfontscale

2003-02-06 Thread Pablo Saratxaga
Kaixo!

On Thu, Feb 06, 2003 at 05:41:03PM +0100, Juliusz Chroboczek wrote:
 Mike,
 
 Would you be so kind as to test the attached patch and confirm that it
 does what you want?  It's rather urgent, I'd like it to go into 4.3.

I think using '_' instead of ' ' for the unsafe chars would be better.

Also, I gave [ ] ( ) \ only as examples; I don't know whether they are
actually problematic, nor whether there are other characters in the same
situation. (Also, what about ' and ` ?)
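
Just to make the suggestion concrete, a tiny hypothetical sketch of that
kind of sanitization (not the actual mkfontscale code; the exact set of
characters treated as unsafe here is a guess):

/* Hypothetical sketch: map characters that could confuse fonts.dir/XLFD
 * parsing to '_' instead of ' '. */
#include <stdio.h>
#include <ctype.h>

static void sanitize_family(char *s)
{
    for (; *s; s++) {
        unsigned char c = (unsigned char)*s;
        /* '-' is the XLFD field separator; the others are the characters
         * mentioned above as possibly unsafe. */
        if (c == '-' || c == '[' || c == ']' || c == '(' || c == ')' ||
            c == '\\' || c == '\'' || c == '`' || c == '"' || !isprint(c))
            *s = '_';
    }
}

int main(void)
{
    char name[] = "Some (Weird) Family-Name";
    sanitize_family(name);
    puts(name);    /* prints: Some _Weird_ Family_Name */
    return 0;
}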

 
 I'm cut off from CVS right now, sorry if it doesn't apply cleanly.
 
 Juliusz
 
 


-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.walon.org/pablo/  PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]FreeType bug report

2002-08-20 Thread Pablo Saratxaga

Kaixo!

On Tue, Aug 20, 2002 at 05:44:13PM +0400, Vadim Plessky wrote:

 |  You should also decide on an extension name other than .ttf, to avoid
 |  that those bitmap-only ttf files get confused with real scalable
 |  fonts by people out there, otherwise there would be a lot of bad
 |  consequences.
 
 It seems to me that the .ttf extension is o.k. for such fonts.

I disagree.
Or have you tested, with all the programs that use TTF fonts directly, and
also on other operating systems (Windows, MacOS, BeOS, ...) and in other
graphical environments (like Berlin), that those fonts will work and won't
break anything?

I'm afraid that the vast majority of programs and OSes currently using TTF
simply expect them to always have scalable glyphs; what will happen if one
such program tries to use a bitmap-only font for display at a size for
which there are no bitmaps embedded?
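
For what it is worth, a program that uses FreeType directly can at least
detect the situation; a small sketch of my own (just a check, not a
proposed fix):

/* Sketch: detect a bitmap-only TrueType font with FreeType and list the
 * only pixel sizes it can actually provide.  A program that blindly
 * assumes scalable outlines gets errors (or nothing) at any other size. */
#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

int main(int argc, char **argv)
{
    FT_Library lib;
    FT_Face face;
    int i;

    if (argc < 2 || FT_Init_FreeType(&lib) || FT_New_Face(lib, argv[1], 0, &face))
        return 1;

    if (!FT_IS_SCALABLE(face)) {
        printf("%s is bitmap-only; embedded strikes:", argv[1]);
        for (i = 0; i < face->num_fixed_sizes; i++)
            printf(" %dpx", face->available_sizes[i].height);
        printf("\n");
    } else {
        printf("%s has scalable outlines\n", argv[1]);
    }

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}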

 But indeed Qt3/KDE3 and GNOME2/GTK2 should be patched/tested against such 
 fonts.

There are a lot of utilities out there that use TTFs directly, from little
utilities creating images for web counters to programs doing 3D rendering
of text... and don't forget other non-X11 environments either. Very bad
press will result if fonts that cause problems are disseminated (and they
probably will be disseminated if people think they are just normal TTF
fonts).

So, using a different extension name will avoid a lot of trouble.
 
-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Can't input Lao

2002-07-22 Thread Pablo Saratxaga

Kaixo!

On Mon, Jul 22, 2002 at 06:41:42PM -0400, Anthony Souphavanh wrote:
 Pablo,
 
 I searched on the net and found this article in which this person mentioned
 you. I was wondering whether I need to set or use a UTF locale.

Yes, you need a locale using an encoding that includes Lao.
That means UTF-8.

 If this is true,
 why does it work for Thai and Hebrew, Greek, etc.?

It's the same for those.
However, they each have a second choice in addition to UTF-8: respectively
TIS-620, ISO-8859-8, ISO-8859-7, ...
But it doesn't make sense to add other non-UTF-8 encodings nowadays.

 When I issued the command 'locale charmap', it reported ISO-8859-1.

It won't work then (unless your program uses the locale-independent
functions to get keyboard input, but those are relatively new and I suppose
not many programs use them currently).

Change to a UTF-8 locale.
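
For completeness, a program can check this for itself with plain C/POSIX
calls; a minimal sketch, the programmatic equivalent of running
'locale charmap':

/* Sketch: check at runtime whether the current locale's charmap is UTF-8. */
#include <stdio.h>
#include <string.h>
#include <locale.h>
#include <langinfo.h>

int main(void)
{
    setlocale(LC_ALL, "");                  /* adopt the user's locale */
    const char *codeset = nl_langinfo(CODESET);
    printf("charmap: %s\n", codeset);
    if (strcmp(codeset, "UTF-8") != 0)
        printf("not a UTF-8 locale; Lao input/display will not work here\n");
    return 0;
}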
 
 By the way,  did you get a chance to test lo keysyms I sent?

It works perfectly for me (well, as far as I can tell, since I don't know
Lao); I tested it with yudit, using xkb input with the en_US.UTF-8 locale.

 create directory /usr/X11R6/lib/X11/locale/en_GB.UTF-8 copy XLC_LOCALE from
 /usr/X11R6/lib/X11/locale/en_US.UTF-8 into this directory.

If you are going to use the same file, there is no need to copy it under
another name.

 If you want to input non-ascii characters, you may need a compose map.

That is for composing Latin letters with accents and things like that; it
is not needed at all for Thai or Lao (except for things like (c), tm and
such signs, and the myriad of existing quotation marks, etc.).

 Unfortunately, compose maps provided with xfree86 are somewhat
 insufficient.
 
 See http://www.xfree86.org/pipermail/i18n/2001-August/002278.html
 
 Download the Compose.gz file (by Pablo Saratxaga), gunzip and place it in
 /usr/X11R6/lib/X11/locale/en_GB.UTF-8

That file (with my errors corrected) is now shipped as standard with
XFree86; so if your version of XFree86 is not too old you don't need to do
anything (and anyway, it is not needed for Lao; it is needed for French,
German, etc., which require composing to type their accents, but Lao just
encodes the diacritics separately).

 Edit file /usr/X11R6/lib/X11/locale/compose.dir and add there these lines:
 (is this really necessary?)
 
 en_GB.UTF-8/Compose:en_GB.UTF-8
 en_GB.UTF-8/Compose en_GB.UTF-8

No; XFree86 already comes with a good Compose file (in the en_US.UTF-8
directory) and has the en_GB.UTF-8 locale point to it (in fact all UTF-8
locales except the CJK ones).
 
 Also download us_intl.gz file, gunzip it and put it into
 /etc/X11/xkb/symbols (this will give you much better keyboard for writing
 various accented latin letters). Notice that version by Pablo has some
 minor typos in it, these are corrected in my file.

That file is only useful for typing Latin letters like ubreve, eogonek,
kcedilla, etc.; you don't need it for Lao.

You only need a UTF-8 locale actually installed in your system (it doesn't
matter which locale you choose as long as it uses UTF-8).
For Lao it should be lo_LA.UTF-8; it is known by XFree86, but probably
isn't known by your libc, so you may need to choose another.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Can't input Lao

2002-07-19 Thread Pablo Saratxaga

Kaixo!

On Thu, Jul 18, 2002 at 11:09:11PM -0400, Anthony Souphavanh wrote:
 
 Hi guys,
 
 I had no success trying to input Lao in any of the KDE applications. I:
 
 1. Created a Lao keyboard layout, put it in .../xkb/symbols and named
 it lo, for the Lao language

Is it accessible somewhere, so we can test it under other configurations?

 I also installed BDF fonts from Mark Kuhn's website which included lao.

Maybe if your KDE is configured to use antialiasing it will ignore BDF fonts?

You may look for the Code2000 TTF font; it has quite complete Unicode
coverage and, being in TTF format, it may be a good one to test.
It's shareware.

Also, when you say that you cannot type, what does that mean? Nothing
happens, as if you had hit an undefined key, or do blank boxes appear?

Thanks

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Font family name problem

2002-07-18 Thread Pablo Saratxaga

Kaixo!

On Wed, Jul 17, 2002 at 06:26:47PM -0700, Keith Packard wrote:
 
 Ok, I'm adding localized names to fontconfig.  To allow applications to 
 continue working with a single name, I'm actually listing the localized 
 names in the FC_FAMILY value using a new datatype (FcTypeLangString).  
 Ask for a string, and you'll get back just the family name, but ask for a 
 LangString and you'll get both.

What do you mean by "both"?
Users should see only one name for each font, but that name should be, if
it exists, the localized one.
Then, internally, programs should use a unique name (the ASCII-only one).

In other words, the localized names are used for display in the lists
shown to the user; when one of those is chosen, what is returned is not the
localized name but the ASCII-only one.
And inversely, given an ASCII-only name, it should be possible to retrieve
its associated localized name if needed.

The localized list should be a list of 2-tuples (ascii-name,
localized-name): one used for display, the other used as the return value
when it is chosen.
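
For reference, present-day fontconfig exposes this as parallel string lists
in the pattern (FC_FAMILY and FC_FAMILYLANG), which amounts to the (name,
language) pairs I describe; here is a minimal sketch assuming that API
(not the FcTypeLangString proposal being discussed):

/* Sketch (modern fontconfig API): each pattern carries parallel lists
 * FC_FAMILY[i] / FC_FAMILYLANG[i].  Prints them for the font matched for
 * "serif".
 * Build (assumption): gcc names.c `pkg-config --cflags --libs fontconfig` */
#include <stdio.h>
#include <fontconfig/fontconfig.h>

int main(void)
{
    FcInit();
    FcPattern *pat = FcNameParse((const FcChar8 *)"serif");
    FcConfigSubstitute(NULL, pat, FcMatchPattern);
    FcDefaultSubstitute(pat);

    FcResult result;
    FcPattern *font = FcFontMatch(NULL, pat, &result);
    if (font) {
        int i;
        FcChar8 *name, *lang;
        for (i = 0; FcPatternGetString(font, FC_FAMILY, i, &name) == FcResultMatch; i++) {
            if (FcPatternGetString(font, FC_FAMILYLANG, i, &lang) != FcResultMatch)
                lang = (FcChar8 *)"?";
            printf("family[%d] = %s  (lang: %s)\n", i, (char *)name, (char *)lang);
        }
        FcPatternDestroy(font);
    }
    FcPatternDestroy(pat);
    FcFini();
    return 0;
}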
 
 A couple of questions:
 
  2)   Should I use the current locale to select a family from those
   listed when an application requests a string for FC_FAMILY?

Yes.
(Well, it may be interesting to have the possibility of explicitly
specifying a given language; but in the general case the current locale
should be used; it should follow the same rules as the translations of
program interfaces and menus, IMHO.)

   That would mean returning different values depending on the
   locale.

Of course, that is the purpose.
And that is why a unique ASCII-only value should always be associated with
them, so that programs can use that unique, locale-independent value while
users see the locale-dependent one.

It should be seen and handled in a way similar to the localization of
program interfaces, with the difference that the translations are not
handled through gettext but embedded in the fonts, and should be handled by
a specific API.


-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Font family name problem

2002-07-18 Thread Pablo Saratxaga

Kaixo!

On Thu, Jul 18, 2002 at 10:12:05AM -0700, Keith Packard wrote:
 
 Around 13 o'clock on Jul 18, Pablo Saratxaga wrote:
 
  In other words, the localized names are used for display in the lists shown
  to the user; when one of those is chosen, what is returned is not the
  localized name but the ASCII-only one. And inversely, given an ASCII-only
  name, it should be possible to retrieve its associated localized name
  if needed.
 
 Why do you believe the internal interface should only use ASCII names?  

Not necessarily ASCII names, but unique and locale-independent ones, so
that if a document which embeds font names is opened under a different
locale nothing strange happens.

 Note that *all* of these names are locale independent; applications can
 use any of the names to access the font. The only question is what name
 should be returned when the application requests it; the mapping from the
 set of names to a name appropriate for the user is the only
 locale-dependent step.

Ok then, so it's ok.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Font family name problem

2002-07-13 Thread Pablo Saratxaga

Kaixo!

On Sat, Jul 13, 2002 at 10:40:51AM -0700, Keith Packard wrote:
 
 I'd be interested to hear whether other people will find this scheme 
 usable, and whether people would also like to see the other localized 
 names made available for presentation to the user.

Yes, both localized names and ASCII-only names are useful.
Maybe the same API could be used for both cases, through a parameter
telling which language is requested (if C, then the ASCII-only names are
given).

The availability of localized names will make users feel more comfortable,
as other systems display those names to them.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Re: [I18n]language tags in fontconfig

2002-07-07 Thread Pablo Saratxaga

Kaixo!

On Sat, Jul 06, 2002 at 03:33:40AM -0700, Keith Packard wrote:
 
 I don't know why all of the latin languages include  and ', it's 
 probably just a mistake; they're easily removed.

For the '' I agree; but the apostrophe may be very important for some
languages (e.g. French, English).

 The reason I haven't included the Euro is that this would disable the use
 of any Latin-1 fonts.

Also, monetary symbols could be taken from another font without too much
of a problem; and they are also quite irrelevant to language (you can very
well put an amount in euros in a Chinese text, and an amount in dollars in
an Italian text...).

 I'm also uncomfortable about dropping requirements for numerals;
 they are more like letters than punctuation.
 
 The question is whether you'd want to skip a font just because it didn't 
 support the Basic Latin digits.  Applications that I'm writing now (Pango, 
 Mozilla and Tcl/Tk) will failover to another font for missing glyphs.

I think for Latin-based languages the numerals should always be there (as
well as the basic ASCII set).
But for non-Latin languages, the whole ASCII set (including the numerals)
may be missing from the font; so, for those non-Latin languages, the check
for the presence of the numerals can be skipped.

 I will note that my current Arabic table is missing the Arabic numerals,
 that seems wrong to me.

In fact the practice of using Western Arabic digits, Eastern Arabic digits
or ASCII-style digits varies from country to country, and maybe even
depends on the context (e.g. Arabic-shaped digits inside running text, but
ASCII-style ones in a mostly numeric document such as a spreadsheet).
 
-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]





Re: [Fonts]Automatic 'lang' determination

2002-06-30 Thread Pablo Saratxaga

Kaixo!

On Sat, Jun 29, 2002 at 05:17:04PM -0700, Keith Packard wrote:
 
  What are those glyphs? (I'm quite surprised, I would have expected the
  opposite: fonts generally have more glyphs than the standard encodings of
  the iso-8859 family, for example)
 
 My definition of language tag is coloured by the OS/2 table codePageRange
 bits from which it was originally defined in fontconfig.  Those bits are
 defined to map to specific Windows code pages; the Latin-1 case doesn't
 map to ISO 8859-1, but rather to code page 1252, for which many fonts are
 missing a few random entries.

But what characters are those?
Is it possible that they are the ones that have been added to cp1252 and
that didn't exist some years ago?
I think the matching should be done against the lowest common denominator
and be strict; or different weights should be given to missing *letters*
versus other symbols (it may be more or less acceptable to get quotation
marks from another font; bUt lEttErs frOm A dIffErEnt fOnt Is vErY UglY).

  No, the tolerance for missing glyphs in CJK tests should be the same or
  even smaller. The difference is that it isn't needed to test all the glyphs
  for CJK coverage; testing only a set of 256 chosen glyphs would be enough
  (if they are correctly chosen, testing that 256 glyphs are present in a
  font is enough to ensure, with 99.99% confidence, that it covers a given
  CJK language).
 
 I'm not confident enough of this approach; I fear that any set of 256 
 glyphs that must appear in a simplified Chinese font may well appear in 
 many traditional Chinese (or even Japanese) fonts.

Most do, of course, but there are a lot that don't.
I have only dealt with ~10-15 TTF CJK fonts, but I never had false
positives using that method.

 out there that doesn't encode all the characters of gb2312?
 
 It seems that this must be the case -- I set the '500' number so high 
 because all of the fonts which I have that advertise support for 
 simplified Chinese are missing over 200 glyphs from GB2312.  I got
 similar results for Japanese fonts, Korean Wansung fonts and traditional 
 Chinese fonts.

But which characters are missing?
Could it be that they are semi-graphic ones, or scripts used by other
languages (e.g. Cyrillic, Greek, Japanese kana in a Chinese font, etc.)?
Here too, different weights should be used: it is not a big problem if a
CJK font is missing Cyrillic (a font designed for Russian will be a much
better choice to render Cyrillic anyway), but it may be a big problem if
some needed characters are missing.

And I'm really surprised by such a high number as 200.
Are you sure you tested against gb2312 and not against the Microsoft
codepage based on it (which surely adds several extra characters)?

 But to handle such case, I think it would be better to choose a given
 definition of big5 (or several of them) and stick to it, rather than
 allowing a so tremendously big hole as 500 possible missing chars.
 
 Missing 500 from a repertoire of nearly 2 doesn't seem to render most 
 of these fonts unusable.

It could; it depends on which glyphs are missing.


-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]



Re: [Fonts]Automatic 'lang' determination

2002-06-29 Thread Pablo Saratxaga

Kaixo!

On Sat, Jun 29, 2002 at 09:34:43AM -0700, Keith Packard wrote:
 
 This goal is reflected in the design I outlined -- fonts are deemed 
 suitable for a particular language when they cover a significant 
 fraction of the codepoints commonly associated with that language.

That is unacceptable.
A font is suited for a given language when it covers *ALL* of the codepoints
needed for that language.

The only exception to checking *all* of the needed codepoints is that of
the CJK languages, because:
- there is a very small set of such languages;
- the fonts are designed with coverage of one of them in mind;
- the mandatory glyphs needed for a given CJK language that don't
  overlap with any other CJK language form quite a big set, allowing
  us to test just a carefully chosen, small set of glyphs and to assume
  that all the other glyphs needed for that CJK language are present too.

Maybe scripts used for one and only one language can also be handled
without the need to check all the needed codepoints (but on the other hand
they always form a small number of codepoints, so checking them all is not
a problem).

But for the large majority of languages, which are not the only ones
written with a given script, just checking coverage of a significant
fraction is not enough.

Take Spanish, for example: it needs the a-z letters plus áéíóúüñ (that is,
aacute, eacute, iacute, oacute, uacute, udiaeresis and ntilde).
If even one of these is missing you cannot render a Spanish text correctly;
even if, out of the 66 chars (33 lowercase, 33 uppercase), the font covers
65 of them, it is still not suitable to properly render Spanish text (it
may go unnoticed if the text just happens not to use the missing letter,
but relying on chance is not very serious).

So, the tests for CJK languages and for other languages are clearly
different: only CJK languages can get by with testing only a significant
fraction; for all other languages all the chars must be tested (a minimal
sketch of such an exhaustive check follows below).
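
Here is a minimal sketch of such an exhaustive per-language check for the
Spanish example, using FreeType's character map (my own illustration; the
required list is just the one given above, a-z/A-Z plus the seven accented
letters in both cases):

/* Sketch: exhaustive coverage check for one language (the Spanish example
 * above).  A single missing codepoint is enough to reject the font for
 * that language. */
#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

static const FT_ULong spanish_extra[] = {
    0x00E1, 0x00E9, 0x00ED, 0x00F3, 0x00FA, 0x00FC, 0x00F1,   /* áéíóúüñ */
    0x00C1, 0x00C9, 0x00CD, 0x00D3, 0x00DA, 0x00DC, 0x00D1    /* ÁÉÍÓÚÜÑ */
};

static int covers_spanish(FT_Face face)
{
    FT_ULong c;
    size_t i;
    for (c = 'a'; c <= 'z'; c++)
        if (!FT_Get_Char_Index(face, c) || !FT_Get_Char_Index(face, c - 0x20))
            return 0;
    for (i = 0; i < sizeof spanish_extra / sizeof spanish_extra[0]; i++)
        if (!FT_Get_Char_Index(face, spanish_extra[i]))
            return 0;
    return 1;
}

int main(int argc, char **argv)
{
    FT_Library lib;
    FT_Face face;

    if (argc < 2 || FT_Init_FreeType(&lib) || FT_New_Face(lib, argv[1], 0, &face))
        return 1;
    printf("%s %s render Spanish completely\n", argv[1],
           covers_spanish(face) ? "can" : "can NOT");
    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}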
 
  Suppose there's a document tagged as zh_TW that explains how the PRC
  government simplified Chinese characters to boost the literacy rate after
  WW II. If a Big5 font (that doesn't cover all the characters in the doc)
  is selected instead of a GBK/GB18030 font (with full coverage), the
  simplified Han characters (not used in Taiwan but only in the PRC) in the
  doc have to be rendered with another font (most likely a
  GB2312/GBK/GB18030 font).
 
 A correct version of this document would tag individual sections of the
 document with appropriate tags.  This way, the zh_TW sections could be
 presented in a traditional Chinese font while the mainland portions are
 displayed with simplified Chinese glyphs.

Indeed.

I wonder, however, how place names are handled. Are there place names
written with hanzi that don't exist in simplified form?
If so, what would be the preferred way to write such a place name in a
simplified Chinese text?
The same question applies to people's names.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]



Re: [Fonts]Automatic 'lang' determination

2002-06-29 Thread Pablo Saratxaga

Kaixo!

On Sat, Jun 29, 2002 at 01:20:34PM -0700, Keith Packard wrote:
 
  A font is suited for a given language when it covers *ALL* of the codepoints
  needed for that language.
 
 Yes, that's obviously true, but the problem is that I don't have tables for
 each language indicating the required codepoints, all I have are tables
 listing Unicode values in encodings traditionally used for each language.
 These tables almost always include a few (1-5) glyphs which many fonts are
 missing.

What are those glyphs?
(I'm quite surprised, I would have expected the opposite: fonts generally
have more glyphs than the standard encodings of the iso-8859 family, for
example.)

 So, the tests for CJK languages and for other languages are clearly different:
 only CJK languages can get by with testing only a significant fraction;
 for all other languages all the chars must be tested.
 
 Yes, the tolerance value given for the Han languages is 500 codepoints 
 while the value for non-Han languages is two orders of magnitude smaller.

No, the tolerance for missing glyphs in CJK tests should be the same or
even smaller.
The difference is that it isn't needed to test all the glyphs for CJK
coverage; testing only a set of 256 chosen glyphs would be enough (if they
are correctly chosen, testing that 256 glyphs are present in a font is
enough to ensure, with 99.99% confidence, that it covers a given CJK
language).

That cannot be done for the 8-bit Latin/Cyrillic encodings because there
is too much overlap between them (in the case of iso-8859-1/iso-8859-15,
for example, the overlap is about 97%).
While there is also a lot of overlap between CJK encodings, there are large
ranges of non-overlapping chars: chars that appear only in the Japanese
encodings, or only in gb2312, or only in big5, etc. (by "only" I mean: not
in any other widely used legacy encoding, so explicitly excluding Unicode,
which of course includes them all). As those exclusive chars are numerous
enough, it is possible to test for the presence of some of them in a font
and determine language coverage from there (see the sketch below).
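
Here is a sketch of the sampling mechanism only (my own illustration; the
three codepoints below are mere placeholders, the real discriminating set
of ~256 chars has to be computed by diffing the JIS/GB2312/Big5/KS
coverages as described above):

/* Sketch: check what fraction of a small list of "exclusive" codepoints a
 * face covers; requiring (almost) all of them decides the language.
 * The sample below is a placeholder, not a real discriminating set. */
#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

static double sample_coverage(FT_Face face, const FT_ULong *sample, size_t n)
{
    size_t i, hits = 0;
    for (i = 0; i < n; i++)
        if (FT_Get_Char_Index(face, sample[i]))
            hits++;
    return (double)hits / (double)n;
}

int main(int argc, char **argv)
{
    /* placeholder sample; a real one would hold ~256 carefully chosen chars */
    static const FT_ULong sample[] = { 0x4E00, 0x4E8C, 0x4E09 };
    FT_Library lib;
    FT_Face face;

    if (argc < 2 || FT_Init_FreeType(&lib) || FT_New_Face(lib, argv[1], 0, &face))
        return 1;

    double cov = sample_coverage(face, sample, sizeof sample / sizeof sample[0]);
    printf("%s: %.0f%% of the sample present -> %s\n", argv[1], 100.0 * cov,
           cov >= 1.0 ? "treat as covering the language" : "reject");

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}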

Of course, complete checking can also be done, but I wonder if it is
actually useful (I mean, is there a font suitable for simplified Chinese
out there that doesn't encode all the characters of gb2312? It would be
like a font for English that is missing the letter r).
Big5 is a bit more problematic, as there is no such thing as a well-defined
Big5 encoding, but rather, in the pure Microsoftian tradition (big5 comes,
after all, from that side), a number of revisions all named the same, each
adding some characters; an older font can be missing some chars that a
newer one has (according to a newer definition of big5).

But to handle such cases, I think it would be better to choose a given
definition of big5 (or several of them) and stick to it, rather than
allowing such a tremendously big hole as 500 possible missing chars.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://chanae.stben.be/pablo/   PGP Key available, key ID: 0xD9B85466
[you can write me in Walloon, Spanish, French, English, Italian or Portuguese]



[Fonts]default encoding for postscript fonts

2002-02-20 Thread Pablo Saratxaga

Kaixo!

When a TTF font is requested through an unknown encoding (e.g. *-iso8859-0),
the default seems to be the same as microsoft-win3.1, which makes sense, as
almost all non-Unicode TTF fonts have a sensible table for it.

Now, for PostScript fonts, wouldn't it be better to have the default be the
same as -adobe-fontspecific instead of -iso8859-1?

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://www.srtxg.easynet.be/    PGP Key available, key ID: 0x8F0E4975




Re: [Fonts]default encoding for postscript fonts

2002-02-20 Thread Pablo Saratxaga

Kaixo!

On Wed, Feb 20, 2002 at 02:26:18PM +, Juliusz Chroboczek wrote:
 PS When a TTF font is requested through an unknown encoding (eg:
 PS *-iso8859-0 ) the default seems to be the same as
 PS microsoft-win3.1,
 
 No, the default is ``iso8859-1''.

I stand corrected.

 PS Now, for postscript fonts, wouldn't it be better to have the default
 PS as being the same as -adobe-fontspecific instead of -iso8859-1 ? 
 
 Currently, all the fontenc-using scalable backends (type1, speedo,
 freetype) fall back to ``iso8859-1''.  I have no a priori objection to
 your suggestion; can you convince me that it is useful enough to break
 the uniform behaviour of all the backends?

I think defaulting to microsoft-win3.1 for TTF and adobe-fontspecific for
PostScript, respectively, would allow transparent use of any exotic
encoding.

Of course, if it takes too much effort, just forget about it; but if it is
simple to do, I think those defaults are better, because they *are* the
defaults used by a lot of fonts out there.


-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://www.srtxg.easynet.be/    PGP Key available, key ID: 0x8F0E4975




Re: [Fonts]FIRSTINDEX token found in large endoding files

2002-02-06 Thread Pablo Saratxaga

Kaixo!

On Wed, Feb 06, 2002 at 02:08:32AM -0500, Mike A. Harris wrote:
 When running ttmkfdir (the C++ version) with Korean and many 
 other fonts, it chokes horribly with the following errors:
 
 pts/25 root@devel:/usr/share/fonts/ko/TrueType# ttmkfdir -o fonts.scale
 unexpected token FIRSTINDEX in file /usr/X11R6/lib/X11/fonts/encodings/large/ksc5601.1992-3.enc.gz, line 8

 After a fair bit of investigation, I downloaded the Debian
 ttmkfdir and the Mandrake one as well.  Those ttmkfdirs don't
 produce correct results either, but at least they don't SEGV on
 Korean fonts.

The one used at Mandrake doesn't use the *.enc files; it has its own
compiled-in tables and tests for the presence of given values in the TTF
file itself. It's just the old version of ttmkfdir with some more tables.
For large fonts, it just tests some ~200 values, hopefully characteristic
ones for the different CJK encodings.
We also added a -u command-line switch to force the output of *-iso10646-1
lines for all fonts.

What result did you expect that wasn't correct?
 
 I'm wondering what the intention of adding FIRSTINDEX to the
 encoding files is, as it prevents ttmkfdir from working

IIRC it is to make the XLFD fonts built from the TTF look the same as the
bitmap ones, that is, starting at the right place.
Otherwise they start at index 0x and have white glyphs (not missing or
empty, but white, like a white space) until the first real value (something
like 0x2020 IIRC).
That may be annoying in some cases, viewing the fonts with xfd being one of
them.

 My current short term solution in mind is to remove the 
 FIRSTINDEX entries from the encoding files so that things work 
 for now as they always have, and in the mean time see if there 
 are patches floating around for ttmkfdir to handle this for the 
 longer term.

If FIRSTINDEX can safely be ignored like that, it should be easier to just
make ttmkfdir skip the token (a rough sketch of that idea follows).
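
Here is a rough sketch of that "skip what you don't know" idea for a
line-oriented .enc-style file (purely illustrative, not a ttmkfdir patch;
only a few of the real keywords are listed):

/* Sketch: warn and continue on unknown keywords such as FIRSTINDEX
 * instead of aborting. */
#include <stdio.h>
#include <string.h>

static void parse_enc(FILE *f)
{
    char line[512], keyword[64];
    int lineno = 0;

    while (fgets(line, sizeof line, f)) {
        lineno++;
        if (sscanf(line, "%63s", keyword) != 1 || keyword[0] == '#')
            continue;                        /* blank line or comment */
        if (!strcmp(keyword, "STARTENCODING") ||
            !strcmp(keyword, "SIZE") ||
            !strcmp(keyword, "ENDENCODING")) {
            /* ... handle the keywords we care about ... */
        } else {
            fprintf(stderr, "line %d: ignoring unknown token %s\n",
                    lineno, keyword);        /* e.g. FIRSTINDEX */
        }
    }
}

int main(int argc, char **argv)
{
    FILE *f = argc > 1 ? fopen(argv[1], "r") : NULL;
    if (!f)
        return 1;
    parse_enc(f);
    fclose(f);
    return 0;
}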

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://www.srtxg.easynet.be/    PGP Key available, key ID: 0x8F0E4975




Re: [Fonts]encodings.dir files obsolete?

2002-02-01 Thread Pablo Saratxaga

Kaixo!

On Fri, Feb 01, 2002 at 04:43:09AM -0500, Mike A. Harris wrote:

 I'd like to clarify an assumption with encodings.dir files.  If I 
 understand correctly, the encodings.dir files are no longer 
 required, having been replaced now by the encodings and 
 encodings/large subdirectories.
 
 I read something earlier to the effect that encodings.dir is not
 needed, but is still supported if the files happen to be there.
 
 Is there any useful purpose to me shipping these files?  

I don't know about your case.
But the ability to have a local encodings.dir that overrides the default
one is useful for handling some broken TTF fonts that wrongly claim to be
cp1251 in their Unicode table, while they are in fact in another,
completely different encoding.

On the other hand, if you ship only fonts with correct Unicode tables there
is no need for encodings.dir.

-- 
Ki ça vos våye bén,
Pablo Saratxaga

http://www.srtxg.easynet.be/    PGP Key available, key ID: 0x8F0E4975
