Re: Experiments with classical Greek keyboard input

2006-02-01 Thread Joe Schaffner
Hi Simos,

How you doing?

Hello to everybody else, Ed Trager, you still there?

I'm sure I'm forgetting some of you. Sorry.

It's me, Elvis, the guy who had problems with xkb last year. You were
all a great help. I could never have fixed the problems myself. My
system still works, better than ever.

Sorry, I have to add to this thread after all. The message I sent
yesterday did not end up in my inbox, so I can't even respond to
myself.

Here's what I said:
-
attached below
-

This would be my reply:

Funny thing happened to me today.

When I opened Mozilla I saw it was using the SuSE Free Serif again! I
don't know how that happened; it had been using the Microsoft TNR. When
I go into Edit/Preferences I get a bewildering array of font selections,
and no positive feedback, so I couldn't get the TNR back.

Not to worry: it's still there, I can select it in Open Office.

But I decided to try out the character map program again, this time
with the SuSE font, and yes, it too supports polytonic Greek.

In fact, the Courier monospace also does polygreek.

It's the fonts that don't that are unusual.

That reminds me why I couldn't use the SuSE free serif: it's too fat.
I mean it takes up too much space, so I couldn't fit my dictionary
pages on the PDF I created in Open Office.

Do you know why Adobe calls it the Portable Document Format?

It's because a PDF can embed a subset of each font used to create the
document. The mini-font travels around the world with the document, so
your recipient sees exactly what you send, even if he doesn't have the
special font installed.

Speaking of exotic fonts, I like your Greek fonts, they look great.
But I don't understand why you call them Greek fonts, or why some
people call them Unicode fonts, for that matter.

Any font is a Greek font if it renders the Greek characters, and any
font should be usable with any character set, as long as the font's
internal glyph table maps the character set to the glyphs.

So, do you think you can combine the mono and polytonic Greek
alphabets into a single character keymap?

Joe

PS

Here's what I said:

Hello.

I've been experimenting with polygreek too, but I hesitate to add to
your already established thread...

I took the Times New Roman TTF from a Windows XP system and installed
it on my SuSE 9.2 at home. To my surprise, this font supports
polygreek, so I tried typesetting a couple of entries from a popular
dictionary of modern Greek:

http://modern-greek-verbs.tripod.com/home.html#unicode

With this font, I can capture the entire entry, no problems, pointing
fingers, arrows, boxes, tiny-elvises, polygreek etymology... There is
virtually nothing I cannot do with the Unicode character set alone.

I'm using the character map program to capture the data. I know the
Times font is working, because if I select another font, like the SuSE
free fonts, or even the Microsoft Arial, which I also ripped off, the
polygreek characters are not rendered.

I was wondering, since the font worked so unexpectedly well, maybe the
monogreek keymap would too.

But how would I know?

I gather from your correspondence that no polygreek keymap is
currently available, but I'm hoping the monogreek map might already do
something reasonable with polygreek.

True, the monogreek tonos is not the same as the polygreek accents,
but it should be possible to combine the two alphabets in a single
keymap, just as they're part of the same font.

This would spare me the agony of changing keymaps using the
what-ever-you-call-it, the xkb accelerator key. (Going from Greek to
English is already a pain in the ass.)

Would it be possible to extend the monogreek keymap to do polygreek?

You'd have one less module to distribute, and one less thing to install.
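For what it's worth, a combined layout might look something like the
fragment below in xkb's symbols format. This is only a sketch under
guessed names: the variant name, the key code, and the level layout are
illustrative, not taken from any real gr variant shipped with xkb.

```
// Hypothetical sketch: extend the plain Greek layout with polytonic
// dead keys on the higher shift levels of the tonos key. The variant
// name and key code are invented for illustration.
partial alphanumeric_keys
xkb_symbols "polytonic_extras" {
    include "gr(basic)"
    // Levels 1-2 keep the monotonic tonos/dialytika dead keys;
    // levels 3-4 add the smooth and rough breathings.
    key <AC10> { [ dead_acute, dead_diaeresis,
                   dead_psili, dead_dasia ] };
};
```

The idea is that the monotonic behavior stays on the unshifted levels,
so nobody typing plain modern Greek would notice the extra levels.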

Getting back to the font:

The Linux Mozilla displays this document properly on my system at
home, but when I go to an MS system at the University and use Internet
Explorer, the polygreek and some, but not all, of the special
characters are rendered as little boxes.

The Firefox on the XP system is a little better: all the glyphs
display, but not very nicely, at least not as nicely as the Linux
Mozilla, which is perfect. There seems to be some kind of glyph
substitution going on.

I assume the font contains a table which maps each Unicode code point
(decoded from the UTF-8 byte stream) to a glyph index inside the font.
This table must be created when the font is designed, so I can't get
at it, but I was wondering why the same font, Microsoft Times New
Roman, would behave differently in different application programs,
even if they are running on different platforms.
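The mapping I'm guessing at can be sketched in a few lines. The table
below is invented, not real Times New Roman data; glyph 0 plays the
role of the ".notdef" empty box.

```python
# Toy model of the lookup path: UTF-8 bytes -> Unicode code points ->
# glyph indices, via a made-up fragment of a font's character-to-glyph
# table (the "cmap" in TrueType/OpenType terms).
TOY_CMAP = {
    0x03B1: 812,   # GREEK SMALL LETTER ALPHA (glyph ids invented)
    0x1F00: 1201,  # GREEK SMALL LETTER ALPHA WITH PSILI (polytonic)
}
NOTDEF = 0  # glyph 0 is conventionally ".notdef", the empty box

def glyphs_for(utf8_bytes: bytes) -> list[int]:
    """Decode the byte stream and look each code point up in the table."""
    return [TOY_CMAP.get(ord(ch), NOTDEF)
            for ch in utf8_bytes.decode("utf-8")]

print(glyphs_for("αἀx".encode("utf-8")))  # prints [812, 1201, 0]
```

A real renderer layers fallback fonts and glyph substitution on top of
this bare lookup, which is presumably where the per-application and
per-platform differences creep in.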

Any guesses?

Thanks.

Joe

PS

I was very happy with the Font installation program which is part of
the KDE desktop. You just open the font directory with Konqueror and
click the Install button. Congratulations to whoever did it.

(Only I could not figure out how to install the fonts on Gnome. It's
probably just a matter of copying 

Re: question on Linux UTF8 support

2006-02-01 Thread Danilo Segan
Yesterday at 15:42, 問答無用 wrote:

 You can prevent just by only having UTF-8 locales on the machine.

GNU systems allow users to install their own locales wherever they
wish (even in $HOME) by setting environment variable LOCPATH (and
I18NPATH for locale source files themselves).

Basically, you want to ask of all your users to use UTF-8 as
filesystem encoding.
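Concretely, a private locale under $HOME might be set up like this.
This is a sketch, not a tested recipe: it assumes glibc's localedef and
the el_GR locale source are available, and the directory name is my own
choice.

```shell
# Compile a UTF-8 locale into a private directory (path is arbitrary).
mkdir -p "$HOME/.locale"
localedef -f UTF-8 -i el_GR "$HOME/.locale/el_GR.UTF-8"

# Tell glibc to look there first, then select the locale.
export LOCPATH="$HOME/.locale"
export LC_ALL=el_GR.UTF-8
```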

Cheers,
Danilo

--
Linux-UTF8:   i18n of Linux on all levels
Archive:  http://mail.nl.linux.org/linux-utf8/



Re: question on Linux UTF8 support

2006-02-01 Thread dsplat
I don't think that's a problem for a fresh install.  Are there any tools
for converting existing file systems from one encoding to another? 
That's a non-trivial problem.  Assuming that all of the characters in
the source encoding map to distinct characters in the target encoding
(let's assume for the moment that we're talking about ISO 8859-1 to
UTF-8), then all of the file names can be converted.  But here's the
list of things that must happen:

1) All of the file names must be converted from the source encoding to
the target encoding.

2) Any symbolic links must be converted such that they point to the
renamed file or directory.

3) Files that contain file or directory names will have to be converted.
 A couple of very obvious examples are /etc/passwd (for home
directories) and /etc/fstab (for mount points).

It's step 3 that's going to be the problem.  While you can make a more
or less complete list of system files that would have to be converted,
each case would have to be considered for whether it was safe to convert
the entire file or only the file names.  There is no way of identifying
all of the scripts that might require conversion.  And I don't want to
think about going through each user's .bashrc, .profile and .emacs
looking for all of the other files they load or run.

- Original Message -
From: Danilo Segan [EMAIL PROTECTED]
Date: Wednesday, February 1, 2006 1:58 pm
Subject: Re: question on Linux UTF8 support

 Basically, you want to ask of all your users to use UTF-8 as
 filesystem encoding.

