> CP850, but code 233 in ISO-8859-1. Lynx has code to convert this for
> viewed documents in the chrtrans code, but doesn't convert the strings
> brought in by the gettext/libintl code. That conversion is done via
> libiconv. The original character set is read from the .po files when
> they are converted by msgfmt, and subsequently used by libiconv when
If this is the crux of the issue, then I won't volunteer anything other than to say: my opinion is that it is the job of the person making the *.mo to make any and all necessary character set conversions. If it isn't, or can't be, done at that level, then the person installing Lynx will have to deal with it.

Since DOS is not a multi-user OS, it seems preferable to do the character set conversion at the *.po level, according to individual needs, and then compile the *.mo file. Done that way, libiconv is unnecessary.

I strongly believe that it is not the job of the translator to have to worry about any of the technical issues beyond making a workable translation. This goes back to my post of a few days ago, and in essence echoes what Leonid proposes: write up a "howto" and maybe include a sed/awk conversion script (a sketch of such a script is in the P.S. below). I see no need to bother the developers of lynx or iconv over this issue.

(My stand-alone iconv v1.8 accepts lower case on the command line, and afaik the charset declaration of the document is ignored anyway when iconv is used with the "-f" and "-t" switches. Also, iconv is not the only tool out there; there may be better conversion tools than iconv for a particular language. I know there are for Japanese, with which I am familiar.)

People making *.mo files for a particular platform in a particular character set might offer these to the public, if they wish, by posting to lynx-dev the information on how to obtain them.

__Henry
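P.S. As a starting point for such a howto, here is a minimal sketch of the conversion workflow described above. The file names, and the assumption that the source catalog is in ISO-8859-1, are examples only; GNU iconv and msgfmt are assumed.

    # Convert the message text from ISO-8859-1 to CP850 (DOS), then
    # update the charset declaration in the .po header to match:
    iconv -f ISO-8859-1 -t CP850 de.po | \
        sed 's/charset=ISO-8859-1/charset=CP850/' > de.cp850.po

    # Compile the converted catalog; the resulting *.mo then needs
    # no run-time conversion by libiconv:
    msgfmt -o de.mo de.cp850.po

The sed step matters because msgfmt reads the declared charset from the .po header and records it in the compiled catalog, so the declaration has to agree with the converted text.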
