There were also papers on the subject at past Unicode conferences.
Look for one by Martin Duerst several years ago and one by Kat Momoi of
Netscape only a few years back. I think both are on the web.

Also look at the Netscape open-source code; I believe it does some encoding detection.

However, accuracy can be greatly improved if you or the end user can supply
some information about the likely nature of the data: its language, the
platform it came from, the most likely encodings, the file format, or
content hints such as the field of expertise.
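For example, here is a rough Python sketch of that idea. The hint table and
candidate lists are made up for illustration, and a real detector would score
the decoded text statistically rather than stop at the first clean decode:

from typing import Optional

# Hypothetical table mapping what we know about the data to the
# encodings worth trying, ordered from most to least specific.
# (ISO 8859-1 decodes any byte sequence without error, so it must come last.)
HINTS = {
    ("french", "windows"): ["windows-1252", "iso-8859-15", "utf-8", "iso-8859-1"],
    ("russian", "unix"):   ["koi8-r", "iso-8859-5", "utf-8"],
}

def guess_encoding(data: bytes, language: str, platform: str) -> Optional[str]:
    """Try only the encodings plausible for this language/platform and
    return the first one that decodes the data without error."""
    for enc in HINTS.get((language, platform), ["utf-8", "iso-8859-1"]):
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

# e.g. guess_encoding(open("memo.txt", "rb").read(), "french", "windows")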

tex



"D. Starner" wrote:
> 
> > Given any sizeable chunk of text, it ought to be possible to estimate
> > the statistical likelihood of its being in a certain
> > encoding/[language] even if it's in an unspecified 8859-* encoding.
> > It would be quite an interesting exercise, but I'd be surprised if
> > someone hasn't done it before.  Perhaps someone here knows.
> 
> http://www.let.rug.nl/~vannoord/TextCat/ has a paper on the subject
> and an implementation in Perl. http://mnogosearch.org has an alternate
> implementation in compiled code (called mguesser).
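For what it's worth, the core of that technique is small. Below is a rough
Python sketch of the rank-order ("out-of-place") profile comparison from the
Cavnar-Trenkle paper that TextCat implements; working on raw byte n-grams
lets the same trick separate encodings as well as languages. The n-gram
length, profile size, and reference labels here are illustrative, and the
reference profiles would have to be built from sample texts you trust:

from collections import Counter

def ngram_profile(data: bytes, n: int = 3, top: int = 300) -> list:
    """Rank the most frequent byte n-grams, most common first."""
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return [gram for gram, _ in counts.most_common(top)]

def out_of_place(doc: list, ref: list) -> int:
    """Sum of rank differences between two profiles; n-grams missing
    from the reference get the maximum penalty."""
    rank = {gram: i for i, gram in enumerate(ref)}
    worst = len(ref)
    return sum(abs(i - rank[gram]) if gram in rank else worst
               for i, gram in enumerate(doc))

def classify(data: bytes, references: dict) -> str:
    """Pick the label whose reference profile is closest to the data."""
    doc = ngram_profile(data)
    return min(references, key=lambda label: out_of_place(doc, references[label]))

# e.g. references = {"french/iso-8859-1": ngram_profile(fr_sample),
#                    "russian/koi8-r":    ngram_profile(ru_sample)}
#      classify(unknown_bytes, references)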

-- 
-------------------------------------------------------------
Tex Texin   cell: +1 781 789 1898   mailto:[EMAIL PROTECTED]
Xen Master                          http://www.i18nGuy.com
XenCraft                            http://www.XenCraft.com
Making e-Business Work Around the World
-------------------------------------------------------------

