Hi:

   We have a multilingual index and we need to match accented
characters against their unaccented equivalents. For example, if a
document contains "mângão", the query "mangao" should match it.

    I guess I would have to build some sort of analyzer/tokenizer for this.

    I was wondering if there are tokenizers or filters already built
into Lucene for this.
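
    In case it helps clarify what I'm after, here is a rough sketch of
the kind of analyzer I have in mind. This is only a guess on my part,
assuming a recent Lucene where ASCIIFoldingFilter is available (package
locations vary across versions, and older releases shipped
ISOLatin1AccentFilter instead):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    // Hypothetical analyzer that strips diacritics so accented and
    // unaccented forms index to the same terms.
    public class DiacriticFoldingAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new StandardTokenizer();
            TokenStream result = new LowerCaseFilter(source);
            // ASCIIFoldingFilter maps Unicode characters such as â and ã
            // to their closest ASCII equivalents, so "mângão" is indexed
            // as "mangao".
            result = new ASCIIFoldingFilter(result);
            return new TokenStreamComponents(source, result);
        }
    }

    If I understand correctly, the same analyzer would also have to be
applied at query time, so that both the document term and the query
fold down to "mangao".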


Thanks

-John
