Have you used Luke to see what is actually in the index? Or written
some test cases for your analyzer to verify that the expected tokens
are actually coming out?
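For reference, a minimal sketch of such a test: feed the analyzer a string and print every token it emits. This assumes a pre-4.0 Lucene API (matching the tokenStream(String, Reader) signature quoted below); StandardAnalyzer and the "contents" field name are stand-ins for your own analyzer and field.

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class AnalyzerCheck {
    public static void main(String[] args) throws Exception {
        // Substitute your custom analyzer here
        Analyzer analyzer = new StandardAnalyzer();
        TokenStream ts = analyzer.tokenStream("contents",
                new StringReader("Indexing Arabic and English text"));
        TermAttribute term = (TermAttribute) ts.addAttribute(TermAttribute.class);
        // Print each token the analyzer actually produces
        while (ts.incrementToken()) {
            System.out.println(term.term());
        }
        ts.close();
    }
}
```

If a token you search for never shows up in this output, it was never indexed, which is usually the fastest way to explain "no hits" problems.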
Also, could you give more details about the filters you are using? I
am not familiar with ExactTokensConstructorFilter.
Dear all,
We are using a Unified Analyzer as our Lucene analyzer so that we can
index and search both Arabic and English documents.
Here is the code:
public TokenStream tokenStream(String FieldName, Reader reader)
{
switch(analysisMode) {