Hi;

I fixed the problems. StopFilter was not working as expected because of
letter casing. I've changed the flags of WordDelimiterFilter, and I've
also changed the TokenStream declaration to TokenFilter.
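For the archives, the corrected createComponents presumably looks roughly
like the sketch below. The exact flag set kept is an assumption
(GENERATE_WORD_PARTS only, so originals are no longer preserved), as is the
reordering that runs TurkishLowerCaseFilter before StopFilter so that tokens
are lowercased before being compared against the stop set:

```java
// Sketch only -- assumes the stop set is lowercase, so lowercasing must
// happen before stop filtering, and assumes PRESERVE_ORIGINAL-like output
// was avoided by using GENERATE_WORD_PARTS alone.
@Override
protected TokenStreamComponents createComponents(String fieldName,
        Reader reader) {
    Tokenizer source = new WhitespaceTokenizer(version, reader);
    int flags = WordDelimiterFilter.GENERATE_WORD_PARTS;
    TokenFilter filter = new WordDelimiterFilter(source,
            WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE, flags, null);
    // Lowercase first so tokens match the (lowercase) stop set.
    filter = new TurkishLowerCaseFilter(filter);
    filter = new StopFilter(version, filter,
            StopFilter.makeStopSet(version, stopWordList));
    return new TokenStreamComponents(source, filter);
}
```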

Thanks;
Furkan KAMACI


2014-02-26 20:05 GMT+02:00 Furkan KAMACI <furkankam...@gmail.com>:

> Hi;
>
> I have implemented that custom Analyzer:
>
> public class DisambiguatorAnalyzer extends Analyzer {
>
>    Version version = Version.LUCENE_46;
>    List<String> stopWordList;
>
>    public DisambiguatorAnalyzer(List<String> stopWordList) throws
> IOException {
>       super();
>       this.stopWordList = stopWordList;
>    }
>
>    @Override
>    protected TokenStreamComponents createComponents(String fieldName,
> Reader reader) {
>       Tokenizer source = new WhitespaceTokenizer(version, reader);
>       int flags = GENERATE_WORD_PARTS | CATENATE_WORDS;
>       TokenStream filter = new WordDelimiterFilter(source,
> WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE, flags, null);
>       filter = new StopFilter(version, filter,
> StopFilter.makeStopSet(version, stopWordList));
>       filter = new TurkishLowerCaseFilter(filter);
>       return new TokenStreamComponents(source, filter);
>    }
> }
>
> However, it preserves the originals and does not remove stopwords. What
> might be wrong?
>
> Thanks;
> Furkan KAMACI
>
