I figured out the problem.

I wrote a custom NGramFilter that uses the token's length as the default
maxGramSize, and some documents are full of nonsense data like
'xakldjfklajsdfklajdslkf'. When a token is that long, the filter produces
far too many grams and crashes the IndexWriter.
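To illustrate why this blows up (a hypothetical sketch, not my actual filter): with minGram=1 and maxGramSize set to the token length L, an n-gram filter emits L*(L+1)/2 grams, so gram count grows quadratically with token length.

```java
public class NGramBlowup {
    // Count the grams an n-gram filter with minGram=1 and
    // maxGram = token.length() would emit for a single token.
    static long gramCount(String token) {
        long count = 0;
        int len = token.length();
        for (int size = 1; size <= len; size++) {
            count += len - size + 1; // number of grams of this size
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(gramCount("abc"));                     // 6
        System.out.println(gramCount("xakldjfklajsdfklajdslkf")); // 276
    }
}
```

A 1000-character junk token would already yield 500,500 grams, which explains the pressure on the IndexWriter.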



--
Sent from: http://lucene.472066.n3.nabble.com/Lucene-Java-Users-f532864.html
