I have two questions.

First, is there a tokenizer that takes every word and simply makes a token
out of it?  That is, one that splits on whitespace: it takes the characters
between two whitespace characters and makes a token out of them?
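
Something like the sketch below is what I have in mind.  I'm assuming
Lucene's WhitespaceAnalyzer is the closest candidate, and I'm using the
older TokenStream.next() style to match the Field.Index constants I
mention below; the class name and sample text are just for illustration.

    import java.io.StringReader;

    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;

    public class WhitespaceTokenDemo {
        public static void main(String[] args) throws Exception {
            // Break a sample string into tokens purely on whitespace boundaries.
            TokenStream stream = new WhitespaceAnalyzer()
                    .tokenStream("body", new StringReader("the quick  brown fox"));

            // Pre-3.0 iteration: next() returns one Token per whitespace-delimited word.
            for (Token token = stream.next(); token != null; token = stream.next()) {
                System.out.println(token.termText());  // the, quick, brown, fox
            }
        }
    }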

If this tokenizer exists, is there a difference between doing that and
simply adding the field to the document with Field.Index.UN_TOKENIZED?
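
To make the second question concrete, here is roughly how I picture the
two alternatives (the field names and values are made up for illustration;
the Document would then be passed to IndexWriter.addDocument() as usual):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class FieldIndexingDemo {
        public static void main(String[] args) {
            Document doc = new Document();

            // Analyzed: the analyzer given to the IndexWriter splits the value
            // into individual terms ("quick", "brown", "fox").
            doc.add(new Field("analyzedTitle", "quick brown fox",
                              Field.Store.YES, Field.Index.TOKENIZED));

            // UN_TOKENIZED: the whole value is indexed as the single literal
            // term "quick brown fox", with no analysis at all.
            doc.add(new Field("rawTitle", "quick brown fox",
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
        }
    }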

--JP
