NGramTokenizer shouldn't trim whitespace
----------------------------------------

                 Key: LUCENE-2947
                 URL: https://issues.apache.org/jira/browse/LUCENE-2947
             Project: Lucene - Java
          Issue Type: Bug
          Components: contrib/analyzers
    Affects Versions: 3.0.3
            Reporter: David Byrne
            Priority: Minor


Before I tokenize my strings, I pad them with whitespace:

String foobar = " " + foo + " " + bar + " ";

When constructing term vectors from n-grams, this strategy has a couple of 
benefits.  First, it places special emphasis on the start and end of a word.  
Second, it improves the similarity between phrases with swapped words: 
" foo bar " matches " bar foo " more closely than "foo bar" matches "bar foo".

The problem is that Lucene's NGramTokenizer trims whitespace.  This forces me 
to do some preprocessing on my strings before I can tokenize them:

foobar = foobar.replace(' ', '$'); // '$' is an arbitrary char not in my data

This behavior is undocumented, so users won't realize their strings are being 
trim()'ed unless they look through the source or examine the tokens manually.
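
For anyone who wants to see the behavior directly, here is a minimal sketch, 
assuming the 3.0.x contrib API (NGramTokenizer(Reader, minGram, maxGram) and 
TermAttribute); the class name is hypothetical, and it just prints the emitted 
grams so the trim()'ing is visible:

import java.io.StringReader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class ShowTrimming {
    public static void main(String[] args) throws Exception {
        // Padded input: the leading/trailing spaces are meant to be part of the grams.
        NGramTokenizer tokenizer = new NGramTokenizer(new StringReader(" foo "), 3, 3);
        TermAttribute term = tokenizer.addAttribute(TermAttribute.class);
        while (tokenizer.incrementToken()) {
            // With the padding respected you would expect " fo", "foo", and "oo ";
            // with the current trim() only "foo" comes out.
            System.out.println("[" + term.term() + "]");
        }
        tokenizer.close();
    }
}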

I am proposing that NGramTokenizer be changed to respect whitespace.  Is 
there a compelling reason against this?


-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
