Hi ,
Yes. The Analyzer was the culprit eating away some of the letters
in the search string. StandardAnalyzer has 'a' and 's' as stop words
(amongst others).
Since I want to search on these (specifically, I want to search on
words like a/s , e/p , 15% , 15' etc.), I commented the
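For reference, the effect of that stop-word list can be sketched in plain Java (no Lucene here; `removeStopWords` is a made-up helper, just illustrating what a stop filter does to single-letter tokens). With a stop set containing "a" and "s" those tokens vanish; with an empty set they survive:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class StopFilterSketch {
    // Drop every token that appears in the stop set (hypothetical helper,
    // mimicking what an analyzer's stop filter does).
    static List<String> removeStopWords(List<String> tokens, Set<String> stopWords) {
        List<String> kept = new ArrayList<>();
        for (String t : tokens) {
            if (!stopWords.contains(t)) {
                kept.add(t);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> tokens = Arrays.asList("test", "s");
        // With "s" in the stop set, only "test" survives.
        System.out.println(removeStopWords(tokens, Set.of("a", "s")));
        // With an empty stop set, both tokens survive.
        System.out.println(removeStopWords(tokens, Set.of()));
    }
}
```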
Hi Jason ,
Yes, the doc'n does mention escaping, but that's only for special
characters used in queries, right?
I've tried 'escaping' too, though.
To answer your question, I'm sure it is not the HTTP request that is eating it up.
Query query = MultiFieldQueryParser.parse("test/s",
Without looking at the source, my guess is that StandardAnalyzer (and
StandardTokenizer) is the culprit. The StandardAnalyzer grammar (in
StandardTokenizer.jj) is probably defined so that x/y parses into two
tokens, x and y. 's' is a default stop word (see
StopAnalyzer.ENGLISH_STOP_WORDS), so it gets
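The behaviour described can be reproduced with a toy version of that two-step pipeline in plain Java (this is not Lucene's actual grammar, just an illustration: split on the '/', lowercase, then drop stop words; the stop set below is an assumption modelled on the default English list):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class AnalyzerSketch {
    // Illustrative stop set; "a" and "s" are the ones biting here.
    static final Set<String> STOP = Set.of("a", "an", "and", "are", "as", "at", "s", "the");

    // Toy analyzer: split on any non-alphanumeric char (so "test/s" becomes
    // "test" and "s"), lowercase, then drop stop words -- roughly the effect
    // StandardAnalyzer plus a stop filter has on this input.
    static List<String> analyze(String text) {
        List<String> out = new ArrayList<>();
        for (String tok : text.toLowerCase().split("[^a-z0-9]+")) {
            if (!tok.isEmpty() && !STOP.contains(tok)) {
                out.add(tok);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(analyze("test/s")); // "s" is split off and then stopped
        System.out.println(analyze("test/p")); // "p" is not a stop word, so it stays
    }
}
```

This matches what Robin sees: test/s collapses to just test, while test/p comes back as test p.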
Hi ,
Is there a way to search for words that contain / or % ?
If my query is test/s , it is just taken as test.
If my query is test/p , it is just taken as test p.
Has anyone done this / faced such an issue?
Regards
Robin
Lucene doco mentions escaping, but doesn't include the / char...
--
Lucene supports escaping special characters that are part of the query
syntax. The current list of special characters is
+ - || ! ( ) { } [ ] ^ ~ * ? : \
To escape these characters, use the \ before the character. For example
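That escaping step might be sketched like this in plain Java (`QueryEscaper` is a made-up helper, not a Lucene API; the character set is taken from the list quoted above, and note it does not include '/', which matches the observation earlier in the thread):

```java
public class QueryEscaper {
    // Special characters from the list above ("||" contributes a single '|').
    static final String SPECIALS = "+-|!(){}[]^~*?:\\";

    // Prefix each special character with a backslash, as the doco describes.
    static String escape(String query) {
        StringBuilder sb = new StringBuilder();
        for (char c : query.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("test+s")); // test\+s
        System.out.println(escape("test/s")); // unchanged: '/' is not in the list
    }
}
```

So even a correctly escaped query would leave the '/' untouched; it is the analyzer, not the query syntax, that has to be changed for that case.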