The omitNorms="true" attribute worked perfectly, thanks Yonik!
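For anyone finding this thread later, a minimal sketch of what that looks like
in schema.xml (the field and type names here are illustrative, not taken from
this thread):

    <!-- omitNorms="true" disables length normalization for the field, -->
    <!-- so field length no longer influences relevance scoring.       -->
    <fieldType name="text_nonorm" class="solr.TextField" omitNorms="true">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

    <field name="title" type="text_nonorm" indexed="true" stored="true"/>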

Also, the stopword issue isn't happening anymore, go figure.  I
probably had a typo or something as well.  Thanks for the help!

-Reece


On Feb 18, 2008 7:17 PM, Reece <[EMAIL PROTECTED]> wrote:
> For #1, I just tested again and found the problem.
> WordDelimiterFilterFactory was splitting the words up because it had
> capitals in the middle of the word, so a lower case version was seen
> as a different set of tokens.
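> To illustrate what was happening: with splitOnCaseChange="1" (the
> default), WordDelimiterFilterFactory splits a token like "PowerShot"
> into "Power" + "Shot", so an all-lowercase version of the same word
> tokenizes differently unless the same analysis runs at query time.
> A rough sketch of the filter (option values shown are Solr defaults,
> not copied from my actual schema):
>
>     <filter class="solr.WordDelimiterFilterFactory"
>             splitOnCaseChange="1"
>             generateWordParts="1"
>             generateNumberParts="1"
>             catenateWords="0"/>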
>
> For #2, I'll try using that attribute for the fieldType and let you
> know how it goes, but that looks like exactly what I needed.
>
> For #3, I'll test it again tomorrow and make sure I didn't have a
> typo or something.
>
> Thanks for the help!
>
> -Reece
>
>
>
>
> On Feb 18, 2008 5:11 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> >
> > On Feb 18, 2008 5:05 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> > > On Feb 18, 2008 4:42 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> > > > Hmmm, looks like a recent change in lucene probably causes this bug.
> > >
> > > Nope... I just checked Solr 1.2, and it shows the same behavior.
> > > With the example data, a query of
> > >   optimized for high
> > > finds the solr document, but
> > >   "optimized for high"
> > > does not.
> >
> > Scratch that... both Solr 1.2 and trunk seem to work fine for me.
> > My test was flawed because I was searching for "optimized for high"
> > while the solr document had a misspelling: Optimizied
> >
> > -Yonik
> >
>
