Yeah, you're getting away with it because your data set is small. As
your data grows, the underlying machinery has to enumerate every
term in the field to find the terms that match, so it can get
_very_ expensive with large data sets.
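
For a rough picture, here's what that query looks like at the Lucene
level. This is just a minimal sketch; the field name "text" is an
assumption for illustration, not anything from your schema:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.WildcardQuery;

    public class LeadingWildcardSketch {
        public static void main(String[] args) {
            // Field name "text" is hypothetical. Because "*foo*" has no
            // fixed prefix, the term index can't narrow the scan: the
            // rewrite has to test terms across the whole field before
            // any scoring happens.
            Query q = new WildcardQuery(new Term("text", "*foo*"));
            System.out.println(q);  // prints something like: text:*foo*
        }
    }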

Best to bite the bullet and set up n-gram indexing early or, better
yet, see if you really need to support this use case.
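
If you do go the n-gram route (the NGramFilterFactory you mention
below), the payoff at query time is that a substring lookup becomes a
plain term lookup instead of a term scan. Again just a sketch: the
field name "text_ngram" is hypothetical, and it assumes the gram sizes
cover the length of the substring you search for:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class NgramLookupSketch {
        public static void main(String[] args) {
            // Field name "text_ngram" is hypothetical. With n-grams in
            // the index, "foo" is itself a term in the dictionary, so
            // the query is a direct lookup, not a scan.
            Query q = new TermQuery(new Term("text_ngram", "foo"));
            System.out.println(q);  // prints: text_ngram:foo
        }
    }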

Best,
Erick


On Fri, Sep 6, 2013 at 2:58 AM, Alvaro Cabrerizo <topor...@gmail.com> wrote:

> Hi:
>
> I would start looking:
>
> http://docs.lucidworks.com/display/solr/The+Standard+Query+Parser
>
> And the
> org.apache.lucene.queryparser.flexible.standard.StandardQueryParser.java
>
> Hope it helps.
>
> On Thu, Sep 5, 2013 at 11:30 PM, Scott Schneider
> <scott_schnei...@symantec.com> wrote:
>
> > Hello,
> >
> > I'm trying to find out how Solr runs a query for "*foo*".  Google tells
> > me that you need to use NGramFilterFactory for that kind of substring
> > search, but I find that even with very simple fieldTypes, it just works.
> > (Perhaps because I'm testing on very small data sets, Solr is willing to
> > look through all the keywords.)  e.g. This works on the tutorial.
> >
> > Can someone tell me exactly how this works and/or point me to the Lucene
> > code that implements this?
> >
> > Thanks,
> > Scott
> >
> >
>
