SpanNearQuery makes sense, thanks for the link.
On Tue, 2011-09-13 at 14:40 +0100, Ian Lea wrote:
> Use a query with multiple clauses including a boosted PhraseQuery, or
> SpanNearQuery. I think the latter is the most flexible - see
> http://www.lucidimagination.com/blog/2009/07/18/the-spanquery/
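For anyone following along, a minimal sketch of the SpanNearQuery approach, using the "federal reserve" example from further down the digest (field name and helper method are only illustrative; 3.x span API):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.spans.SpanNearQuery;
    import org.apache.lucene.search.spans.SpanQuery;
    import org.apache.lucene.search.spans.SpanTermQuery;

    // Spans for "federal" and "reserve" with no intervening terms, in order,
    // so only adjacent occurrences match this clause.
    static SpanNearQuery adjacentPair(String field) {
        SpanQuery[] clauses = new SpanQuery[] {
            new SpanTermQuery(new Term(field, "federal")),
            new SpanTermQuery(new Term(field, "reserve"))
        };
        return new SpanNearQuery(clauses, 0, true); // slop = 0, inOrder = true
    }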
I meant I was considering
instead of
Antonio ;)
On 13 September 2011 18:06, antonio roa wrote:
> Thanks a lot Chris.
>
> I changed the order of my filters; I was considering:
>
> instead of
>
> Now it is running perfectly ;)
>
> Regards,
> Antonio.
Thanks a lot Chris.
I changed the order of my filters; I was considering:
instead of
Now it is running perfectly ;)
Regards,
Antonio.
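The actual filter chains were stripped when this thread was archived, so the following is only an illustrative sketch. The documented behaviour of RemoveDuplicatesTokenFilter is that it drops a token only when its term text and position match the previous token, which is why it needs to run after whatever filter produces the stacked duplicates. Package names are the 3.x analyzers-module ones and may differ by release; the middle filter is hypothetical.

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;
    import org.apache.lucene.analysis.miscellaneous.RemoveDuplicatesTokenFilter;
    import org.apache.lucene.util.Version;

    // Hypothetical chain: filters that can emit identical tokens at the same
    // position (synonyms, stemming variants, ...) run first; the duplicate
    // remover goes last so the duplicates actually exist when it sees the stream.
    class DedupLastAnalyzer extends Analyzer {
        @Override
        public TokenStream tokenStream(String fieldName, Reader reader) {
            TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_31, reader);
            // ts = new YourSynonymOrStemmingFilter(ts);  // hypothetical middle filters
            return new RemoveDuplicatesTokenFilter(ts);
        }
    }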
On 13 September 2011 17:48, Chris Hostetter wrote:
: I am running an application using RemoveDuplicatesTokenFilter with
: solr-core-3.1, and after using the analysis interface this filter does
: nothing at all. I have debugged the source code of this filter and it
: seems it is not detecting duplicate tokens.
Please note carefully the documentation..
Hi all,
I am running an application using RemoveDuplicatesTokenFilter with
solr-core-3.1, and after using the analysis interface this filter does
nothing at all. I have debugged the source code of this filter and it
seems it is not detecting duplicate tokens.
Do you know by any chance if there is/was
Use a query with multiple clauses including a boosted PhraseQuery, or
SpanNearQuery. I think the latter is the most flexible - see
http://www.lucidimagination.com/blog/2009/07/18/the-spanquery/ for
good info.
http://lucene.apache.org/java/3_3_0/queryparsersyntax.html tells you
how to use boosting
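A minimal sketch of the first suggestion in Java: the individual terms are kept as SHOULD clauses for recall, and the phrase is added as a boosted clause (field name, example terms and boost value are only illustrative; 3.x API):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.PhraseQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    // Documents containing the exact phrase also match the heavily boosted
    // phrase clause; documents with only the separate terms still match.
    static Query phraseBoosted(String field) {
        PhraseQuery phrase = new PhraseQuery();
        phrase.add(new Term(field, "federal"));
        phrase.add(new Term(field, "reserve"));
        phrase.setBoost(10.0f); // illustrative boost value

        BooleanQuery q = new BooleanQuery();
        q.add(phrase, Occur.SHOULD);
        q.add(new TermQuery(new Term(field, "federal")), Occur.SHOULD);
        q.add(new TermQuery(new Term(field, "reserve")), Occur.SHOULD);
        return q;
    }

The query-parser equivalent, per the boosting syntax page linked above, is along the lines of: "federal reserve"^10 federal reserve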
Hi Folks,
What is the simplest method of constructing a multi-term query such that
the highest-scoring documents are always those that contain all the query
terms adjacent to each other?
i.e. if I search for "federal reserve" I would prefer documents that
contain "Ben Bernake is the chairman
Excellent!
OK I opened https://issues.apache.org/jira/browse/LUCENE-3432
Mike McCandless
http://blog.mikemccandless.com
On Tue, Sep 13, 2011 at 8:03 AM, wrote:
> OK. that worked.
> thanks,
> vincent
OK. that worked.
thanks,
vincent
Michael McCandless wrote on 13.09.2011 12:44 (To: java-user@lucene.apache.org, Subject: Re: optimize with num segments > 1 index keeps growing):
OK thanks for the infoStream output -- it was very helpful!
It looks like you have a single large segment that has deletions... it
could be it's over the max merge size. Can you try setting
tmp.setMaxMergedSegmentMB to something very large and see if the
expunge then runs?
I think TMP shouldn't
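For readers hitting the same thing, a minimal sketch of the setting being suggested (the value, the analyzer, and the Version constant are only illustrative; TieredMergePolicy is the 3.2+ API):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.TieredMergePolicy;
    import org.apache.lucene.util.Version;

    // Make the maximum merged segment size effectively unlimited so the large
    // segment with deletions becomes eligible for merging again.
    static IndexWriterConfig configWithLargeMaxSegment() {
        TieredMergePolicy tmp = new TieredMergePolicy();
        tmp.setMaxMergedSegmentMB(1024 * 1024); // illustrative "very large" value (1 TB)

        IndexWriterConfig iwc = new IndexWriterConfig(
            Version.LUCENE_33, new StandardAnalyzer(Version.LUCENE_33));
        iwc.setMergePolicy(tmp); // pass this config when opening the IndexWriter
        return iwc;
    }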
Hi
I am using QueryTermExtractor.getTerms to find the terms of a given
query in Lucene 3.0.3. Its documentation says: "Utility class used to
extract the terms used in a query, plus any weights. This class will not
find terms for MultiTermQuery, RangeQuery and PrefixQuery classes so the
c
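In case it helps, the usual workaround is sketched below: rewrite the query against the reader before extracting terms, so prefix/range/wildcard queries are expanded to concrete terms first (the method name is illustrative; 3.x highlighter API):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.highlight.QueryTermExtractor;
    import org.apache.lucene.search.highlight.WeightedTerm;

    // Multi-term queries only expand to concrete terms after rewrite, so
    // extract terms from the rewritten query rather than the original one.
    static WeightedTerm[] extractTerms(Query query, IndexReader reader) throws IOException {
        Query rewritten = query.rewrite(reader);
        return QueryTermExtractor.getTerms(rewritten);
    }

Depending on the rewrite method a MultiTermQuery uses, the rewritten form may still be a constant-score query with no extractable terms; if so, switching it to MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE before rewriting is the usual suggestion.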
Could you possibly explain a bit more what you mean? Perhaps a code snippet?
My own thoughts were to use a custom analyzer and in it apply a filter which
strips out apostrophes from the tokens. This works very well and my search
returns all the valid matches, but the highlighter returns no best fragments.
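A minimal sketch of the kind of apostrophe-stripping filter being described, assuming the 3.x attribute API (the class name is made up); token offsets are left untouched, which matters when the highlighter maps terms back onto the original text:

    import java.io.IOException;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    // Removes apostrophes from each token's text while leaving start/end
    // offsets as produced by the tokenizer.
    final class StripApostropheFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

        StripApostropheFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!input.incrementToken()) {
                return false;
            }
            String stripped = termAtt.toString().replace("'", "");
            termAtt.setEmpty().append(stripped);
            return true;
        }
    }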