Hi Abhishek,
Actually softUpdate is about doing an update where the deletion is
performed via a soft delete rather than a hard delete.
To perform doc-value updates, you need to use the updateNumericDocValue or
updateBinaryDocValue APIs.
Note that it doesn't actually update in-place, it needs to
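For reference, a minimal sketch of a numeric doc-value update against a recent Lucene (9.x); the field and id names here are illustrative, not from the thread:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.ByteBuffersDirectory;

public class DocValueUpdateDemo {
    public static long demo() throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("id", "1", Field.Store.NO));
            doc.add(new NumericDocValuesField("price", 10L));
            writer.addDocument(doc);
            // Resets the doc value for every document matching the term,
            // without re-indexing the documents themselves.
            writer.updateNumericDocValue(new Term("id", "1"), "price", 42L);
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            NumericDocValues dv = reader.leaves().get(0).reader().getNumericDocValues("price");
            dv.nextDoc();
            return dv.longValue();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("price = " + demo());
    }
}
```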
Thanks Adrien!
On Fri, Apr 12, 2024 at 9:49 AM Adrien Grand wrote:
> You are correct, query rewriting is not affected by the use of search vs.
> searchAfter.
>
> On Fri, Apr 12, 2024 at 3:37 PM Puneeth Bikkumanla
> wrote:
>
> > Hello,
> > Sorry I should have clarified what I meant by “optimized
You are correct, query rewriting is not affected by the use of search vs.
searchAfter.
On Fri, Apr 12, 2024 at 3:37 PM Puneeth Bikkumanla
wrote:
> Hello,
> Sorry I should have clarified what I meant by “optimized”. I am familiar
> with the collector/comparators using the “after” doc to filter ou
Hello,
Sorry I should have clarified what I meant by “optimized”. I am familiar
with the collector/comparators using the “after” doc to filter out
documents but I specifically was talking about the query rewriting phase.
Is the query rewritten differently in search vs searchAfter? Looking at the
co
Hello Puneeth,
When you pass an `after` doc, Lucene will filter out documents that compare
better than this `after` document if it can. See e.g. what LongComparator
does with its `topValue`, which is the value of the `after` doc.
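A hedged sketch of how that plays out with the paging API, assuming a sort on a numeric field (the field names are invented for illustration):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;

public class SearchAfterDemo {
    public static long firstValueOfSecondPage() throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (long i = 0; i < 10; i++) {
                Document doc = new Document();
                doc.add(new NumericDocValuesField("ts", i));   // sort field
                doc.add(new StoredField("tsStored", i));       // for display
                w.addDocument(doc);
            }
        }
        try (DirectoryReader r = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(r);
            Sort sort = new Sort(new SortField("ts", SortField.Type.LONG));
            TopDocs page1 = searcher.search(new MatchAllDocsQuery(), 5, sort);
            // The last hit's sort values become the comparator's topValue,
            // so docs that compare better than it can be skipped.
            ScoreDoc after = page1.scoreDocs[page1.scoreDocs.length - 1];
            TopDocs page2 = searcher.searchAfter(after, new MatchAllDocsQuery(), 5, sort);
            Document first = searcher.doc(page2.scoreDocs[0].doc);
            return first.getField("tsStored").numericValue().longValue();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstValueOfSecondPage());
    }
}
```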
On Thu, Apr 11, 2024 at 4:34 PM Puneeth Bikkumanla
wrote:
> Hello
Hello,
I think I tracked it further down to LUCENE-8589 or SOLR-12243. When I leave
Solr's edismax pf parameter empty, everything runs fast. When all fields are
configured for pf, the node dies.
I am now unsure whether I am on the right list, or if I should move to Solr's.
Please let me know
Are you specifying a sort clause on your query?
I'm not totally sure, but I think having a sort clause might be a
requirement for efficient deep paging.
I know Solr's cursorMark feature uses the searchAfter API, and a
cursorMark is essentially the sort values of the last document from
the previou
I have encountered the same problem. I wonder if anyone knows the solution?
Regards,
Jacky
--
Sent from: http://lucene.472066.n3.nabble.com/Lucene-Java-Users-f532864.html
Hi Lucene Team,
Can you please reply to my query? It's an urgent issue and we need to resolve
it at the earliest.
The Lucene version used is 6.3.0, but we even tried with the latest version, 7.3.0.
Regards
Manish Gupta
Thanks a lot, Sidhant Aggarwal, for the quick response!
On Sun, Feb 18, 2018 at 3:22 PM, Sidhant Aggarwal wrote:
> Hi Aakanksha,
>
> You will need to use a boolean query to do this. In the boolean query
> first, add a clause for the distance attribute using MUST clause and then
> add another
Hi Aakanksha,
You will need to use a boolean query to do this. In the boolean query, first
add a clause for the distance attribute as a MUST clause, and then add the
timestamp query as another MUST clause.
Use this:
https://lucene.apache.org/core/6_1_0/core/org/apache/lucene/search/BooleanQuery.html
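With the current builder API, the suggestion above might look like the following sketch (field names and bounds are made up):

```java
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class MustClausesDemo {
    // Both restrictions must hold for a document to match.
    public static BooleanQuery build() {
        Query distance = LongPoint.newRangeQuery("distanceMeters", 0L, 5_000L);
        Query timestamp = LongPoint.newRangeQuery("timestamp", 1_500_000_000L, 1_600_000_000L);
        return new BooleanQuery.Builder()
                .add(distance, BooleanClause.Occur.MUST)
                .add(timestamp, BooleanClause.Occur.MUST)
                .build();
    }

    public static void main(String[] args) {
        System.out.println(build().clauses().size());
    }
}
```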
Thanks Mikhail!
I'll look there.
Happy new year )
Regards
Vadim Gindin
On Dec 31, 2017, 2:21 AM, "Mikhail Khludnev"
wrote:
> Literally it's done in Solr (excuse moi) via
> q=field1:(foo bar baz)^=3 field2:(foo bar baz)^=4 field3:(foo bar baz)^=5
> but it's absolutely wrong way to ap
Literally it's done in Solr (excuse moi) via
q=field1:(foo bar baz)^=3 field2:(foo bar baz)^=4 field3:(foo bar baz)^=5
but it's absolutely the wrong way to approach the problem; you can find dismax
and the white elephant problem in Relevant Search by Mr Turnbull
On Tue, Dec 26, 2017 at 10:01 PM, Vadi
Mike,
I need the following. I want to create a query using the following
information: query string "blah blah blah" and constant scores map:
"field1" -> 3.0
"field2" -> 4.0
"field3" -> 5.0
// field1, field2, field3 - fields in the index.
The created query should search "blah blah blah" in each
Got it. I misunderstood the question (actually I'm still not convinced I
fully understand what you're looking for). It might be good to give an
example in case others on the mailing list are confused.
*Mike*
On Thu, Dec 14, 2017 at 8:54 AM, Vadim Gindin wrote:
> Mike,
>
> I don't need full do
Mike,
I don't need a full doc match. I need a multi-field match, and later I need to
know which fields matched for a document, to be able to calculate other
multi-field-oriented metrics.
Regards,
Vadim Gindin
On Thu, Dec 14, 2017 at 8:46 PM, Mike Dinescu (DNQ)
wrote:
> Apologies if I complet
Apologies if I completely misunderstood, but if you are looking to do a full
doc match, you could duplicate the doc into another field that is a true
full-text index of the document, and search on that.
Wouldn't that be exactly what you want?
On Thu, Dec 14, 2017 at 6:53 AM Vadim Gindin
Thanks Mikhail
Could you describe your sentences in more detail?
Vadim
On Thu, Dec 14, 2017 at 7:08 PM, Mikhail Khludnev wrote:
> Hello, Vadim.
>
> Please find inline.
>
> On Thu, Dec 14, 2017 at 11:43 AM, Vadim Gindin
> wrote:
>
> > Hi all.
> >
> > As I can understand. All Queries (or most o
Hello, Vadim.
Please find inline.
On Thu, Dec 14, 2017 at 11:43 AM, Vadim Gindin wrote:
> Hi all.
>
> As I can understand. All Queries (or most of them?) are single-field
> oriented. They may implement different search/score logic, but they are
> intended for a single field. For example, simple
Not part of Lucene, but take a look at LUCENE-5205 [1], which I actively
maintain on github [2].
And, you can integrate via maven [3]
See the jira issue for an overview of the query syntax, and let me know if you
have any questions.
[1] https://issues.apache.org/jira/browse/LUCENE-5205
[2] h
Hello,
You can check ComplexPhrase and Surround query parsers.
On Mon, Dec 5, 2016 at 8:12 AM, Yonghui Zhao wrote:
> It seems lucene query parser doesn't support SpanNearQuery.
> Is there any query parser supports SpanNearQuery?
>
--
Sincerely yours
Mikhail Khludnev
I am overriding getFieldQuery(String field, String fieldText,boolean
quoted). And in case of phrase query,
getFieldQuery(String field, String queryText, int slop) will be called.
And prefix query will not be my use case. So, we can ignore prefix query.
Assume this is my only case. Sequence of
This is likely tricky to do correctly.
E.g., MultiFieldQueryParser.getFieldQuery is invoked on whole chunks
of text. If you search for:
apple orange
I suspect it won't do what you want, since the whole string "apple
orange" is passed to getFieldQuery.
How do you want to handle e.g. a phrase
Thank you Dawid :)
--
Paweł Róg
On Thu, Nov 10, 2016 at 1:30 PM, Dawid Weiss wrote:
> This does look odd. I filed this issue to track it:
>
> https://issues.apache.org/jira/browse/LUCENE-7550
>
> But I can't promise you I'll have the time to look into this any time
> soon. Feel free to step dow
This does look odd. I filed this issue to track it:
https://issues.apache.org/jira/browse/LUCENE-7550
But I can't promise you I'll have the time to look into this any time
soon. Feel free to step down through the source and see why the
difference is there (patches welcome!).
On Wed, Nov 9, 2016
Hi Dawid,
Thanks for your email. It seems StandardQueryParser is free from
this unexpected behavior.
I used the code below with Lucene 6.2.1
(org.apache.lucene.queryparser.classic.QueryParser)
QueryParser parser = new QueryParser("test", new WhitespaceAnalyzer());
parser.setDefaultOperat
Which Lucene version and which query parser is this? Can you provide a
test case/ code sample?
I just tried with StandardQueryParser and for:
sqp.setDefaultOperator(StandardQueryConfigHandler.Operator.AND);
dump(sqp.parse("foo AND bar OR baz", "field_a"));
sqp.setDefaultOpe
Hi Eric,
Thank you for your email.
I understand that Lucene queries are not boolean logic. My point is only
that I would expect identical Lucene queries built from the same input
string. My intuition says that the default operator should not matter in the 2
examples I presented in the previous email.
--
Pa
Lucene queries aren't boolean logic. You can simulate boolean logic by
explicitly parenthesizing, here's an excellent blog on this:
https://lucidworks.com/blog/why-not-and-or-and-not/
Best,
Erick
On Wed, Nov 9, 2016 at 1:37 AM, Pawel Rog wrote:
> Hello ,
> I have a query `foo AND bar OR baz`. W
Hi Adrien, I had a chance to test and I see that there is one more
solution. For fields that we want to search for exist/doesn't exist add one
more indexed field, like "ex_field=1" and can search by: +ex_field=1 or
-ex_field=1. It works fast.
On Fri, Nov 13, 2015 at 5:21 AM, Adrien Grand wrote:
Note that if you only have two fields A and B, you could make it faster by
returning `docFreq(A)+docFreq(B)-IndexSearcher.count(A AND B)` rather than
`IndexSearcher.count(A OR B)` since Lucene is typically faster at running
conjunctions than disjunctions.
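A sketch of that identity, |A or B| = df(A) + df(B) - |A and B|, on a toy index (field and terms invented):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;

public class UnionCountDemo {
    public static int unionCount() throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (String text : new String[] {"alpha", "beta", "alpha beta", "gamma"}) {
                Document doc = new Document();
                doc.add(new TextField("f", text, Field.Store.NO));
                w.addDocument(doc);
            }
        }
        try (DirectoryReader r = DirectoryReader.open(dir)) {
            IndexSearcher s = new IndexSearcher(r);
            Term a = new Term("f", "alpha"), b = new Term("f", "beta");
            Query and = new BooleanQuery.Builder()
                    .add(new TermQuery(a), BooleanClause.Occur.MUST)
                    .add(new TermQuery(b), BooleanClause.Occur.MUST)
                    .build();
            // df(A) + df(B) - |A AND B|: the conjunction is typically
            // cheaper to evaluate than the equivalent disjunction.
            return r.docFreq(a) + r.docFreq(b) - s.count(and);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(unionCount());
    }
}
```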
On Wed, Jul 20, 2016 at 15:41, Xiaolong Zh
Thanks! The use case I have is that I am trying to calculate the docFreq for
the suggestion word produced by my "did you
mean"/"spellcheck" feature.
I was trying to avoid having a second search request, but it seems in
this case I have to formulate another search query to do the job.
There is no way to get this statistic in constant-time. If you need it for
scoring, you need to make approximations. For instance, BlendedTermQuery
uses the max of the doc freqs as the aggregated doc freq.
Otherwise, you can also compute this number by running a BooleanQuery with
one SHOULD clause
Hi Taher,
Please find and see QueryParser.jj file in the source tree.
You can find all operators such as && || AND OR !.
Ahmet
On Sunday, May 15, 2016 1:57 PM, Taher Galal wrote:
Hi All,
I was just checking the query grammar found in the java docs of the query
parser :
Query ::= ( Clause )
Hi Daniel,
Since you are restricting inOrder=true and proximity=0 in the top-level query,
there is no problem in your particular example.
If you weren't restricting, injecting synonyms with a plain OR can sometimes cause
'query drift': the injection/addition of one term changes the result list drastically.
Are you calling the IndexSearcher#explain method to get the details of the
score calculation?
How exactly are your results not what you expect?
What Similarity are you using? Scores will be the product of the underlying
calculated scores and your term boost values.
-- Jack Krupansky
On Thu, Mar
Hi Vlad,
This is something that you generally can't do. If you have doc values
enabled on your fields, you can use Lucene's FieldValueQuery, but beware
that this query is very slow. Otherwise if your field is indexed, you can
run a TermRangeQuery that has both bounds open but this will be even slo
I did some analysis with access-control lists and found that our
customers have significant overlap in the documents they have access to,
so we would be able to realize very nice compression in the number of
terms in access control queries by indexing overlapping subsets.
However this is a fai
For queries with many terms, where each term matches few documents
(actually a single document for "ID filters" in my tests), I saw
speedups between 4x and 8x
http://heliosearch.org/solr-terms-query/ (the 3rd chart)
-Yonik
http://heliosearch.org - native code faceting, facet functions,
sub-facets
I suggested TermsFilter, not TermFilter :) Note the sneaky extra s
Mike McCandless
http://blog.mikemccandless.com
On Wed, Oct 29, 2014 at 8:20 AM, Pawel Rog wrote:
> Hi,
> I already tried to transform Queries to filter (TermQuery -> TermFilter)
> but didn't see much speed up. I wrote tha
Hi,
I already tried to transform queries into filters (TermQuery -> TermFilter)
but didn't see much speedup. I wrote that I wrapped this filter in a
ConstantScoreQuery, and in another test I used FilteredQuery with
MatchAllDocsQuery and BooleanFilter. Both cases seem to work quite similarly
in terms of pe
I'm curious to know more about your use case, because I have an idea for
something that addresses this, but haven't found the opportunity to
develop it yet - maybe somebody else wants to :). The basic idea is to
reduce the number of terms needed to be looked up by collapsing
commonly-occurring
Are the clauses simple TermQuery? If so, try TermsFilter: it sorts
the terms which should give some [small] speedup when visiting them in
the terms dict, and it reuses a single TermsEnum across all terms.
Mike McCandless
http://blog.mikemccandless.com
On Tue, Oct 28, 2014 at 9:40 PM, Pawel Ro
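TermsFilter belongs to the 4.x API; in current Lucene the analogous tool is TermInSetQuery, which likewise deduplicates and sorts the terms so the terms dictionary is visited in order. A sketch, with an invented field name:

```java
import java.util.Arrays;
import java.util.stream.Collectors;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.util.BytesRef;

public class IdFilterDemo {
    // Match any document whose "id" field equals one of the given values.
    public static TermInSetQuery idFilter(String... ids) {
        return new TermInSetQuery("id",
                Arrays.stream(ids).map(BytesRef::new).collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        System.out.println(idFilter("42", "7", "19"));
    }
}
```

Wrapping it in a ConstantScoreQuery (or using it as a filter clause) avoids scoring overhead when only matching matters.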
Hi András,
Thank you for your answer. I read the links you sent and I think the
following sentence :
"Lastly, it is not possible to “cross reference” between nested
documents. One nested doc cannot “see” another nested doc’s properties.
For example, you are not able to filter on “A.name” but
Hello Aurélien,
I believe the approach you described is what Elasticsearch is taking with
nested documents, in addition to indexing parent and child documents in a
single block. See the "sidebar" at the bottom of [1] and the sections
labeled "nested" of [2] for more details.
Michael's blog post o
Hi again,
Maybe the only way to do this is to use nested documents and to index
data both in child documents and in flattened form in the parent
document. Then we can run the two different queries.
Any other (better) idea?
Regards,
Aurélien
Le 20/10/2014 13:40, aurelien.mazo...@francelabs.
Hi again,
I see I missed a very important thing in your response. I thought I could not
reuse rewritten queries in different types of IndexReader, but you wrote that I
cannot use rewritten queries even in another instance of IndexReader: "not
even if it's a reopened reader against the same index".
I thought
Hi,
Thank you for your response, Chris. The good news is that I can pre-build
rewritten queries for a given IndexReader and then use them with the same
IndexReader. Can you tell me how I can achieve this?
I see each Query has a rewrite method which takes an IndexReader as an argument.
The only thing is just
: In the system which I develop I have to store many query objects in memory.
: The system also receives documents. For each document MemoryIndex is
: instantiated. I execute all stored queries on this MemoryIndex. I realized
: that searching over MemoryIndex takes much time for query rewriting. I'
Please elaborate on what you expect will be in this payload. Is it
information derived from the indexing process itself or is it external
information to be added to the indexed terms?
-- Jack Krupansky
-Original Message-
From: Mrugendra
Sent: Sunday, March 2, 2014 5:15 AM
To: java-us
Searching by child query alone will just find matching child docs,
scored "normally".
I.e., nothing special (for block join) happens in that case, unless
you are using the block join collector.
Mike McCandless
http://blog.mikemccandless.com
On Fri, Feb 21, 2014 at 1:38 AM, Priyanka Tufchi
wro
> > >
> > > On Tue, Oct 1, 2013 at 4:10 PM, Desidero wrote:
> > >
> > > > Uwe,
> > > >
> > > > I was using a bounded thread pool.
> > > >
> > > > I don't know if the problem was the task overload or something
y of searching a single segment rather than iterating
> > over
> > > multiple AtomicReaderContexts, but I'd lean toward task overload. I
> will
> > do
> > > some testing tonight to find out for sure.
> > >
> > > Matt
> > >
ating
> over
> > multiple AtomicReaderContexts, but I'd lean toward task overload. I will
> do
> > some testing tonight to find out for sure.
> >
> > Matt
> > Hi,
> >
> > use a bounded thread pool.
> >
> > Uwe
> >
> > -----
>
-
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: [email protected]
>
>
> > -Original Message-
> > From: Desidero [mailto:[email protected]]
> > Sent: Tuesday, October 01, 2013 11:37 PM
> > To: java-use
se a bounded thread pool.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: [email protected]
>
>
> > -Original Message-
> > From: Desidero [mailto:[email protected]]
> > Sent: Tues
PM
> To: [email protected]
> Subject: Re: Query performance in Lucene 4.x
>
> For anyone who was wondering, this was actually resolved in a different
> thread today. I misread the information in the
> IndexSearcher(IndexReader,ExecutorService) constructor documentation - I
&
e.apache.org
> Subject: Re: Query performance in Lucene 4.x
>
> For anyone who was wondering, this was actually resolved in a different
> thread today. I misread the information in the
> IndexSearcher(IndexReader,ExecutorService) constructor documentation - I
> was under the impre
For anyone who was wondering, this was actually resolved in a different
thread today. I misread the information in the
IndexSearcher(IndexReader,ExecutorService) constructor documentation - I
was under the impression that it was submitting a thread for each index
shard (MultiReader wraps 20 shards,
Erick,
Thank you for responding.
I ran tests using both compressed fields and uncompressed fields, and it
was significantly slower with uncompressed fields. I looked into the lazy
field loading per your suggestion, but we don't get any values from the
returned Documents until the result set has b
Hmmm, since 4.1, fields have been stored compressed by default.
I suppose it's possible that this is a result of compressing/uncompressing.
What happens if
1> you enable lazy field loading
2> don't load any fields?
FWIW,
Erick
On Thu, Sep 26, 2013 at 10:55 AM, Desidero wrote:
> A quick update:
A quick update:
In order to confirm that none of the standard migration changes had a
negative effect on performance, I ported my Lucene 4.x version back to
Lucene 3.6.2 and kept the newer API rather than using the custom
ParallelMultiSearcher and other deprecated methods/classes.
Performance in
't want, then you need to escape it.
-- Jack Krupansky
-Original Message-
From: Ankit Murarka
Sent: Thursday, September 12, 2013 11:36 AM
To: [email protected]
Subject: Re: Query type always Boolean Query even if * and ? are present.
Bingo! This has solved my case... Thanks
AM
To: [email protected]
Subject: Re: Query type always Boolean Query even if * and ? are present.
If I remove the escape call from the function, then it works as
expected.. Prefix/Boolean/Wildcard..
But this is NOT what I want... The escape should be present else I will
get lexical err
The trailing asterisk in your query input is escaped with a backslash, so
the query parser will not treat it as a wildcard.
-- Jack Krupansky
-Original Message-
From: Ankit Murarka
Sent: Thursday, September 12, 2013 10:19 AM
To: [email protected]
Subject: Query type always B
[email protected]
Subject: Re: Query type always Boolean Query even if * and ? are present.
If I remove the escape call from the function, then it works as
expected.. Prefix/Boolean/Wildcard..
But this is NOT what I want... The escape should be present else I will
get lexical error in ca
I also tried it with this query:
*
I am still getting it as Boolean Query.. It should be Prefix...
On 9/12/2013 8:50 PM, Jack Krupansky wrote:
The trailing asterisk in your query input is escaped with a backslash,
so the query parser will not treat it as a wildcard.
-- Jack Krupansky
-O
If I remove the escape call from the function, then it works as
expected.. Prefix/Boolean/Wildcard..
But this is NOT what I want... The escape should be present else I will
get lexical error in case of Prefix/Boolean/Wildcard since my input will
definitely contain special characters...
Help
You probably want something more like "electro hydraulic power assist
steering"~5,
quote marks and all. And note that it's not quite "within 5 positions",
it's more
"up to five single-word transpositions" which is kind of a slippery concept.
"electro hydraulic assist power steering"~5 would requi
Sorry, hit send by accident previously. Anyway, I wanted to make sure my
interpretation of this query was correct:
+(((content:electro) (content:hydraulic) (content:power) (content:assist)
(content:steer))~5)
This is saying that all words: electro, hydraulic, power, assist and steer
in the conte
Krupansky
-Original Message-
From: Michael Sokolov
Sent: Sunday, August 04, 2013 4:55 PM
To: [email protected]
Cc: Denis Bazhenov
Subject: Re: Query serialization/deserialization
On 07/28/2013 07:32 PM, Denis Bazhenov wrote:
A full JSON query ser/deser would be an especially nice
On 07/28/2013 07:32 PM, Denis Bazhenov wrote:
A full JSON query ser/deser would be an especially nice addition to Solr,
allowing direct access to all Lucene Query features even if they haven't been
integrated into the higher level query parsers.
There is nothing we could do, so we wrote one, in
> A full JSON query ser/deser would be an especially nice addition to Solr,
> allowing direct access to all Lucene Query features even if they haven't been
> integrated into the higher level query parsers.
There is nothing we could do, so we wrote one, in fact :) I'll try to elaborate
with the t
Yeah, it's a shame such a ser/deser feature isn't available in Lucene.
My idea is to have a separate module that the Query classes can delegate to
for serialization and deserialization, handling recursion for nested query
objects, and then have modules for XML, JSON, and a pseudo-Java functiona
Hi Denis,
Indeed, Query.toString() only tries to give a human-understandable
representation of what the query searches for and doesn't guarantee
that it can be parsed again and would give the same query. We don't
provide tools to serialize queries but since query parsing is usually
lightweight com
We don't commonly use the term "query expansion" for Lucene and Solr, but I
would say that there are two categories of "QE":
1. Lightweight QE, by which I mean things like synonym expansion, stemming,
stopword removal, spellcheck, and anything else that modifies the raw query
in any way that a
Sounds like you need a PhraseQuery.
-Original Message-
From: madan mp [mailto:[email protected]]
Sent: Wednesday, July 17, 2013 7:40 AM
To: [email protected]
Subject: query on exact match in lucene
How to get an exact string match?
ex- I am searching for a file which consists of s
al Message-
From: Ross Simpson
Sent: Wednesday, May 22, 2013 7:44 AM
To: [email protected]
Subject: Re: Query with phrases, wildcards and fuzziness
One further question:
If I wanted to construct my query using Query implementations instead of
a QueryParser (e.g. TermQuery, WildcardQu
One further question:
If I wanted to construct my query using Query implementations instead of
a QueryParser (e.g. TermQuery, WildcardQuery, etc.), what's the right
way to duplicate the "OR" functionality I wrote about below? As I
mentioned, I've read that wrapping query objects in a BooleanQ
Jack, thanks very much! I wasn't considering a space to be a special character
for some reason. That has worked perfectly.
Cheers,
Ross
On May 22, 2013, at 10:24 AM, Jack Krupansky wrote:
> Just escape embedded spaces with a backslash.
>
> -- Jack Krupansky
>
> -Original Message- From: R
Just escape embedded spaces with a backslash.
-- Jack Krupansky
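A sketch of the effect, using KeywordAnalyzer so the analyzer doesn't re-split the chunk (field name and text are illustrative):

```java
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class EscapedSpaceDemo {
    public static Query parse() throws Exception {
        QueryParser parser = new QueryParser("city", new KeywordAnalyzer());
        // The backslash-escaped space keeps "San Francisco" as one chunk;
        // unescaped, it would parse as two separate clauses.
        return parser.parse("San\\ Francisco");
    }

    public static void main(String[] args) throws Exception {
        Query q = parse();
        System.out.println(q.getClass().getSimpleName() + ": " + q);
    }
}
```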
-Original Message-
From: Ross Simpson
Sent: Tuesday, May 21, 2013 8:08 PM
To: [email protected]
Subject: Query with phrases, wildcards and fuzziness
Hi all,
I'm trying to create a fairly complex query, and havi
No problem. Glad you found the error. It's always in the custom code
somewhere.
--
Ian.
On Mon, Jan 14, 2013 at 12:04 PM, Hankyu Kim wrote:
> I just found the cause of error and you were right about my code being the
> source.
> I used "Character.getNumericValue(termBuffer[0]) == -1" to test
I just found the cause of error and you were right about my code being the
source.
I used "Character.getNumericValue(termBuffer[0]) == -1" to test if
termBuffer[0] is equal to null, but apparently special characters also
return -1 when given as a parameter.
Thank you for your help.
2013/1/14
I did intend to ignore all the spaces, so that's not the problem.
Here's the tokenization chain in customAnalyser class, extending Analyser
@Override
protected TokenStreamComponents createComponents(String fieldName,
Reader reader) {
NGramTokenizer src = new NGramTokenizer(matchVer
In fact I see you are ignoring all spaces between words. Maybe that's
deliberate. Break it down into the smallest possible complete code
sample that shows the problem and post that.
--
Ian.
On Mon, Jan 14, 2013 at 11:02 AM, Ian Lea wrote:
> It won't be IndexWriter or IndexWriterConfig. What
It won't be IndexWriter or IndexWriterConfig. What exactly does your
analyzer do - what is the full chain of tokenization? Are you saying
that ':)a' and ')an' are not indexed? Surely that is correct given
your input with a space after the :). And before as well, so 's:)' is
also suspect.
--
I
I'm working with Lucene 4.0 and I didn't use Lucene's QueryParser, so
setAllowLeadingWildcard() is irrelevant.
I also realised the issue wasn't with querying; it was indexing, which
left out the terms with a leading special character.
My goal was to do a fuzzy match by creating a trigram index. T
QueryParser has a setAllowLeadingWildcard() method. Could that be relevant?
What version of lucene? Can you post some simple examples of what
does/doesn't work? Post the smallest possible, but complete, code that
demonstrates the problem?
With any question that mentions a custom version of som
y Funstein
Sent: Thursday, October 25, 2012 8:55 PM
To: [email protected]
Subject: Re: query for documents WITHOUT a field?
This is the QueryParser syntax, right? So an API equivalent for the not
null case would be something like this?
BooleanQuery q = new BooleanQuery();
q.add(new Boolean
ld be "OR (*:* -allergies:[* TO *])" in
> Lucene/Solr.
>
> -- Jack Krupansky
>
> -Original Message- From: Vitaly Funstein
> Sent: Thursday, October 25, 2012 8:25 PM
> To: [email protected]
> Subject: Re: query for documents WITHOUT a field?
>
>
"OR allergies IS NULL" would be "OR (*:* -allergies:[* TO *])" in
Lucene/Solr.
-- Jack Krupansky
-Original Message-
From: Vitaly Funstein
Sent: Thursday, October 25, 2012 8:25 PM
To: [email protected]
Subject: Re: query for documents WITHOUT a field?
So
se and
> > PrefixQuery(field, "") as MUST_NOT clause. But the PrefixQuery will do a
> > full term index scan without caching :-). You may use
> CachingWrapperFilter
> > with PrefixFilter instead.
> >
> > -
> > Uwe Schindler
> > H.-H.-M
No. See the FAQ.
http://wiki.apache.org/lucene-java/LuceneFAQ#How_do_I_update_a_document_or_a_set_of_documents_that_are_already_indexed.3F
There are a couple of ideas floating around e.g.
http://www.flax.co.uk/blog/2012/06/22/updating-individual-fields-in-lucene-with-a-redis-backed-codec/
or http
Hi there,
Is it possible to update a document in the Lucene index with an
additional field?
I have a massive index and would like to add a numeric field with a date
in number format into each document.
This is to perform searches with NumericRangeFilters using the dates as
numbers when search
org.apache.lucene.index.PKIndexSplitter in contrib-misc sounds promising.
www.slideshare.net/abial/eurocon2010 "Munching & crunching - Lucene
index post-processing" sounds well worth a look too.
Or just build new indexes from scratch routing docs to the correct
index however you choose.
--
Ian
Hello Ivan
Thanks for the reply.
1. I tried to use Lucene. It stored the index data on the hard
disk, and the search works well. How will ehcache on top of Lucene help
performance?
2. Will Lucene used alone give better performance, *or* will ehcache
used on top of Lucene give better pe
A cache should be independent of the data store. Ehcache works well in
front of Lucene as well as a (relational) database. However, caches
work great for key/value data, so the cache value would be a result
set. Is caching the grouped result good enough?
--
Ivan
On Tue, Apr 10, 2012 at 1:40 PM,
Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: [email protected]
>
>
>> -Original Message-
>> From: Tim Eck [mailto:[email protected]]
>> Sent: Thursday, February 16, 2012 10:14 PM
>> To: [email protected]
>>
t caching :-). You may use CachingWrapperFilter
> with PrefixFilter instead.
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: [email protected]
>
>
>> -Original Message-
>> From: Tim Eck [mailto:tim
ngWrapperFilter
with PrefixFilter instead.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: [email protected]
> -Original Message-
> From: Tim Eck [mailto:[email protected]]
> Sent: Thursday, February 16, 2012 10:14 PM
> To: [email protected]