Hello,
Please check the warning box titled "Using which":
https://lucene.apache.org/solr/guide/8_5/other-parsers.html#block-join-parent-query-parser
On Wed, Jun 24, 2020 at 10:01 AM Tor-Magne Stien Hagen wrote:
> Hi,
>
> I have indexed the following nested document in Solr:
>
> {
uot;,
"children": [
{
"id": "5",
"class": "instruction"
}
]
}
]
}
Given the following query:
{!parent which='id:4'}id:3
I expect the result to be
Is there any other option?
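Per the warning linked in the reply, the `which` parameter of `{!parent}` must be a filter that matches every parent document in the index (for example a doctype flag), never a single id like `id:4`. A minimal sketch of building such a query, assuming a hypothetical `content_type` field that marks parent documents:

```python
def block_join_parent_query(all_parents_filter: str, child_query: str) -> str:
    """Build a {!parent} block-join query string.

    `all_parents_filter` must match ALL parent documents (a doctype
    flag), not one specific parent; a filter matching only some parents
    corrupts the block-join result.
    """
    return "{!parent which='%s'}%s" % (all_parents_filter, child_query)

# Hypothetical schema: parents are tagged content_type:parentDocument.
q = block_join_parent_query("content_type:parentDocument", "class:instruction")
print(q)  # {!parent which='content_type:parentDocument'}class:instruction
```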
Sent from Outlook<http://aka.ms/weboutlook>
From: Mikhail Khludnev
Sent: Sunday, May 24, 2020 3:24 AM
To: solr-user
Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
Unfortunately {!terms} doesn't let one ^boost terms.
Check your “df” parameter in all your handlers in solrconfig.xml.
Second, add debug=query to the query and look at the parsed query it
returns; you'll probably see some field qualified by "text:…".
Offhand, though, I don’t see where that’s happening in your query.
Wait, how are you submitting the query?
On 6/15/2020 2:52 PM, Deepu wrote:
sample query is
"{!complexphrase inOrder=true}(all_text_txt_enus:\"by\\ test*\") AND
(({!terms f=product_id_l}959945,959959,959960,959961,959962,959963)
AND (date_created_at_rdt:[2020-04-07T01:23:09Z TO *} AND
date_created_at_rdt:{* TO 2020-
things or to override the default
operator order.
https://lucene.apache.org/solr/guide/8_5/the-standard-query-parser.html#escaping-special-characters
The edismax parser supports a superset of what the standard (lucene)
parser does, so they have the same special characters.
Thanks,
Shawn
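Since both parsers share the same special characters, user input like the chemical name discussed in this thread can be backslash-escaped before querying. A rough sketch; the character list follows the Ref Guide page linked above and should be treated as an assumption, not an exhaustive spec:

```python
import re

# Lucene/edismax syntax characters: + - && || ! ( ) { } [ ] ^ " ~ * ? : \ /
_SPECIALS = re.compile(r'([+\-&|!(){}\[\]^"~*?:\\/])')

def escape_query(term: str) -> str:
    """Backslash-escape query-parser special characters in a user term."""
    return _SPECIALS.sub(r'\\\1', term)

print(escape_query("1,3-DIMETHYL-5-(3-PHENYL-ALLYLIDENE)-PYRIMIDINE-2,4,6-TRIONE"))
# 1,3\-DIMETHYL\-5\-\(3\-PHENYL\-ALLYLIDENE\)\-PYRIMIDINE\-2,4,6\-TRIONE
```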
Hi All,
I am trying to use {!complexphrasequeryparser inOrder=true} along with
other text fields. I am using the solrj client to initiate the request.
sample query is
"{!complexphrase inOrder=true}(all_text_txt_enus:\"by\\ test*\") AND
(({!terms
on.
>
> -Original Message-
> From: Markus Jelsma
> Sent: Saturday, June 13, 2020 4:57 AM
> To: solr-user@lucene.apache.org
> Subject: RE: eDismax query syntax question
>
> Hello,
>
> These are special characters, if you don't need them, you must escape
Hi Webster,
What does the query debug say? If you set debug=true in the request, you can
get a better idea about how the two queries are interpreted.
Andrea
On Mon, 15 Jun 2020 at 16:01, Webster Homer <
webster.ho...@milliporesigma.com> wrote:
> Markus,
> Thanks, for t
Hello,
These are special characters, if you don't need them, you must escape them.
See top of the article:
https://lucene.apache.org/solr/guide/8_5/the-extended-dismax-query-parser.html
Markus
-Original message-
> From:Webster Homer
> Sent: Friday 12th June 2020 22:09
>
Recently we found strange behavior in a query. We use eDismax as the query
parser.
This is the query term:
1,3-DIMETHYL-5-(3-PHENYL-ALLYLIDENE)-PYRIMIDINE-2,4,6-TRIONE
It should hit one document in our index. It does not. However, if you use the
Dismax query parser, it does match the record.
Guilherme,
The answer is likely to be dependent on the query parser, query parser
configuration, and analysis chains. If you post those it could aid in
helping troubleshoot. One thing that jumps to mind is the asterisks
("*") -- if they're interpreted as wildcards, that could be
a problem.
___
> De : Guilherme Viteri
> Envoyé : 10 juin 2020 16:57
> À : solr-user@lucene.apache.org
> Objet : [EXTERNAL] - SolR OOM error due to query injection
>
> Hi,
>
> Environment: SolR 6.6.2, with org.apache.solr.solr-core:6.1.0. This setup has
>
@lucene.apache.org
Objet : [EXTERNAL] - SolR OOM error due to query injection
Hi,
Environment: SolR 6.6.2, with org.apache.solr.solr-core:6.1.0. This setup has
been running for at least 4 years without having OutOfMemory error. (it is
never too late for an OOM…)
This week, our search tool has
. These requests weren't aggressive in the sense of stressing the server with
an excessive number of hits; however, 5 to 10 requests of this nature were
enough to crash the server.
I've come across this link:
https://stackoverflow.com/questions/26862474/prevent-from-solr-query-injections-when-using-solrj
or probably -director_id:[* TO *]
On Mon, Jun 8, 2020 at 10:56 PM Hari Iyer wrote:
> Hi,
>
> It appears that a query criteria is mandatory for a join. Taking this
> example from the documentation: fq={!join from=id fromIndex=movie_directors
> to=director_id}has_oscar:true.
Hi,
It appears that query criteria are mandatory for a join. Taking this example
from the documentation: fq={!join from=id fromIndex=movie_directors
to=director_id}has_oscar:true. What if I want to find all movies that have a
director (regardless of whether they have won an Oscar)?
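If the goal is only "a join row must exist", a match-all query on the from side works, as the reply above suggests. A small sketch of assembling the fq; field and core names are taken from the documentation example:

```python
def join_fq(from_field: str, from_index: str, to_field: str,
            from_query: str = "*:*") -> str:
    """Build a {!join} filter; the default *:* from-query requires only
    that some joined document exists, with no further criteria."""
    return "{!join from=%s fromIndex=%s to=%s}%s" % (
        from_field, from_index, to_field, from_query)

# All movies that have a director, Oscar winner or not:
print(join_fq("id", "movie_directors", "director_id"))
# {!join from=id fromIndex=movie_directors to=director_id}*:*
```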
Thanks for the support Erick. Not using the “qf" parameter at all seems to give
me valid query results now. The query debug information:
"debug":{ "rawquerystring":"claims_en:(An English sentence) description_en:(An
English sentence) claims_de:(Ein Deutscher Satz)
Let's see the results of adding debug=query to the query, in particular the
parsed version, because what you're reporting doesn't really make sense.
edismax should be totally ignoring the "qf" parameter, since you're
specifically qualifying all the clauses with a field. Unless you're not really
the index will look something like this:
* text_part1_en: empty
* text_part2_en: empty
* text_part1_de: German text
* text_part2_de: Another German text
For an English document it will be the other way around.
What I want to achieve: A user entering a query in English should
Is this an odd use-case, where one needs to convert a Lucene query to a Solr
query? Isn't this a normal use-case, when somebody is trying to port their
Lucene code to Solr?
I mean, is it like an XY problem, where I should not even run into this
problem in the first place?
On Sun, May 31, 2020 at 9:40 AM
There's nothing like this now. Presumably one might visit queries and
generate Query DSL json, but it might be a challenging problem.
On Sun, May 31, 2020 at 3:42 AM gnandre wrote:
> I think this question here in this thread is similar to my question.
>
> https://lucene.472066.n3.n
I think this question here in this thread is similar to my question.
https://lucene.472066.n3.nabble.com/Lucene-Query-to-Solr-query-td493751.html
As suggested in that thread, I do not want to use toString method for
Lucene query to pass it to the q param in SolrQuery.
I am looking
with the next bit needed.
In your case, because you have such large indexes relative to your physical
memory, you’re having to re-read indexes into memory from disk quite often.
Then you query collection2 and guess what? The
search you just ran on collection1 may have replaced many of the pages
Thanks again, Erick, for pointing us in the right direction.
Yes, I am seeing heavy disk I/O while querying. I queried a single
collection. A query for 10 rows can cause 100-150 MB disk read on each
node. While querying for a 1000 rows, disk read is in range of 2-7 GB per
node.
Is this normal? I
edismax is quite different from straight Lucene.
Try attaching debug=query to the input and
you'll see the difference.
Best,
Erick
> On May 30, 2020, at 12:32 AM, gnandre wrote:
>
> Hi,
>
> I have following query which works fine as a lucene query:
> +(topics:132)^0.02
Hi,
I have following query which works fine as a lucene query:
+(topics:132)^0.02607211 (topics:146)^0.008187325
-asset_id:doc:en:index.html
But, it does not work if I use it as a solr query with lucene as defType.
For it to work, I need to convert it like following:
q=+((topics:132)^0.02607211
node has
28 replicas/node and handles over a terabyte of index in aggregate. At first
blush, you’ve overloaded your hardware. My guess here is that one node or
the other has to do a lot of swapping/gc/whatever quite regularly when
you query. Given that you’re on HDDs, this can be quite expensive
is
my observation. I ran queries with the debugQuery param and found that the
query response time depends on the worst performing shard as some of the
shards take longer to execute the query than other shards.
Here are my questions:
1. Is decreasing number of shards going to h
gus. If you run the tests in the order you
gave, the first one will read the necessary data from disk and probably have it
in the OS disk cache for the second and subsequent. And/or you’re getting
results from your queryResultCache (although you’d have to have a big one).
Specifying the exact
I have a Solr cloud setup (Solr 7.4) with a collection "test" having two
shards on two different nodes. There are 4M records equally distributed
across the shards.
If I query the collection like below, it is slow.
http://localhost:8983/solr/test/select?q=*:*&rows=10
QTime: 6930
Unfortunately {!terms} doesn't let one ^boost terms.
On Sat, May 23, 2020 at 10:13 AM vishal patel
wrote:
> Hi Jason
>
> Thanks for reply.
>
> I have checked jay's query using "terms" query parser and it is really
> helpful to us. After execute using "terms&
Hi Jason
Thanks for reply.
I have checked Jay's query using the "terms" query parser and it is really
helpful to us. After executing with the "terms" query parser, it returns
within 500 milliseconds even though grouping is applied.
Jay's Query :
https://drive.google.com/file/d/
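The reformulation suggested in this thread, replacing a huge Boolean OR of ids with the "terms" query parser, can be sketched like this. Note, as mentioned elsewhere in this digest, that {!terms} cannot ^boost individual terms:

```python
def terms_query(field: str, values) -> str:
    """Build a {!terms} query: constant-score set membership, far
    cheaper to parse than thousands of OR'ed term clauses."""
    return "{!terms f=%s}%s" % (field, ",".join(str(v) for v in values))

ids = [10519539, 10519540, 10523575, 10523576]
print(terms_query("msg_id", ids))
# {!terms f=msg_id}10519539,10519540,10523575,10523576
```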
Hi Jay,
I can't speak to why you're seeing a performance change between 6.x
and 8.x. What I can suggest though is an alternative way of
formulating the query: you might get different performance if you run
your query using Solr's "terms" query parser:
https://lucene.apache.org/solr
Hello, I need to do the following:
I have a main query which defines a subquery called group with "fields":
"*,group:[subquery]".
The group document has a lot of fields, but I want to filter the main query
based on one of them.
ex:
{
PID:1,
type:doc,
"group"
> What happens if you reexecute the query?
No visible difference; only a minor change in milliseconds.
> Are there other processes/containers running on the same VM?
No
> How much heap and how much total memory do you have?
My heap and total memory are the same as with Solr 6.1.0: heap memory 5 GB and total memo
Did you create Solrconfig.xml for the collection from scratch after upgrading
and reindexing? Was it based on the latest template?
If not then please try this. Maybe also you need to increase the corresponding
caches in the config.
What happens if you reexecute the query?
Are there other
Is anyone looking into this issue?
I have the same issue.
Regards,
Vishal Patel
From: jay harkhani
Sent: Wednesday, May 20, 2020 7:39 PM
To: solr-user@lucene.apache.org
Subject: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
Hello,
Currently I upgrade
have sets of params that repeat often but not always, you
could do some variable substitutions to loop them in with paramSets
5) Move the sorting query into a boost query, just for clarity of intent
Regards,
Alex.
On Tue, 19 May 2020 at 10:16, vishal patel
wrote:
>
>
> Which que
Hello,
I recently upgraded Solr from 6.1.0 to 8.5.1 and came across one issue.
A query which has many ids (around 3000) and grouping applied takes much longer
to execute. In Solr 6.1.0 it takes 677ms; in Solr 8.5.1 it takes 26090ms.
While taking these readings we had the same Solr schema
Hi, I don't think query size can affect the kind of parser chosen. I
remember there is a maximum number of boolean clauses (maxBooleanClauses),
but that is a slightly different thing.
If the query is too large, you can get an HTTP error (bad request?), I
don't remember; well, just change the http
Which query parser is used if my query length is large?
My query is
https://drive.google.com/file/d/1P609VQReKM0IBzljvG2PDnyJcfv1P3Dz/view
Regards,
Vishal Patel
Is anyone looking into my issue? Due to this issue I cannot upgrade to Solr 8.3.0.
regards,
Vishal Patel
From: vishal patel
Sent: Sunday, May 17, 2020 11:49 AM
To: solr-user
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
Solr 6.1.0 : 1881
From: vishal patel
Sent: Sunday, May 17, 2020 11:04 AM
To: solr-user
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
Thanks for the reply.
I know the query field value is large, but the same thing works fine in Solr
6.1.0, and the query executes within 300 milliseconds. Schema.xml and
solrconfig.xml are the same. Why is it taking so long in Solr 8.3.0?
Are there any changes in Solr 8.3.0?
Regards,
Vishal
)
org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)
It seems like it ranks groups by query score, which is a doubtful thing to do.
From the log, here's how to recognize the query running 25 sec: "QTime=25063".
The query itself, q=+msg_id:(10519539+10519540+10523575+10523576+ ..., is
not what search eng
Thanks for the reply.
I have taken a thread dump at the time of query execution. I do not know the
thread name, so I am sending all threads. I have also sent the logs so you can
get an idea.
Thread Dump All Stack Trace:
https://drive.google.com/file/d/1N4rVXJoaAwNvPIY2aw57gKA9mb4vRTMR/view
Solr 8.3 shard
Can you check Thread Dump in Solr Admin while Solr 8.3 crunches query for
34 seconds? Please share the deepest thread stack. This might give a clue
what's going on there.
On Sat, May 16, 2020 at 11:46 AM vishal patel
wrote:
> Any one is looking my issue? Please help me.
>
> Sent fro
Is anyone looking into my issue? Please help me.
From: vishal patel
Sent: Friday, May 15, 2020 3:06 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
Well, in a way, QTime can depend on the total number of terms existing in
the core.
It would have been better if you had posted a sample query and the analysis
chain.
On Mon, 11 May 2020 at 11:45, Anshuman Singh
wrote:
> Suppose I have two phone numbers P1 and P2 and the number of records with
&
I have the query debug results for both versions, so they will be helpful.
Solr 6.1 query debug URL:
https://drive.google.com/file/d/1ixqpgAXsVLDZA-aUobJLrMOOefZX2NL1/view
Solr 8.3.1 query debug URL:
https://drive.google.com/file/d/1MOKVE-iPZFuzRnDZhY9V6OsAKFT38U5r/view
I indexed the same data in both versions
d queries
> on import. The doc number repeats the sqls.
>
> "verbose-output":
> [ "entity:parent",
> ..
> [ "document#5", [
> ...
> "entity:nested1", [
> "query", "SELECT body AS nested1 FROM table WHERE p_id = '123
I am upgrading Solr 6.1.0 to Solr 8.3.0 or Solr 8.5.1.
I get a performance issue with query execution in Solr 8.3.0 or Solr 8.5.1
when one field has many values in the query and a group field is applied.
My Solr URL:
https://drive.google.com/file/d/1UqFE8I6M451Z1wWAu5_C1dzqYEOGjuH2/view
My Solr
Thanks for the reply.
Yes, the query is large, but our functionality is like this. And query length
doesn't matter, because the same thing works fine in Solr 6.1.0.
Multi-valued return fields are not an issue in my case. If I pass a single
return field (fl=id) it also takes time (34 seconds). But if I
I am attempting to use nested entities to populate documents from
different tables, and verbose/debug output shows repeated queries
on import. The document number repeats the SQL queries.
"verbose-output":
[ "entity:parent",
..
[ "document#5", [
...
"entity:nested1&q
; matches using function queries:
> > fq={!frange l=3}sum(termfreq(field, ‘barker’), termfreq(field, ‘jones’),
> > termfreq(field, ‘baker’))
> >
> > It is not perfect and you will need to handle phrases at index time to be
> > able to match phrases. Or you can combine it
Hey Vishal,
That's quite a large query. But I think the problem might be completely
unrelated. Are any of the return fields multi-valued? There was a major bug
(SOLR-14013 <https://issues.apache.org/jira/browse/SOLR-14013>) in
returning multi-valued fields that caused trivial queries t
I am upgrading Solr 6.1.0 to Solr 8.3.0.
I have created 2 shards and one form collection in Solr 8.3.0. My schema file
of form collection is same as Solr 6.1.0. Also Solr config file is same.
I am executing below URL
http://193.268.300.145:8983/solr/forms/select?q=(+(doctype:Apps AND
While the java.lang.NullPointerException seems odd, overall the system
behavior seems sane. An overloaded system might not accept incoming
connections, and that triggers the exception on the client side.
Overall, please add more details, like server-side logs, because so far it's
not clear.
On Wed, May 13, 2020 at
Upon examining the Solr source code it appears that it was unable to even make
a connection in the time allowed.
While the error message was a bit confusing, I do understand what it means.
> On May 12, 2020, at 2:08 PM, Phill Campbell
> wrote:
>
>
>
>
org.apache.solr.client.solrj.SolrServerException: Time allowed to handle this
request exceeded:…
at
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
at
Suppose I have two phone numbers P1 and P2 and the number of records with
P1 are X and with P2 are 2X (2 times X) respectively. If I query for R rows
for P1 and P2, the QTime in case of P2 is more. I am not specifying any
sort parameter and the number of rows I'm asking for is same in both
ilter on number of
> matches using function queries:
> fq={!frange l=3}sum(termfreq(field, ‘barker’), termfreq(field, ‘jones’),
> termfreq(field, ‘baker’))
>
> It is not perfect and you will need to handle phrases at index time to be
> able to match phrases. Or you can combine it with
’))
It is not perfect and you will need to handle phrases at index time to be able
to match phrases. Or you can combine it with some other query to filter out
unwanted results and use this approach to make sure frequencies match.
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr
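Emir's frange/termfreq trick above can be mimicked client-side to see what it computes. A toy sketch, with whitespace tokenization standing in for the real analysis chain:

```python
from collections import Counter

def min_total_mentions(docs, terms, minimum):
    """Mimic fq={!frange l=N}sum(termfreq(f,t1),termfreq(f,t2),...):
    keep docs whose summed term frequencies reach the lower bound."""
    kept = []
    for doc_id, text in docs.items():
        tf = Counter(text.lower().split())
        if sum(tf[t] for t in terms) >= minimum:
            kept.append(doc_id)
    return kept

docs = {
    "d1": "barker met jones and baker",   # 1 + 1 + 1 = 3 -> kept
    "d2": "barker barker",                # 2 -> filtered out
}
print(min_total_mentions(docs, ["barker", "jones", "baker"], 3))  # ['d1']
```

As Emir notes, this only counts single terms; phrases need index-time handling.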
kes it possible to say that a
certain minimum number of those clauses must match. The DisMax query parser
offers great flexibility in how the minimum number can be specified.
We did try doing a query, and the results that came back reflected
only the minimum number of phrases matching, as oppos
Hi,
Did you happen to look into :
https://lucene.apache.org/solr/guide/6_6/the-dismax-query-parser.html#TheDisMaxQueryParser-Themm_MinimumShouldMatch_Parameter
I believe 6.5.1 has it too.
I hope it should help.
On Wed, May 6, 2020 at 6:46 PM Russell Bahr wrote:
> Hi SOLR team,
>
Hi SOLR team,
I have been asked if there is a way to return results only if those results
match a minimum number of mentions present in the query
(queries looking for a minimum number of mentions of a particular
term/phrase, i.e. it must be mentioned 'x' times to return results
The easiest way to answer questions like this is an under-appreciated parameter
“explainOther” when submitted with “debug=true”. It’ll return an explanation of
how the doc identified by the “explainOther” parameter was scored.
See: https://lucene.apache.org/solr/guide/8_1/common-query
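As a sketch of what such a request looks like; the parameter values here are hypothetical, reusing an id from elsewhere in this digest:

```python
from urllib.parse import urlencode

# explainOther asks Solr to score a doc that did NOT make the top hits,
# so you can see exactly why it ranked where it did (with debug=true).
params = {
    "q": "person:[80 TO 100]",                     # hypothetical main query
    "debug": "true",
    "explainOther": "id:COLLECT2601697594_T496",   # doc to explain
}
print("/select?" + urlencode(params))
```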
I'm running the following query:
id:COLLECT2601697594_T496 AND (person:[80 TO 100])
That returns 1 hit.
The following query also returns the same hit:
id:COLLECT2601697594_T496 AND ((POP16_Rez1:blue_Sky AND POP16_Sc1:[80 TO
100]) OR (POP16_Rez2:blue_Sky AND POP16_Sc2:[80 TO 100
Hello all,
I noticed that solr8 parses the edismax queries differently from solr7.
the querystring and parsedquery in solr 8.4.1 are
"querystring":"(_query_:\"{!edismax qf='titles subtitles study_brief_title
abstracts abstract_background abstract_objective abstract_methods
abstract_results
I’m not sure I get the problem.
How do you “filter the records and only display those that match the filter
string”? Do you attach an fq clause to the original query? If so, the return
set _is_ the number of docs that match the filter (and the original query), and
the numFound from
I have a use case that I would think is a common one but I cannot find any help
with this use case.
I am wanting to do a query that returns a list of records that I will display
in an html table in an app. This table only displays n records of the complete
data set, but is able to page
Your original formation of the filter query has two problems:
1> you included a “+” in the value. My guess is that you misinterpreted the
URL you got back from the browser in the admin UI where a “+” is a
URL-encoded space. You’ll also see a bunch of %XX in the URL wh
Thanks Avi, it worked.
Raboah, Avi wrote (Tue, Mar 24, 2020,
11:08):
> You can do something like that if we are talking on the same filter query
> name.
>
> addFilterQuery(String.format("%s:(%s %s)", filterName, value1, value2));
>
>
> -Orig
You can do something like that if we are talking on the same filter query name.
addFilterQuery(String.format("%s:(%s %s)", filterName, value1, value2));
-Original Message-
From: Szűcs Roland
Sent: Tuesday, March 24, 2020 11:35 AM
To: solr-user@lucene.apache.org
Subject:
Hi All,
I use Solr 8.4.1 and the latest solrj client.
There is a field, let's call it filterName, which can have 3 different
values. If I use the admin UI, I write the following in the fq:
filterName:"value1" filterName:"value2" and it works as expected.
If I use the solrJ SolrQuery.addFilterQuery method and call it
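The fix shown in this thread, one filter query containing both values, amounts to the following. The field and values are the hypothetical ones from the question, and the implicit OR assumes the default q.op:

```python
def filter_query(field: str, *values: str) -> str:
    """Combine several values of one field into a single fq.  With the
    default q.op=OR, filterName:("value1" "value2") matches either
    value; issuing one fq per value would intersect the filters and
    match nothing."""
    quoted = " ".join('"%s"' % v for v in values)
    return "%s:(%s)" % (field, quoted)

print(filter_query("filterName", "value1", "value2"))
# filterName:("value1" "value2")
```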
First of all, if you’re really using pre-and-postfix wildcards and those
asterisks are not just bold formatting, those are very expensive operations.
I’d suggest you investigate alternatives (like ngramming) or other alternate
ways of analyzing your input (both at indexing and query time
I am using Solr 6.1.0. We have 2 shards and each has one replica. Our index
size is very large.
I found out that the position of a field in the query impacts performance.
If I make the query below, I get a slow response:
(doc_ref:((*KON\-N2*) )) AND (title:((*cdrl*) )) AND project_id:(2104616
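Following the n-gramming suggestion above: if terms are n-grammed at index time, an infix like cdrl becomes a plain term lookup instead of a double-wildcard scan. A toy sketch of index-time n-gram emission:

```python
def ngrams(term: str, n: int = 4) -> list[str]:
    """Index-time n-gramming: emit every n-char substring so an infix
    match (e.g. *cdrl*) becomes an exact term lookup on the 'cdrl'
    gram rather than a wildcard scan of the whole term dictionary."""
    term = term.lower()
    return [term[i:i + n] for i in range(len(term) - n + 1)]

print(ngrams("cdrlreport"))
# ['cdrl', 'drlr', 'rlre', 'lrep', 'repo', 'epor', 'port']
```

In Solr this is what an NGramFilterFactory in the index-time analysis chain does; the query side then searches the grams directly.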
Hello everyone,
I am using solr 8.3.
After I included Synonym Graph Filter in my managed-schema file, I have
noticed that if the query string contains a multi-word synonym, it
considers that multi-word synonym as a single term and does not break it,
further suppressing the default search
(sow = split on whitespace) because we WANT multi-token synonyms retained as multiple tokens.
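The sow (split on whitespace) advice in this thread can be sketched as request parameters. The field names and the synonym pair here are hypothetical:

```python
# sow=false keeps the whole query intact so a SynonymGraphFilter can
# match multi-word synonyms as a graph; with sow=true the query is
# split on whitespace before analysis and multi-word rules never fire.
params = {
    "defType": "edismax",
    "qf": "title body",        # hypothetical fields
    "q": "heart attack",       # assume a rule: heart attack <=> myocardial infarction
    "sow": "false",
}
query_string = "&".join("%s=%s" % (k, v) for k, v in params.items())
print(query_string)
```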
How can I use the tokenizing differently?
From: Erik Hatcher
Sent: Friday, March 13, 2020 6:20 PM
To: solr-user@lucene.apache.org
Subject: Re: Query is taking a time in Solr 6.1.0
Looks like you have two, maybe
Looks like you have two, maybe three, wildcard/prefix clauses in there.
Consider tokenizing differently so you can optimize the queries to not need
wildcards - thats my first observation and suggestion.
Erik
> On Mar 13, 2020, at 05:56, vishal patel wrote:
>
> Some query
Some query is taking time in Solr 6.1.0.
2020-03-12 11:05:36.752 INFO (qtp1239731077-2513155) [c:documents s:shard1
r:core_node1 x:documents] o.a.s.c.S.Request [documents] webapp=/solr
path=/select
params={df=summary=false=id=4=0=true=doc_ref+asc,id+desc==s3.test.com:8983/solr/documents|s3r1
Many thanks for your solution! When I read the docs I didn't notice that frange
accepts a function. Now I'm able to calculate the distance and get only the
results below 0.5.
Thanks again :)
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
In the getDocListNC method in the SolrIndexSearcher class, the query is
converted to a BooleanQuery when fq queries are available.
There is a `query instanceof RankQuery` check below it, which will never be
true, so the rerank score is not correct.
Do you need to add a check on the query type?
ProcessedFilter pf
fq={!frange u=0.5 incu=false}dist(2, v1, v2 , 0.8, )
This will return only products where the distance is less than 0.5 (0.5 itself
excluded).
If you also have a requirement to display the distance, then you would use
field aliasing. As this truncates search results and default sorting is
done, this
But the fq can't use the aliased field called score. Anyway, is the performance
the same if I just sort by the dist() function and get only 3 results?
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
If you are looking to filter out all those products with greater than or
equal to 0.5, you could use
https://lucene.apache.org/solr/guide/8_4/other-parsers.html#function-range-query-parser
Function range query with the upper limit being 0.5 which could be added as
fq
Regards,
Munendra S N
I use score:dist(2, v1, v2 , 0.8, ) in the fl to calculate the distance.
It gives me the result I want, but I only want results whose score is below
0.5. How would I achieve that?
How would I achieve that?
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
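The frange answer above can be checked offline by computing the same power-2 (Euclidean) distance in Python. The vectors here are made up:

```python
import math

def frange_dist(docs, upper=0.5, incu=False):
    """Mimic fq={!frange u=0.5 incu=false}dist(2, ...): keep docs whose
    Euclidean (power-2) distance between two point fields is below the
    upper bound; incu controls whether the bound itself is included."""
    kept = []
    for doc_id, (v1, v2) in docs.items():
        d = math.dist(v1, v2)
        if d < upper or (incu and d == upper):
            kept.append(doc_id)
    return kept

docs = {
    "p1": ((0.0, 0.0), (0.6, 0.8)),  # distance 1.0 -> filtered out
    "p2": ((0.0, 0.0), (0.1, 0.2)),  # distance ~0.224 -> kept
}
print(frange_dist(docs))  # ['p2']
```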
/bf9db95f218f49bac8e7971eb953a9fd9d13a2f0#diff-269ae02e56283ced3ce781cce21b3147R563
sincerely
hongtai
From: "Staley, Phil R - DCF"
Reply-To: "d...@lucene.apache.org"
Date: Monday, March 2, 2020, 22:38
To: solr_user lucene_apache, "d...@lucene.apache.org"
Subject: Re: strange behavior of solr query parser
Hello, Community:
I have a question about interpreting a parsed query from Debug Query.
I used Solr 8.4.1 and LuceneQueryParser.
I was learning the behavior of ManagedSynonymFilter because I was curious
about how "ManagedSynonymGraphFilter" fails to generate a graph.
So, I tried to