[* TO *] doesn't seem materially different compared to has_field:1. If no one
knows why Lucene optimizes one but not the other, it's not clear whether it
even optimizes one at all.
>
> Queries using a boolean field will be even faster than the all-inclusive
> range query ...
This query is issued directly from a web browser to eliminate any downstream
components (we use Talend ESB to read / write data into our web application).
Any of these queries fail. The URL format is:
http://localhost:8983/solr/data/sql?stmt=select id from data limit 10
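One thing worth ruling out when testing the /sql handler from a browser: the spaces in the stmt value must be URL-encoded, and address bars are inconsistent about doing that. A minimal sketch of building the request URL with proper encoding (host, port, and the `data` collection mirror the URL above; nothing else is from the poster's setup):

```python
from urllib.parse import urlencode

# Build the /sql request URL with the SQL statement properly encoded.
# urlencode() turns the spaces in the stmt into '+', so the whole
# statement survives the HTTP request intact.
base = "http://localhost:8983/solr/data/sql"
params = {"stmt": "select id from data limit 10"}
url = base + "?" + urlencode(params)
print(url)
```

This is only about transport encoding; it will not fix a server-side Calcite error, but it eliminates one variable when testing by hand.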
Queries using a boolean field will be even faster than the all-inclusive
range query ... but they require work at index time to function properly.
If you can do it this way, that's definitely preferred. I was providing you
with something that would work even without the separate boolean field.
Was the original index a Solr Cloud index?
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Apr 18, 2019 at 7:48 AM David Barnett wrote:
I have a large Solr 7.3 collection, 400m+ documents.
I'm trying to use the Solr JDBC driver to query the data but I get a
java.io.IOException: Failed to execute sqlQuery 'select id from document limit
10' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id from document limit 10"
> "/select?q=*:*", and they do not exhibit the same
> problem. Even the similar query with only 1 shard does not have the problem.
>
> https://localhost:8983/solr/collection1/select?q=testing&shards=https://localhost:8983/solr/collection1&rows=0&json.facet={categories
> : {type : terms, field : content_type, limit : 100}}
Hi Jason,
The same problem still persists after restarting my Solr nodes. The only
time the problem didn't occur was when I disabled basic authentication.
I have tried with a few "/select?q=*:*" queries, and they do not exhibit the
same problem. Even the similar query with only 1 shard does not have the problem.
Hello,
Is there a good way to do Solr parent block joins with filter queries on
the children (i.e. filter queries that further constrain the child documents
matched by the child query)?
Solr has a convenient way of doing filter queries on the result set of
parent block joins. E.g.
q={!parent
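The message is cut off at the query, but for reference: recent Solr versions support a `filters` local parameter on the block join parsers that applies extra filter queries to the child side before the join is computed. A hedged sketch (the doc_type discriminator and all field names here are invented, not from the thread):

```text
q={!parent which='doc_type:parent' filters=$childfq v='color:red'}
childfq=size:large
childfq=price:[0 TO 100]
```

Each childfq is intersected with the child query before parents are matched, so the filters constrain children rather than parents.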
On 4/17/2019 1:21 PM, John Davis wrote:
If what you describe is the case for range query [* TO *], why would lucene
not optimize field:* the same way?
I don't know. Low-level Lucene operation is a mystery to me.
I have seen first-hand that the range query is MUCH faster than the
wildcard query.
On 4/17/2019 10:51 AM, John Davis wrote:
Can you clarify why field:[* TO *] is a lot more efficient than field:*?
It's a range query. For every document, Lucene just has to answer two
questions: is the value more than any possible value, and is the value
less than any possible value.
Can you clarify why field:[* TO *] is a lot more efficient than field:*?
This does indeed reduce the time, but doesn't quite do what I wanted. This
approach penalizes the docs based on the "coord" factor. In other words, for a
doc with score=5 on just one query (and nothing on the others), the resulting
score would now be 5/3, since only one clause matches.
Hi,
For your info, I have enabled basic authentication and SSL in all the 3
versions, and I'm not sure if the issue is more on the authentication side
instead of the JSON Facet query?
Regards,
Edwin
On Wed, 17 Apr 2019 at 06:54, Zheng Lin Edwin Yeo wrote:
Hi Edwin,
To clarify what you're running into:
- on 7.6, this query works all the time
- on 7.7, this query works all the time
- on 8.0, this query works the first time you run it, but subsequent
runs return a 401 error?
Is that correct? It might be helpful for others if you could share
your security.json.
Hi,
I am using the below JSON Facet to retrieve the counts for all the different
collections in one query.
https://localhost:8983/solr/collection1/select?q=testing&shards=https://localhost:8983/solr/collection1,https://localhost:8983/solr/collection2,https://localhost:8983/solr/collection3,https
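The archive has stripped the "&" separators from the URL above. Based on the facet body quoted elsewhere in this thread, the intended request was presumably of this shape (a reconstruction, not verified against the original mail):

```text
https://localhost:8983/solr/collection1/select
    ?q=testing
    &shards=https://localhost:8983/solr/collection1,
            https://localhost:8983/solr/collection2,
            https://localhost:8983/solr/collection3
    &rows=0
    &json.facet={categories : {type : terms, field : content_type, limit : 100}}
```

With rows=0 this returns only the facet counts, aggregated across the three collections listed in shards.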
Thanks Saurabh And Prince, Works perfectly.
On Sun, 14 Apr 2019 at 17:21, Prince Manohar wrote:
Basically, you need to boost some documents low.
For this, you can either use Solr's Boost Query (bq) or Boost Function (bf)
parameter.
For example, in your case: if you want the documents with countries A and B
to show last in the result, you can use:
bq=(country:A OR country:B)^-1
fq=country:(c1 OR c2 OR c3)&sort=if(termfreq(country,c2),0,1) desc
Correcting query.
On Sun 14 Apr, 2019, 3:36 PM Saurabh Sharma, wrote:
> I would suggest to sort on the basis of a condition. First find all the
> records and then sort on the basis of a condition where you will be putting
I have a field *country*. I need to do a search in which the results for a
certain country or countries are shown last in the search results, e.g.
country code *BD*. What query should I use to get the above result?
fq=country:???&q=*%3A*
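Combining the two replies in this thread, there are two ways to push a country's documents to the end of the results; both are sketches with placeholder field values, not tested against the poster's index:

```text
# 1. Negative boost query (from the bq suggestion):
bq=(country:A OR country:B)^-1

# 2. Sort on a function (from the sort suggestion): documents whose country
#    matches sort after everything else, with relevance as the tie-breaker.
q=*:*&sort=if(termfreq(country,'BD'),1,0) asc, score desc
```

The sort approach gives a hard guarantee of ordering; the bq approach merely lowers relevance, so a strong match could still rank above the cutoff.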
fq=field1:*&fq=field2:value will at least cause field1:* to be cached and
reused if it's a common pattern.
field1:* is slow in general for indexed fields because all terms for the
field need to be iterated (e.g. does term1 match doc1, does term2 match
doc1, etc).
One can optimize this by indexing a term in a different field to turn it
into a single term query (i.e. exists:field1).
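Sketching that suggestion concretely (the field names are assumed, not from the thread): maintain a flag field at index time, and the filter becomes a single-term lookup instead of a term iteration or an open-ended range.

```text
<!-- schema.xml: a boolean flag the indexing client sets whenever field1 has a value -->
<field name="has_field1" type="boolean" indexed="true" stored="false" docValues="true"/>

fq=has_field1:true     <!-- single term query: fastest -->
fq=field1:[* TO *]     <!-- open range: faster than field1:* -->
fq=field1:*            <!-- iterates every term of field1: slowest -->
```

The trade-off, as noted earlier in the thread, is that the flag requires work at index time and a reindex to add.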
Hi there,
We noticed a sizable performance degradation when we add certain fq filters
to the query, even though the result set does not change between the two
queries. I would've expected Solr to optimize internally by picking the
most constrained fq filter first, but maybe my understanding
does not differentiate TLOG and PULL replicas. My question is: when the TLOG
replica receives an external query, will it forward it to one of the PULL
replicas? Or will it send the shard request to PULL replicas but still
serve as the aggregator node for the query?
2. In the TLOG replicas, I am still seeing some
I did in fact use the "bf" parameter for individual edismax queries.
However, the reason I can't condense these edismax queries into a single
edismax query is that each of them uses different fields in "qf".
Basically what I'm trying to do is this: each of these edismax queries
Function queries in 'q' score EVERY DOCUMENT. Use 'bf' or 'boost' for the
function part, so it's only computed on docs matching the main query.
Erik
> On Apr 9, 2019, at 03:29, Sidharth Negi wrote:
Hi,
I'm working with the "edismax" and "function-query" parsers in Solr and have
difficulty understanding whether the query time taken by the
"function-query" makes sense. The query I'm trying to optimize looks as
follows:
q={!func sum($q1,$q2,$q3)} where q1, q2, q3 are edismax queries.
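Following the advice elsewhere in the thread to keep function scoring out of q, a hedged sketch of the restructured request (the poster's $q1..$q3 parameter names are kept; the qf fields and query terms are invented):

```text
q={!edismax qf='title body' v='user query'}
boost=sum(query($q1),query($q2),query($q3))
q1={!edismax qf=fieldA}terms for q1
q2={!edismax qf=fieldB}terms for q2
q3={!edismax qf=fieldC}terms for q3
```

Note the semantics differ: with {!func} in q, the sum IS the score and is computed for every document; with boost, the sum multiplies the main edismax score and is only computed for documents that match the main query.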
Thanks for letting us know!
> On Apr 4, 2019, at 8:40 AM, rodio wrote:
Hi all!
It's solved!
I saw that we were using a deprecated filter: using
WordDelimiterGraphFilterFactory instead of WordDelimiterFilterFactory
solves the problem.
https://lucene.apache.org/solr/guide/7_2/filter-descriptions.html#word-delimiter-filter
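For reference, a sketch of the analyzer swap (the whitespace tokenizer is an assumption; per the reference guide, graph filters on the index-time chain should be followed by FlattenGraphFilterFactory):

```xml
<fieldType name="text_wdgf" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory" preserveOriginal="1"/>
    <!-- graph token streams must be flattened before indexing -->
    <filter class="solr.FlattenGraphFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory" preserveOriginal="1"/>
  </analyzer>
</fieldType>
```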
Hi all,
This is my first question in this forum. I'm a newbie with Solr, so I would
be very glad if someone can resolve my doubt.
We are evaluating the new version, Solr 8.
The problem is that when we build a query using WordDelimiterFilterFactory
with preserveOriginal=1, the parsed query has
Actually, never mind. I found the part of the upgrade to 7 that was missed:
"The sow (split-on-whitespace) request param now defaults to false (true
in previous versions). This affects the edismax and standard/"lucene" query
parsers: if the sow param is not specified, query text wi
Hi all,
We have a multivalued field that has an integer at the beginning followed
by a space, and the index analyzer chain extracts that value to search on
testField:[
34 blah blah blah
27 blah blah blah
...
]
The query analyzer chain is just a keyword tokenizer factory since the
clients
OK.
The intent is to collapse on the field domain.
Here's a query that works fine, and the way I want, with the Collapsing
query parser:
/select?defType=dismax&fl=score,content,description,keywords,title&fq={!collapse field=domain nullPolicy=expand}&qf=content^0.05 description^0.03 keywords
Would you be willing to share your query-time analysis chain config, and
perhaps the "debug=true" (or "debug=query") output for successful queries
of a similar nature to the problematic ones? Also, re: "only times out on
extreme queries" -- what do you co
Hi,
I'm wondering if I've found a query of death or just a really expensive
query. It's killing my Solr with OOMs.
Collapsing query parser using:
fq={!collapse field=domain nullPolicy=expand}
Everything works fine using words & phrases. However, as soon as there are
numbers invo
Perhaps the Complex Phrase Query Parser might be what you are looking for.
https://lucene.apache.org/solr/guide/7_3/other-parsers.html
On 3/25/19, 1:41 AM, "krishan goyal" wrote:
Hi,
I want to execute a Solr query with boolean clauses using the eDismax query
parser, but the phrase match is executed on the complete query and not on
the individual queries which are created.
Is it possible to have both boolean conditions in the query and phrase matches?
Eg:
Query -
(gear
defined in my schema as:
I am using a query containing a Range clause, and I am using highlighting to
get the list of values that match the range query.
All examples below were run using the appropriate Solr Admin Server Query page.
The range query using Solr v5.1.0 produces CORRECT
On 3/22/2019 7:52 AM, Rajdeep Sahoo wrote:
My solr query sometime taking more than 60 sec to return the response .
Is there any way I can check why it is taking so much time .
Please let me know if there is any way to analyse this issue(high
response time ) .Thanks
With the information
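Shawn's reply is cut off here. As a generic starting point (a sketch, not specific to this thread): Solr can report where time is spent inside a request.

```text
/select?q=your query&debug=timing
```

The response then includes a "timing" section breaking QTime down per search component (query, facet, highlight, etc.), which helps distinguish a genuinely slow query from a slow component, GC pause, or disk cache miss.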
Hi all,
My Solr query sometimes takes more than 60 sec to return the response.
Is there any way I can check why it is taking so much time?
Please let me know if there is any way to analyse this issue (high
response time). Thanks
I am trying to work off of the https://wiki.apache.org/solr/SolJSON tutorial. I
have put my URL for Solr in the code, copied from a Solr admin query result to
make sure the query should return something.
I try typing "title:Asian" into the text box, but when the button is hit, the
textbox j
try other query syntax, e.g. the bbox query parser, to see if the
problem goes away? I doubt this is it, but you seem to point to the syntax
being related.
~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley
On Mon, Mar 18, 2019 at 12:24 AM Mitchell Bösecke wrote:
Hi,
I've never used the LTR module, but I suspect I might know what the error
is. I think that the "query" function query has parsing limitations on
what you pass to it, at least it used to. Try to put the embedded query
onto another parameter and then refer to it with a dollar-sign reference.
Hello Experts,
My goal is to understand the time complexity of the boosting query as part
of a search in Solr:
sort=score+desc
defType=edismax
boost=
I followed the stack trace for the search call, and I believe the time
complexity is as follows:
- Main query time + multiple filters
Response contains score from function query:
"fl":"score,internet_of_things:query({!edismax v='\"internet of
things\"'}),instant_of_things:query({!edismax v='\"instant of things\"'})",
"rows":"1",
"wt":"json"
Hi everyone,
I'm trying to index geodetic polygons and then query them out using an
arbitrary rectangle. When using the Geo3D spatial context factory, the data
indexes just fine but using a range query (as per the solr documentation)
does not seem to filter the results appropriately (I get all
a feature in the feature store, I received a
"failed to parse feature query" error, and thus I am using the below
geofilt query for distance.
{
"name":"dist",
"class":"org.apache.solr.ltr.feature.SolrFeature",
"params":{"q
Hello,
Due to reading 'This filter must be included on index-time analyzer..' in the
documentation, I never considered adding it to a query-time analyser.
However, we had problems with a set of three two-word synonyms never yielding
the same number of results with SynonymGraph. When switching
Hi all,
Starting work on a new project, I've found a boost query configured
in a Solr request handler.
I would like to show you this boost query because I'm interested in
suggestions or advice on the benefits/drawbacks of this solution,
which is new to me.
So let's say that every
Dear reader, I've found a different solution for my problem
and don't need a depth-dependent score anymore.
Kind regards, Jochen
On 19.02.19 at 14:42, Jochen Barth wrote:
Dear reader,
I have a hierarchical graph "like a book":
{ id:solr_doc1; title:book }
{ id:solr_doc2; title:chapter;
> docValues (not familiar with this
> code and just assuming), so in any case it will use "shared" OS caches. Those
> caches will be affected when loading stored fields to do a partial update. It
> will also take some memory when indexing documents. That is why storing and
> doing partial up
to achieve decompression.
I believe that if you have both stored data and docValues for a field,
Solr will use the stored data for search results. I am not positive that
this is the case, but I think it's what happens.
2) What's the impact of large stored fields (.fdt) on query-time
performance? Do query
query performance. But that might be insignificant, and only a test can
tell for sure. Unless you have a small index and enough RAM; then I can
also tell you for sure.
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training
Hi Emir,
I had this question in mind: if I store my only returnable field as
docValues in RAM, will my stored documents be referenced while constructing
the response after the query? Ideally, as the field asked for in fl is
already in RAM, the documents on disk should not be consulted.
Hi Saurabh,
Welcome to the channel!
Storing fields should not affect query performance directly if you use lazy
field loading, which is the default setting. And it should not affect it at
all if you have enough RAM compared to index size. Otherwise OS caches might
be affected by stored fields
stored? Let's say I have an "id"
field and I do have docValues=true for the field. Will Solr use stored
fields in this case? Will it load the whole document into RAM?
2) What's the impact of large stored fields (.fdt) on query-time
performance? Do query times even depend on the stored fields, or
Ah... I think there are two issues likely at play here. One is LUCENE-8531
<https://issues.apache.org/jira/browse/LUCENE-8531>, which reverts a bug
related to SpanNearQuery semantics, causing possible query paths to be
enumerated up front. Setting ps=0 (although perhaps not appropriate fo
FWIW: we have also seen serious Query of Death issues after our upgrade to
Solr 7.6. Are there any open issues we can watch? Is Markus' findings
around `pf` our best guess? We've seen these issues even with ps=0. We also
use the WDF.
On Fri, Feb 22, 2019 at 8:58 AM Markus Jelsma
wrote:
Hello Michael,
Sorry it took so long to get back to this, too many things to do.
Anyway, yes, we have WDF on our query-time analysers. I uploaded two log files,
both the same query of death with and without synonym filter enabled.
https://mail.openindex.io/export/solr-8983-console.log 23 MB
Can you share your config file and use case ?
Its difficult to guess how you have configured the component.
Regards,
Rohan Kasat
On Wed, Feb 20, 2019 at 12:21 AM Prasad_sarada wrote:
Hi,
I want to implement a Solr auto-correct feature. I have tried the spell
check component, but I'm not getting satisfying results: it shows the top
suggestion but does not give the results for the corrected word.
E.g., if I search for "procesor" then I should get the results for
"processor", because the second
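The spellcheck component will not re-run the corrected query by itself, but it can be asked to build and verify whole corrected queries (collations), which the client then re-issues. A sketch of the relevant parameters (the handler and spellchecker configuration are assumed to already exist):

```text
/select?q=procesor
    &spellcheck=true
    &spellcheck.collate=true
    &spellcheck.maxCollationTries=5
    &spellcheck.collateExtendedResults=true
```

With maxCollationTries > 0, Solr tests each collation against the index and only returns ones that actually produce hits; the application then re-queries with the returned collation ("processor") to get auto-correct behaviour.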
Dear reader,
I have a hierarchical graph "like a book":
{ id:solr_doc1; title:book }
{ id:solr_doc2; title:chapter; parent_ids: solr_doc1 }
{ id:solr_doc3; title:subchapter; parent_ids: solr_doc2 }
etc.
Now to match all docs with "title" and "chapter" I could do:
+_query_:"{!graph
Oh yeah, my pet peeve. This is the cure.
(*:* AND -department_name:[* TO *]) OR {!tag=department_name terms
f=department_name v='Kirurgisk avdeling'}
no comments.
On Wed, Feb 13, 2019 at 1:49 PM Andreas Lønes wrote:
I am experiencing some weird behaviour when using the terms query parser where
I am filtering on documents that have no value for a given field (null) and
strings with whitespace.
I can filter on documents not having a value OR having some specific values
for the field, as long as the value does
Hi Markus,
As of 7.6, LUCENE-8531 <https://issues.apache.org/jira/browse/LUCENE-8531>
reverted a graph/Spans-based phrase query implementation (introduced in 6.5
-- LUCENE-7699 <https://issues.apache.org/jira/browse/LUCENE-7699>) to an
implementation that builds a separate phrase query
Hello (apologies for cross-posting),
While working on SOLR-12743, using 7.6 on two nodes and 7.2.1 on the remaining
four, we stumbled upon a situation where the 7.6 nodes quickly succumb when a
'Query-of-Death' is issued, 7.2.1 up to 7.5 are all unaffected (tested and
confirmed).
Following
Hi Julia,
Keep in mind that in order to facet on child document fields you'll need to
use the block join facet component:
https://lucene.apache.org/solr/guide/7_4/blockjoin-faceting.html
For the query itself you probably need to specify each required attribute
value, but looks like you're
What's your current query? It's probably a question of building a boolean
query by combining Solr queries.
Note, this data model might be a little bit overwhelming. So, if the number
of distinct attributename values is around a thousand, just handle it via a
dynamic field without nesting docs:
brass
as article 4711, because in this
article the two words appear in the description. But the result of my query is
always only article 4711. I know that I could also write the attribute in one
field. However, I want to have a facet over the attribute name.
I hope you can help me with this problem
On 1/31/2019 12:11 PM, Ruchir Choudhry wrote:
Wanted to start working on Solr bugs; will appreciate it if you or someone
can allocate me some minor bugs.
It doesn't work like that. Issues are not handed out; it's a strictly
volunteer system.
You'll need to find the issues you want to work
SOLR_OPTS="$SOLR_OPTS -Dlucene.cms.override_spins=false -Dlucene.cms.override_core_count=4"
> On the cluster we created a collection with 5 shards, each with 2
> replicas, for a total of 10 replicas.
> The full index size is less than 2 GB a
Hi Erick,
first of all thanks a lot for your response!
I suppose that in my case is happening exactly what you describe as "GC Hell"
because I see continuous GC cycles and solr is not showing OOM errors.
I absolutely agree with you that this is a bad query but I was wondering if
th
Under normal usage the used heap space is between 200MB and 500MB.
Unfortunately, if we try to perform a query like this:
.../select?q=*:*&fq=ActionType:MAILOPENING&facet=true&rows=0&facet.pivot=FIELD_ObjectId,FIELD_MailId_ObjectId&facet.pivot.mincount=0&f.FIELD_ObjectId.facet.limit=-1&f.FIELD_MailId_ObjectId.facet.limit=-1
where FIELD_ObjectId and F
I have opened the Solr admin UI in IE11 and run a query (*:*) against the
techproducts core. If I re-execute exactly the same query from the UI by
re-pressing the "Execute Query" button, the results are exactly the same
(including the QTime value). Running IE11 in debug mode (F12) wi
Hi Shruti,
Solr clusters should NOT span regions, so when a query hits a particular
cluster in a region that query should be handled by nodes in that region
and not forwarded to another. My recommendation is to check out
cross-datacenter replication and route requests to the correct region