We have the following setup: Solr 7.7.2 with one TLOG leader and one TLOG
replica on a single shard. We have about 34.5 million documents with an
approximate index size of 600 GB. I have noticed degraded query
performance whenever the replica is (guessing here) trying to sync or
perform a commit. Is there anything we can do to improve our query
performance?
With the information available, the only suggestion I have currently is
to replace "q=*" with "q=*:*" -- assuming that the intent is to match
all documents with the main query. According to what you attached
(which I am very surprised to see -- attachments rarely make it through
the mailing list)
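As a sketch of that change (host, collection, and handler names are illustrative, not from the original post), the two match-all forms differ only in the q parameter, but Solr parses them very differently: q=* is a wildcard query that must enumerate terms, while q=*:* is a MatchAllDocsQuery.

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical host and collection, for illustration only.
base = "http://localhost:8983/solr/mycollection/select"

# q=* is parsed as a wildcard query: every term in the default field
# must be enumerated, which gets slow on large indexes.
slow = urlencode({"q": "*", "rows": 0})

# q=*:* is parsed as MatchAllDocsQuery, which is essentially free.
fast = urlencode({"q": "*:*", "rows": 0})

print(base + "?" + slow)
print(base + "?" + fast)
```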
On 7/8/2019 3:08 AM, Midas A wrote:
I have enabled docValues on the facet field, but the query is still taking time.
How can I improve the query time?
docValues="true" multiValued="true" termVectors="true" />
*Query: *
There's very little information here -- only a single field definition
and a query.
Hi
How can I know whether docValues are getting used or not?
Please help me here.
On Mon, Jul 8, 2019 at 2:38 PM Midas A wrote:
> Hi ,
>
> I have enabled docValues on the facet field, but the query is still taking time.
>
> How can I improve the query time?
> docValues="true" multiValued="true"
Hi ,
I have enabled docValues on the facet field, but the query is still taking time.
How can I improve the query time?
*Query: *
http://X.X.X.X:
FYI
https://issues.apache.org/jira/browse/SOLR-11437
https://issues.apache.org/jira/browse/SOLR-12488
On Thu, Apr 18, 2019 at 7:24 AM Shawn Heisey wrote:
> On 4/17/2019 11:49 PM, John Davis wrote:
> > I did a few tests with our instance solr-7.4.0 and field:* vs field:[* TO
> > *] doesn't seem
On 4/17/2019 11:49 PM, John Davis wrote:
I did a few tests with our instance solr-7.4.0 and field:* vs field:[* TO
*] doesn't seem materially different compared to has_field:1. If no one
knows why Lucene optimizes one but not another, it's not clear whether it
even optimizes one to be sure.
On Wed, Apr 17, 2019 at 4:27 PM Shawn Heisey wrote:
On 4/17/2019 1:21 PM, John Davis wrote:
If what you describe is the case for range query [* TO *], why would lucene
not optimize field:* similar way?
I don't know. Low-level Lucene operation is a mystery to me.
I have seen first-hand that the range query is MUCH faster than the
wildcard query.
If what you describe is the case for range query [* TO *], why would lucene
not optimize field:* similar way?
On Wed, Apr 17, 2019 at 10:36 AM Shawn Heisey wrote:
> On 4/17/2019 10:51 AM, John Davis wrote:
> > Can you clarify why field:[* TO *] is a lot more efficient than field:*
>
> It's a
On 4/17/2019 10:51 AM, John Davis wrote:
Can you clarify why field:[* TO *] is a lot more efficient than field:*
It's a range query. For every document, Lucene just has to answer two
questions: is the value more than any possible value, and is the value
less than any possible value.
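Side by side, the three forms discussed in this thread can be constructed like this (field names are illustrative, not from anyone's schema):

```python
from urllib.parse import urlencode, parse_qs

# Three ways to ask "does document X have a value in myfield?"
# Field names here are invented for illustration.
range_q    = urlencode({"q": "myfield:[* TO *]"})  # range query: two cheap comparisons per doc
wildcard_q = urlencode({"q": "myfield:*"})         # wildcard: enumerates every term in the field
flag_q     = urlencode({"q": "has_myfield:1"})     # explicit flag field populated at index time

for q in (range_q, wildcard_q, flag_q):
    print(q)
```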
Can you clarify why field:[* TO *] is a lot more efficient than field:*
On Sun, Apr 14, 2019 at 12:14 PM Shawn Heisey wrote:
> On 4/13/2019 12:58 PM, John Davis wrote:
> > We noticed a sizable performance degradation when we add certain fq
> filters
> > to the query even though the result set
On 4/13/2019 12:58 PM, John Davis wrote:
We noticed a sizable performance degradation when we add certain fq filters
to the query, even though the result set does not change between the two
queries. I would've expected Solr to optimize internally by picking the
most constrained fq filter first.
Patches welcome, but how would that be done? There's no fixed schema at the
Lucene level. It's even possible that no two documents in the index have any
fields in common. Given the structure of an inverted index, answering the
question "for document X, does it have any value?" is rather expensive.
> field1:* is slow in general for indexed fields because all terms for the
> field need to be iterated (e.g. does term1 match doc1, does term2 match
> doc1, etc)
This feels like something that could be optimized internally by tracking
existence of the field in a doc instead of making users index yet another field.
Also note that field1:* does not necessarily match all documents. A document
without that field will not match. So it really can't be optimized the way you
might expect since, as Yonik says, all the terms have to be enumerated.
Best,
Erick
> On Apr 13, 2019, at 12:30 PM, Yonik Seeley wrote:
More constrained but matching the same set of documents just guarantees
that there is more information to evaluate per document matched.
For your specific case, you can optimize fq = 'field1:* AND field2:value'
into two separate filters, fq=field1:* and fq=field2:value.
This will at least cause field1:* to be cached and reused.
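A sketch of that split, for a client that builds the URL by hand (field names are the placeholder names from the thread): with two separate fq parameters, each filter gets its own filterCache entry.

```python
from urllib.parse import urlencode, parse_qs

# One combined filter: cached as a single entry, so field1:* is
# recomputed whenever the field2 clause changes.
combined = urlencode([("q", "*:*"), ("fq", "field1:* AND field2:value")])

# Two separate filters: field1:* gets its own filterCache entry and is
# reused across queries regardless of the field2 clause.
split = urlencode([("q", "*:*"), ("fq", "field1:*"), ("fq", "field2:value")])

print(split)
```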
Hi there,
We noticed a sizable performance degradation when we add certain fq filters
to the query, even though the result set does not change between the two
queries. I would've expected Solr to optimize internally by picking the
most constrained fq filter first, but maybe my understanding is wrong.
Hi all,
We would like to perform a benchmark of
https://issues.apache.org/jira/browse/SOLR-11831
The patch improves the performance of grouped queries asking for only one
result per group (i.e. group.limit=1).
I remember seeing a page showing a benchmark of the query performance on
Wikipedia
Thanks everyone for taking time to respond to my email. I think you are
correct in that the query results might be coming from main memory, as I
only had around 7k queries.
However, it is still not clear to me, given that everything was being
served from main memory, why I am not able to
On 4/28/2017 12:43 PM, Toke Eskildsen wrote:
> Shawn Heisey wrote:
>> Adding more shards as Toke suggested *might* help,[...]
> I seem to have phrased my suggestion poorly. What I meant to suggest
> was a switch to a single shard (with 4 replicas) setup, instead of the
> current 2 shards (with 2 replicas).
Beautiful, thank you.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use the JMeter plugins. They’ve been reorganized recently, so
Davis, Daniel (NIH/NLM) [C]
> <daniel.da...@nih.gov> wrote:
>
> Walter,
>
> If you can share a pointer to that JMeter add-on, I'd love it.
>
> -Original Message-
> From: Walter Underwood [mailto:wun...@wunderwood.org]
> Sent: Friday, April 28, 2017 2:53 PM
> To: sol
Walter,
If you can share a pointer to that JMeter add-on, I'd love it.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 2:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use production logs
I use production logs to get a mix of common and long-tail queries. It is very
hard to get a realistic distribution with synthetic queries.
A benchmark run goes like this, with a big shell script driving it.
1. Reload the collection to clear caches.
2. Split the log into a cache-warming set and a measurement set.
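The reporting half of such a run might be summarized like this. A minimal sketch: the latency values are placeholders for numbers you would parse from the benchmark client's log of the measurement set.

```python
import statistics

# Placeholder latencies (ms), one per measured query; in a real run these
# come from parsing the benchmark client's output.
latencies_ms = [12.0, 15.5, 9.8, 40.2, 11.1, 300.0, 14.9, 13.3]

def percentile(values, pct):
    """Nearest-rank percentile: value at ceil(pct/100 * n) in sorted order."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[int(rank) - 1]

print("mean:  ", statistics.fmean(latencies_ms))
print("median:", statistics.median(latencies_ms))
print("p95:   ", percentile(latencies_ms, 95))
```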
Shawn Heisey wrote:
> Adding more shards as Toke suggested *might* help,[...]
I seem to have phrased my suggestion poorly. What I meant to suggest was a
switch to a single shard (with 4 replicas) setup, instead of the current 2
shards (with 2 replicas).
- Toke
Well, the best way to get no cache hits is to set the cache sizes to
zero ;). That provides worst-case scenarios and tells you exactly how
much you're relying on caches. I'm not talking about the lower-level Lucene
caches here.
One thing I've done is use the TermsComponent to generate a list of
terms
(aside: Using Gatling or JMeter?)
Question: How can you easily randomize something in the query so you get no
cache hits? I think there are several levels of caching.
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
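One concrete answer to the randomization question, in line with Walter's production-log advice earlier in the thread: rather than mutating a single synthetic query, replay a large pool of distinct real queries in random order, so the queryResultCache never sees a repeat. A minimal sketch (the query strings are placeholders):

```python
import random

# Dedupe a list of logged queries and shuffle them deterministically, so
# a load-test run never issues the same query twice.
def unique_query_stream(log_queries, seed=0):
    pool = list(dict.fromkeys(log_queries))  # dedupe, preserve order
    random.Random(seed).shuffle(pool)
    return pool

queries = unique_query_stream(["q=a", "q=b", "q=a", "q=c", "q=b"])
print(queries)
```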
re: the q vs. fq question. My claim (not verified) is that the fastest
of all would be q=*:* with the clause in fq={!cache=false}. That would bypass
the scoring that putting it in the "q" clause would entail, as well as bypass
the filter cache.
But I have to agree with Walter, this is very suspicious IMO. Here's
what
More “unrealistic” than “amazing”. I bet the set of test queries is smaller
than the query result cache size.
Results from cache are about 2 ms, but network communication to the shards
would add enough overhead to reach 40 ms.
wunder
Walter Underwood
wun...@wunderwood.org
On 4/27/2017 5:20 PM, Suresh Pendap wrote:
> Max throughput that I get: 12000 to 12500 reqs/sec
> 95 percentile query latency: 30 to 40 msec
These numbers are *amazing* ... far better than I would have expected to
see on a 27GB index, even in a situation where it fits entirely into
available memory.
On Thu, 2017-04-27 at 23:20 +, Suresh Pendap wrote:
> Number of Solr Nodes: 4
> Number of shards: 2
> replication-factor: 2
> Index size: 55 GB
> Shard/Core size: 27.7 GB
> maxConnsPerHost: 1000
The overhead of sharding is not trivial. Your overall index size is
fairly small, relative to
Hi,
I am trying to perform Solr query performance benchmarking, to
measure the maximum throughput and latency that I can get from a given Solr
cluster.
Following are my configurations
Number of Solr Nodes: 4
Number of shards: 2
replication-factor: 2
Index size: 55 GB
Shard/Core size: 27.7 GB
Thanks a lot Shawn.
Regards,
Prateek Jain
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: 23 December 2016 01:36 PM
To: solr-user@lucene.apache.org
Subject: Re: DataImportHandler | Query | performance
On 12/23/2016 5:15 AM, Prateek Jain J wrote:
> We n
On 12/23/2016 5:15 AM, Prateek Jain J wrote:
> We need some advice/views on the way we push our documents in SOLR (4.8.1).
> So, here are the requirements:
>
> 1. Document could be from 5 to 100 KB in size.
>
> 2. 10-50 users actively querying solr with different sort of data.
>
> 3.
Hi All,
We need some advice/views on the way we push our documents in SOLR (4.8.1). So,
here are the requirements:
1. Documents could be from 5 to 100 KB in size.
2. 10-50 users actively querying Solr with different sorts of data.
3. Data will be available frequently to be indexed.
On Mon, 2016-11-14 at 11:36 +0530, Midas A wrote:
> How to improve facet query performance
1) Don't shard unless you really need to. Replicas are fine.
2) If the problem is the first facet call, then enable DocValues and
re-index.
3) Keep facet.limit <= 100, especially if you shard.
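Following those three points, a facet request might look like this (collection and field names are illustrative; the field is assumed to have docValues="true" in the schema after a reindex):

```python
from urllib.parse import urlencode, parse_qs

# Facet request honoring the advice above: modest facet.limit, and
# rows=0 since only the facet counts are wanted.
params = urlencode({
    "q": "*:*",
    "rows": 0,
    "facet": "true",
    "facet.field": "category",  # assumed to have docValues="true"
    "facet.limit": 100,
})
print(params)
```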
How to improve facet query performance
Good tip Rick,
I'll dig in and make sure everything is set up correctly.
Thanks!
-D
Dave Seltzer
Chief Systems Architect
TVEyes
(203) 254-3600 x222
On Wed, Nov 2, 2016 at 9:05 PM, Rick Leir wrote:
> Here is a wild guess. Whenever I see a 5 second
Here is a wild guess. Whenever I see a 5 second delay in networking, I
think DNS timeouts. YMMV, good luck.
cheers -- Rick
On 2016-11-01 04:18 PM, Dave Seltzer wrote:
Hello!
I'm trying to utilize Solr Cloud to help with a hash search problem. The
record set has only 4,300 documents.
When I
Hello!
I'm trying to utilize Solr Cloud to help with a hash search problem. The
record set has only 4,300 documents.
When I run my search against a single core I get results on the order of
10ms. When I run the same search against Solr Cloud results take about
5,000 ms.
Is there something about
Hi
I have a few filter queries that use multiple cores join to filter
documents. After I inverted those joins they became slower. So, it looks
something like that:
I used to query the "product" core with a query that contains fq={!join to=tags
from=preferred_tags fromIndex=user}(country:US AND {!terms
f=permissions v=A,B})
>
> Last week, I tried to re-index the whole collection from scratch, using
source data. Query performance on the resulting re-index proved to be abysmal;
I could get barely 10% of my previous query throughput, and even that was at
latencies that were orders of magnitude higher than what I had in production.
the whole collection from scratch, using source
data. Query performance on the resulting re-index proved to be abysmal, I could
get barely 10% of my previous query throughput, and even that was at latencies
that were orders of magnitude higher than what I had in production.
I hooked up some CPU profiling.
In that scenario it would be 30L-50L per shard. I want to search documents from
>> all shards, but it will slow down and take too long.
>>
>> I know in the case of SolrCloud, it will query all shard nodes and then return
>> the result. Is there any way to search documents in all shards with the best
Is there any way to search documents in all shards with the best
performance (qps)?
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4287763.html
Sent from the Solr - User mailing list archive at Nabble.com.
sharding, since you
> mentioned only 10K records in one shard. What's your index/document size?
>
> Thanks,
> Susheel
>
> On Mon, Jul 18, 2016 at 2:08 AM, kasimjinwala <jinwala.ka...@gmail.com>
> wrote:
>
>> currently I am using solrCloud 5.0 and I am facing query performance
5.0 and I am facing query performance issue
> while using 3 implicit shards, each shard contain around 10K records.
> when I am specifying shards parameter(*shards=shard1*) in query it gives
> 30K-35K qps. but while removing shards parameter from query it give
> *1000-1500qps*.
Currently I am using SolrCloud 5.0 and I am facing a query performance issue
while using 3 implicit shards; each shard contains around 10K records.
When I specify the shards parameter (*shards=shard1*) in the query it gives
30K-35K qps, but when I remove the shards parameter from the query it gives
*1000-1500 qps*.
>> > taking. In order to measure query speed I am using solrmeter with 50k
>> > unique filter queries. And then checking if any of the queries are slower
>> > than 50ms. Is this a good approach to measure query performance?
>> >
>> > Are there any guidelines
jspothar...@gmail.com>
> wrote:
> > Hi,
> > I am trying to measure how well our queries are performing, i.e. how long they
> > are taking. In order to measure query speed I am using SolrMeter with 50k
> > unique filter queries, and then checking if any of the queries are slower
>
using SolrMeter with 50k
> unique filter queries, and then checking if any of the queries are slower
> than 50ms. Is this a good approach to measure query performance?
>
> Are there any guidelines on how to measure if a given instance can handle a
> given number of qps (queries per sec)? For example,
Hi,
I am trying to measure how well our queries are performing, i.e. how long they
are taking. In order to measure query speed I am using SolrMeter with 50k
unique filter queries, and then checking if any of the queries are slower
than 50ms. Is this a good approach to measure query performance?
On 4/18/2016 5:06 AM, Mugeesh Husain wrote:
> 1.) solr normal query (q=*:*) vs facet query (facet.query="abc")?
> 2.) solr normal query (q=*:*) vs facet
> search (facet=true&facet.field=column_name)?
> 3.) solr filter query (q=column:some value) vs facet query (facet.query="abc")?
> 4.) solr normal query (q=*:*)
4.) solr normal query (q=*:*) vs filter query (q=column:some value)?
Also please provide some good tutorials for the above.
Thanks
some environments."
Thanks & Regards,
Bhaumik Joshi
From: billnb...@gmail.com <billnb...@gmail.com>
Sent: Monday, April 11, 2016 7:07 AM
To: solr-user@lucene.apache.org
Subject: Re: Soft commit does not affecting query performance
Why do you think it would?
Bill Bell
Sent from mobile
> On Apr 11, 2016, at 7:48 AM, Bhaumik Joshi <bjo...@asite.com> wrote:
>
> Hi All,
>
> We are doing query performance test with different soft commit intervals. In
> the test with 1sec of soft commit interval
Hi All,
We are doing a query performance test with different soft commit intervals. In
the tests with a 1 sec soft commit interval and a 1 min soft commit interval, we
didn't notice any improvement in query timings.
We did the test with SolrMeter (a standalone Java tool for stress tests with Solr).
for two properties: DateDep and
Duration, since the definition of docValues=true for the integer type did not
work with faceted search. There was a time I accidentally used a filter query
with the string type property, and I found the query performance degraded
quite a lot.
Is it generally true that fq works better with the integer type?
of the filterCache, can I limit the size of the three
caches so that the RAM usage will be under control?
Thanks
with the string type property and I found the query performance degraded
quite a lot.
Is it generally true that fq works better with integer type ?
If this is the case, I could create two integer type properties for two
other fq to check if I can boost the performance.
Thanks
=512
autowarmCount="32"/>
<documentCache class="solr.LRUCache"
               size="1"
               initialSize="256"
               autowarmCount="0"/>
Thanks
after indexing the data (to take advantage of cache warming).
Thanks
cache etc., can I turn off the three caches and send a lot of queries to Solr
before I start to test the performance of each individual query?
Thanks
Any recommended tool to test the query performance would be of great help.
Thanks
SolrMeter, mate:
http://code.google.com/p/solrmeter/
Take a look, it will help you a lot!
Cheers
2015-07-21 16:49 GMT+01:00 Nagasharath sharathrayap...@gmail.com:
Any recommended tool to test the query performance would be of great help.
Thanks
--
--
Benedetti Alessandro
GC is operating the way I think it should but I am lacking memory. I am
just surprised because indexing is performing fine (documents going in) but
deletions are really bad (documents coming out).
Is it possible these deletes are hitting many segments, each of which I
assume must be re-built?
I have a collection with 1 billion documents and I want to delete 500 of
them. The collection has a dozen shards and a couple replicas. Using Solr
4.4.
Sent the delete query via HTTP:
http://hostname:8983/solr/my_collection/update?stream.body=<delete><query>source:foo</query></delete>
Took a couple
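The stream.body value is XML (the angle brackets were stripped by the list software); rebuilt and URL-encoded, the request could be constructed like this, with hostname, collection, and query taken from the post above:

```python
from urllib.parse import quote

# Delete-by-query body as XML, URL-encoded for the stream.body parameter.
body = "<delete><query>source:foo</query></delete>"
url = ("http://hostname:8983/solr/my_collection/update?stream.body="
       + quote(body, safe=""))
print(url)
```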
On 5/20/2015 5:41 PM, Ryan Cutter wrote:
I have a collection with 1 billion documents and I want to delete 500 of
them. The collection has a dozen shards and a couple replicas. Using Solr
4.4.
Sent the delete query via HTTP:
http://hostname:8983/solr/my_collection/update?stream.body=
On 5/20/2015 5:57 PM, Ryan Cutter wrote:
GC is operating the way I think it should but I am lacking memory. I am
just surprised because indexing is performing fine (documents going in) but
deletions are really bad (documents coming out).
Is it possible these deletes are hitting many
Shawn, thank you very much for that explanation. It helps a lot.
Cheers, Ryan
On Wed, May 20, 2015 at 5:07 PM, Shawn Heisey apa...@elyograg.org wrote:
On 5/20/2015 5:57 PM, Ryan Cutter wrote:
GC is operating the way I think it should but I am lacking memory. I am
just surprised because
We currently have a SolrCloud cluster that contains two collections, which we
toggle between for querying and indexing. When bulk indexing to our "offline"
collection, our query performance from the "online" collection suffers
somewhat. When segment merges occur, it gets downright abysmal.
Hi,
We recently saw a behavior which I wanted to confirm. We are using SolrJ to
query Solr. From the code, we use HttpSolrServer to send the query and
return the response.
1. When a sample query is sent using SolrJ, we get the QTime as 4 seconds.
The same query when we run against Solr in the
Hi Parsi,
Are you sure you are using the same exact parameters? I would include
echoParams=all and compare parameters. Only the wt parameter would be different:
wt=javabin for SolrJ.
On Thursday, November 28, 2013 11:42 AM, Prasi S prasi1...@gmail.com wrote:
Hi,
We recently saw a behavior which
On 11/28/2013 3:01 AM, Ahmet Arslan wrote:
Are you sure you are using the same exact parameters? I would include
echoParams=all and compare parameters. Only the wt parameter would be different:
wt=javabin for SolrJ.
You can also look at the Solr log, which if you are logging at the
normal level
Ah, got it now - thanks for the explanation.
On Sat, Sep 28, 2013 at 3:33 AM, Upayavira u...@odoko.co.uk wrote:
The thing here is to understand how a join works.
Effectively, it does the inner query first, which results in a list of
terms. It then effectively does a multi-term query with
The thing here is to understand how a join works.
Effectively, it does the inner query first, which results in a list of
terms. It then effectively does a multi-term query with those values.
q=size:large {!join fromIndex=other from=someid
to=someotherid}type:shirt
Imagine the inner join
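The two phases described above can be mimicked with a toy in-memory model (invented data, not Solr internals) to see the shape of the work: the inner query runs first, its join-key values are collected, and the main core is then filtered by that term set.

```python
# Phase 1 data: the "from" core queried by the inner query (type:shirt).
other_core = [
    {"someid": 1, "type": "shirt"},
    {"someid": 2, "type": "shoe"},
    {"someid": 3, "type": "shirt"},
]
# Phase 2 data: the main core, filtered by the collected join terms.
main_core = [
    {"someotherid": 1, "size": "large"},
    {"someotherid": 2, "size": "large"},
    {"someotherid": 3, "size": "small"},
]

# Phase 1: inner query -> set of join terms from the "from" field.
terms = {d["someid"] for d in other_core if d["type"] == "shirt"}

# Phase 2: multi-term filter on the "to" field, ANDed with size:large.
hits = [d for d in main_core
        if d["someotherid"] in terms and d["size"] == "large"]
print(hits)
```

A large inner result set means a large term set in phase 2, which is why the hjoin/bjoin variants mentioned later in the thread exist.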
Hi Joel,
I tried this patch and it is quite a bit faster. Using the same query on a
larger index (500K docs), the 'join' QTime was 1500 msec, and the 'hjoin'
QTime was 100 msec! This was for true for large and small result sets.
A few notes: the patch didn't compile with 4.3 because of the
It looks like you are using int join keys so you may want to check out
SOLR-4787, specifically the hjoin and bjoin.
These perform well when you have a large number of results from the
fromIndex. If you have a small number of results in the fromIndex the
standard join will be faster.
On Wed, Sep
I'm doing a cross-core join query and the join query is 30X slower than
each of the 2 individual queries. Here are the queries:
Main query: http://localhost:8983/solr/mainindex/select?q=title:java
QTime: 5 msec
hit count: 1000
Sub query: http://localhost:8983/solr/subindex/select?q=+fld1:[0.1 TO
Thanks for looking into this. Appreciate your help.
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, August 13, 2013 8:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 update and query performance question
1> That's hard-coded at present.
if it
gives us desired performance.
Thanks for looking into this. Appreciate your help.
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, August 13, 2013 8:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 update and query performance
We do a hard commit after loading 1000 documents. Every hard
commit refreshes the searcher on all nodes. Are all caches also refreshed
when a hard commit happens? We're planning to change to soft commits and do
an auto hard commit every 10-15 minutes.
3. We're not seeing improved query performance
compared to Solr3. Queries
which took 3-5 seconds in Solr3 (300 mil docs) are taking 20 seconds with
Solr4. We think this could be due to frequent hard commits and searcher
refreshes. Do you think that when we change to soft commits and increase the
batch size, we will see better query performance?
What is the difference between:
q=*:*&rows=row_count&sort=id asc
and
q={X TO *}&rows=row_count&sort=id asc
Does the first one try to get all the documents but cut the result, or are they
the same, or...? What happens in the underlying process of Solr for these two
queries?
-
From: Furkan KAMACI
Sent: Sunday, July 28, 2013 5:06 PM
To: solr-user@lucene.apache.org
Subject: Query Performance
What is the difference between:
q=*:*&rows=row_count&sort=id asc
and
q={X TO *}&rows=row_count&sort=id asc
Does the first one try to get all the documents but cut the result
Subject: Re: Query Performance
What is the difference between:
q=*:*&rows=row_count&sort=id asc
and
q={X TO *}&rows=row_count&sort=id asc
Does the first one try to get all the documents but cut the result, or are they
the same, or...? What happens in the underlying process of Solr for these two
queries?
Nowadays, I've got an urgent task to improve the OR query performance with
Solr.
I have deployed 9 shards with SolrCloud on two servers (each server: 16 cores,
32G RAM).
The total document count: 60,000,000; total index size: 9G.
According to the requirement, I have to use the OR query to get
On Wed, Jul 3, 2013 at 6:48 AM, huasanyelao huasanye...@163.com wrote:
Nowadays, I've got an urgent task to improve the OR query performance with
Solr.
I have deployed 9 shards with SolrCloud on two servers (each server: 16
cores, 32G RAM).
The total document count: 60,000,000; total index
On Jul 3, 2013, at 05:48 , huasanyelao huasanye...@163.com wrote:
Nowadays, I've got an urgent task to improve the OR query performance with
Solr.
I have deployed 9 shards with SolrCloud on two servers (each server: 16
cores, 32G RAM).
The total document count: 60,000,000; total index size