Did some further digging and found that with grouping enabled the query result
cache is not getting any inserts. Only disabling grouping adds an entry to the
query result cache. Is there a way we can cache grouped results? As per the
wiki there is a parameter group.cache.percent, but then again it doesn't
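For context, group.cache.percent is a request-time parameter (0-100) that enables grouping's own internal second-pass cache; it is separate from the queryResultCache. A sketch of a grouped request using it (the field name is a placeholder):

```text
q=*:*&group=true&group.field=category&group.cache.percent=100
```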
Hi
We are using Solr 6.6.2 with 10g of memory allocated to each node (2 slaves
and 1 master) and have around 30 docs. Our queries have
combinations of q, fq and facets, with grouping enabled...
We have enabled the filter cache and query result cache with 2048 entries each.
Recently we performed
issues
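For readers following the thread, the cache setup described above might look like this in solrconfig.xml (the 2048 sizes are from the mail; the classes and autowarm counts are illustrative guesses):

```xml
<filterCache class="solr.FastLRUCache" size="2048" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="32"/>
```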
I enabled LTR feature extraction and response times spiked. I suppose that was
to be expected, but are there any tips regarding performance? I have the
feature values cache set up as described in the docs:
Do I simply have to wait for the cache to fill up and hope that response times
go
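The feature values cache mentioned above is, per the LTR documentation, a cache named QUERY_DOC_FV; a sketch of the documented setup (the sizes are the example values from the docs, not tuned ones):

```xml
<cache name="QUERY_DOC_FV" class="solr.search.LRUCache" size="4096"
       initialSize="2048" autowarmCount="4096"
       regenerator="solr.search.NoOpRegenerator"/>
```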
Hi,
Did not encounter this issue with solr 6.x. But delta import with cache
executes the nested query for every element encountered in the parent query.
Since this select does not have a WHERE clause (because we are using the
cache), it takes a long time. So delta import with cache is very slow. My observation
over and over? Even one character of difference in it prevents reuse (even
different orderings, i.e. a clause like fq=id:(a OR b) will not be reused
for fq=id:(b OR a)).
So consider using the TermsQParserPlugin and set cache false for the fq clause.
Best,
Erick
On Fri, Jun 2, 2017 at 1:26 PM, Daniel Angelov
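Erick's suggestion above would look roughly like this (field and values are placeholders): {!terms} avoids the huge boolean clause, and cache=false keeps the one-off filter out of the filterCache:

```text
# instead of: fq=id:(a OR b OR c)
fq={!terms f=id cache=false}a,b,c
```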
typos in the previous mail, "fg" should be "fq"
Am 02.06.2017 18:15 schrieb "Daniel Angelov" <dani.b.ange...@gmail.com>:
> This means, that quering alias NNN pointing 3 collections, each 10 shards
> and each 2 replicas, a query with very long fg value, say 20 char
> string. First query with fq will cache all 20 chars 30 times (3 x 10
> cores). The next query with the same fg, could not use the same cores as
> the first time, i.e. could locate more mem in the unused replicas from the
> first query. And in my ca
be useful eventually hits
all the replicas. And the most common ones are run during autowarming
since it's an LRU queue.
To understand why there isn't a common cache, consider that the
filterCache is conceptually a map. The key is the fq clause and the
value is a bitset where each bit corresponds to a document.
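A toy sketch of that mental model (plain Python, not Solr's actual code): because the raw fq string is the map key, logically identical clauses written differently get cached separately, which is exactly the re-ordering pitfall discussed above.

```python
# Toy model (NOT Solr's implementation): the filterCache behaves like a map
# whose key is the literal fq string and whose value is a bitset over docs.
filter_cache = {}

def docs_matching(fq, num_docs):
    # stand-in for a real Lucene search; pretend even doc ids match
    return [i % 2 == 0 for i in range(num_docs)]

def cached_filter(fq, num_docs=8):
    # the raw fq string is the key, so id:(a OR b) and id:(b OR a)
    # are distinct keys even though they match the same documents
    if fq not in filter_cache:
        filter_cache[fq] = docs_matching(fq, num_docs)
    return filter_cache[fq]

cached_filter("id:(a OR b)")
cached_filter("id:(b OR a)")   # a second entry, not a cache hit
print(len(filter_cache))       # 2
```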
On 6/1/2017 11:40 PM, Daniel Angelov wrote:
> > > Is the filter cache separate for each host and then for each
> > > collection and then for each shard and then for each replica in
> > > SolrCloud? For example, on host1 we have, coll1 shard1 replica1 and
Thanks for the correction, Shawn. Yes, it's only the heap allocation settings
that are per host/JVM.
On Fri, Jun 2, 2017 at 9:23 AM, Shawn Heisey <apa...@elyograg.org> wrote:
On 6/1/2017 11:40 PM, Daniel Angelov wrote:
> Is the filter cache separate for each host and then for each
> collection and then for each shard and then for each replica in
> SolrCloud? For example, on host1 we have, coll1 shard1 replica1 and
> coll2 shard1 replica1, on host2 we have,
The heap allocation and cache settings are per host/JVM, not per
collection/shard. In SolrCloud you execute queries against a collection,
and every other collection may have a different schema, document ids, and
so on. So, to answer your question: query1 from coll1 can't use results
cached from
Is the filter cache separate for each host and then for each collection and
then for each shard and then for each replica in SolrCloud?
For example, on host1 we have, coll1 shard1 replica1 and coll2 shard1
replica1, on host2 we have, coll1 shard2 replica2 and coll2 shard2
replica2. Does this mean
Memory/cache aside, the fundamental Solr issue is that the Suggester build
operation will read the entire index, even though very few docs have the
relevant fields.
Is there a way to set a 'fq' on the Suggester build?
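As far as I know there is no build-time fq; what the suggesters do offer is context filtering at lookup time (a contextField in the suggester config plus suggest.cfq on the request), which filters results but does not shrink the build. A build request, for reference (names are placeholders):

```text
/solr/mycollection/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true
```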
> > hitting all docs, even the ones without
> > fields relevant to the suggester.
> > Shawn, I am using ZFS, though I think it's comparable to other setups.
> > mmap() should still be faster, while the ZFS ARC cache may prefer more
> > memory that other OS disk cach
> So, it sounds like I need enough memory/swap to hold the entire index. When will
> the memory be released? On a commit?
https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/store/MMapDirectory.html
talks about a bug on the close().
On 2 May
On 5/1/2017 10:52 PM, Damien Kamerman wrote:
> I have a Solr v6.4.2 collection with 12 shards and 2 replicas. Each
> replica uses about 14GB disk usage. I'm using Solaris 11 and I see the
> 'Page cache' grow by about 7GB for each suggester replica I build. The
> suggester index it
Hi all,
I have a Solr v6.4.2 collection with 12 shards and 2 replicas. Each replica
uses about 14GB disk usage. I'm using Solaris 11 and I see the 'Page cache'
grow by about 7GB for each suggester replica I build. The suggester index
itself is very small. The 'Page cache' memory is freed when
> Your description sounds like the inner cache is not reset on the next
> iteration of the outer loop.
>
> This may be connected to
> https://issues.apache.org/jira/browse/SOLR-7843 (Fixed in 5.4)
>
> Or it may be a different bug. I would make a simplest test case (based
> on DIH-db examp
You have nested entities and accumulate the content of the inner
entities in the outer one with caching on an inner one. Your
description sounds like the inner cache is not reset on the next
iteration of the outer loop.
This may be connected to
https://issues.apache.org/jira/browse/SOLR-7843
Could you give a bit more details. Do you mean one document gets the
content of multiple documents? And only on delta?
Regards,
Alex
On 16 Mar 2017 8:53 AM, "Sujay Bawaskar" <sujay.bawas...@firstfuel.com>
wrote:
Hi,
We are using DIH with cache(SortedMapBackedCache) with sol
Hi,
We are using DIH with cache (SortedMapBackedCache) with solr 5.3.1. We have
around 2.8 million documents in solr and the total index size is 4 GB. DIH
delta import is dumping all values of mapped columns into their respective
multi-valued fields. This is causing the size of one solr document to grow up to 2 GB
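A data-config sketch of the kind of setup being described (table and column names are made up); the child entity uses SortedMapBackedCache keyed on the join column:

```xml
<entity name="product" pk="id" query="SELECT id, name FROM product">
  <entity name="feature"
          query="SELECT product_id, description FROM feature"
          cacheImpl="SortedMapBackedCache"
          cacheKey="product_id"
          cacheLookup="product.id"/>
</entity>
```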
On 1/4/2017 3:45 AM, kshitij tyagi wrote:
> Problem:
>
> I am Noticing that my slaves are not able to use proper caching as:
>
> 1. I am indexing on my master and committing frequently, what i am noticing
> is that my slaves are committing very frequently and cache is not bein
I am noticing that my slaves are not able to use proper caching:
1. I am indexing on my master and committing frequently; what I am noticing
is that my slaves are committing very frequently and the cache is not being
built properly, so my hit ratio is almost zero for caching.
2. What changes I need to make so
Of the queries executed by that
handler, only five percent (15000) of them were found in the cache. The
rest of them were not found in the cache at the moment they were made.
Since these numbers come from the queryResultCache, this refers to the
"q" parameter. The filterCache handles things in the fq parameter
Hi Shawn,
Thanks for the reply:
here are the details for the query result cache (I am not using NOW in my
queries and most of the queries are common):
- class:org.apache.solr.search.LRUCache
- version:1.0
- description:LRU Cache(maxSize=1000, initialSize=1000,
autowarmCount=10
I found this, which intends to explore the usage of RoaringDocIdSet for solr:
https://issues.apache.org/jira/browse/SOLR-9008
This suggests Lucene’s filter cache already uses it, or did at one point:
https://issues.apache.org/jira/browse/LUCENE-6077
I was playing with id set implementations
On 12/1/2016 8:16 AM, Dorian Hoxha wrote:
> @Shawn
> Any idea why the cache doesn't use roaring bitsets ?
I had to look that up to even know what it was. Apparently Lucene does
have an implementation of that, a class called RoaringDocIdSet. It was
incorporated into the source code in O
@Shawn
Any idea why the cache doesn't use roaring bitsets ?
On Thu, Dec 1, 2016 at 3:49 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 12/1/2016 4:04 AM, kshitij tyagi wrote:
> > I am using Solr and serving huge number of requests in my application.
> >
> > I nee
I am seeing my hit ratio as 0 for all the caches. What does this mean and
> how this can be optimized.
If your hitratio is zero, then none of the queries related to that cache
are finding matches. This means that your client systems are never
sending the same query twice.
One possible reason for a zero hitrati
Hi All,
I am using Solr and serving huge number of requests in my application.
I need to know how can I utilize caching in Solr.
I am seeing hit ratio as 0 for all the caches in Plugins/Stats.
My configurations in solrxml are :
Can someone please help me out here to understand and optimise
-- Forwarded message --
From: kshitij tyagi <kshitij.shopcl...@gmail.com>
Date: Thu, Dec 1, 2016 at 4:34 PM
Subject: Queries regarding solr cache
To: solr-user@lucene.apache.org
Hi All,
I am using Solr and serving huge number of requests in my application.
I need to know how can I utilize caching in Solr.
As of now I am checking by clicking Core Selector → [core name] → Plugins / Stats.
I am seeing my hit ratio as 0 for all the caches. What does this mean and
how this can be
the OS will cache the index in memory eventually (assuming you have enough physical memory). See:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
If you don't have enough physical memory for that to happen adding
another core won't
help.
2> You can set your documentCache in solrconfig.xml high enough that
it'll
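Point 2> above refers to an entry like the following in solrconfig.xml (sizes are illustrative, not recommendations); note the documentCache is not autowarmed, since internal Lucene doc ids change from searcher to searcher:

```xml
<documentCache class="solr.LRUCache" size="100000" initialSize="10000" autowarmCount="0"/>
```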
downsides appear to be:
>>
>> * adding 2-10kB of html to each record and the performance hit this might
>> have on searching and retrieving
>> * additional load of ensuring we rebuild Solr's data every time some part of
>> that html changes (but this is minimal
and retrieving
* additional load of ensuring we rebuild Solr's data every time some part of
that html changes (but this is minimal in our use case)
* additional cores that we'll want to add to cache other data that isn't yet in
Solr
Is this a reasonable approach to avoid running yet another
to me privately.
>>
>> Pasted your email to me below for others.
>>
>> You are still confusing documents and results. Forget about the rows
>> parameter, for this discussion it's irrelevant.
>>
>> The QTime is the time spent searching. It is unaffected by whe
The QTime is unaffected by whether a document is in the documentCache or not.
It _solely_ measures the time that Solr/Lucene take to find the top N
documents (where N is the rows param) and
record their internal Lucene doc ID.
Increasing the rows or the document cache won't change anything about
the QTime. The documentCache is
After the first
query has been completed, the warming query sent
from curl is much faster. I assume it is because the document cache has been updated
with the documents from the modified query. A large number of our queries work
with the same document set, I am trying to get a warming query to populate the
document
Submitting the exact same query twice will return results from the
queryResultCache. I'm not entirely
sure that the firstSearcher events get put into the cache.
So if you change the query even slightly my guess is that you'll see
response times very close to your
original ones of over a second
%3Dcurrent_group}GroupIds_ms:*=20}
hits=2549 status=0 QTime=1263
If I run the same query after the index has registered I see a QTime of over a
second; the second time I run the query I see around 80ms. This leads me to
believe the warming did not occur or the query was not committed to cache
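A warming query can also be registered on the newSearcher event instead of being sent externally after each commit; a sketch reusing the field from the log line above (the full fq isn't recoverable from the garbled log, so only the q is shown):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">GroupIds_ms:*</str>
      <str name="rows">20</str>
    </lst>
  </arr>
</listener>
```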
That thread is pretty old and probably talking about the old(est) admin UI
(before 4.0). The cache stats can be found selecting the core in the
dropdown and then "Plugin/Stats".
See
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=32604180
Tomás
On Sat, Sep 24, 201
Solr evolves pretty quickly. The link you reference is from 2006,
almost 10 years ago, nothing about that link is relevant at this
point.
Go to http://host:port/solr. Then select a core from the drop-down.
From there, there should be a plugins/stats choice, then the "cache"
section.
I'm trying to view the Cache Stats.
After reading this thread: Cache Stats
<http://lucene.472066.n3.nabble.com/Cache-stats-td474558.html> , I can't
seem to find the Statistic page in the SOLR Admin.
Should I be installing some plug-in or do some configuration?
Yes. Thanks.
On 9/1/16 4:53 AM, Alessandro Benedetti wrote:
Are you looking for this ?
org/apache/solr/core/SolrConfig.java:243
CacheConfig conf = CacheConfig.getConfig(this, "query/fieldValueCache");
if (conf == null) {
Map args = new HashMap<>();
args.put(NAME, "fieldValueCache");
args.put("size", "1");
But the configuration is commented out (disabled). As the comments section
mentions,
"The fieldValueCache is created by default even if not configured here".
I would like to know what the configuration of the default
fieldValueCache created would be.
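For comparison, the commented-out example in the stock solrconfig.xml reads roughly as follows (worth verifying against your 5.4.1 copy); the implicitly created cache is whatever SolrConfig.java builds when this element is absent:

```xml
<fieldValueCache class="solr.FastLRUCache" size="512" autowarmCount="128" showItems="32"/>
```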
On 8/31/16 6:37 PM, Zheng Lin Edwin Yeo wrote:
If I didn't get your question wrong, what you have listed is already the
default configuration that comes with your version of Solr.
Regards,
Edwin
On 30 August 2016 at 07:49, Rallavagu wrote:
> Solr 5.4.1
>
> Wondering what is the default configuration for
Solr 5.4.1
Wondering what is the default configuration for "fieldValueCache".
something similar?
Mikhail, that's an interesting idea. If a terms list could stand in for a
cache that may be helpful. What I don't fully see is how the search would
work. Building an explicit negative terms query with returned IDs doesn't
seem possible as that list would be in the millions. To drastically speed my
If you remove all markers or start from an empty collection, and
do softCommit after every add, you can use /terms (TermsComponent) as a
"cache" of inserted doclist_ids.
To me it seems more like a transient cache for an ETL process; this state makes
sense only for a single load operation, and not a
now - every matched doc has a marker (searchid) which makes the Solr
search work. Since it's not possible to do a RDBMS like search joining the 2
doc types, I need to run the saved search: find docs where name=Johnson,
then drop the docs that are not in a doclist.
So, maybe if I manage a
Hi,
I would like to load solr documents (based on certain criteria) into an
application cache (Hazelcast).
Is there any better way to do it than firing paginated queries? Thanks.
Regards,
Anil
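If the concern with paginated queries is deep-paging cost, cursorMark is usually the better way to stream a whole result set out of Solr; the collection and field names below are placeholders, and the sort must end on the uniqueKey as a tiebreaker:

```text
/solr/mycollection/select?q=*:*&sort=id asc&rows=500&cursorMark=*
# pass the returned nextCursorMark as cursorMark on each following request
```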
On 4/13/2016 4:34 AM, Bastien Latard - MDPI AG wrote:
> Thank you all again for your good and detailed answer.
> I will combine all of them to try to build a better environment.
>
> *Just a last question...*
> /I don't remember exactly when I needed to increase the java heap.../
> /but is it
Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
P
20GB, which
means that at most Java is taking up 30GB, but it might be just 20GB.
The other 30-40GB is used by the operating system -- for disk caching
(the page cache). It's perfectly normal for physical memory to be
almost completely maxed out. The physical memory graph is nearly
useless for t
Does this mean that the OS will try to cache 47.48Gb for this index? (if
not, how can I know the size of the cache)
Or are you speaking about the page cache
<https://en.wikipedia.org/wiki/Page_cache>?
Question #3:
"documentCache does live in Java heap"
Is there a way
On 4/12/2016 3:35 AM, Bastien Latard - MDPI AG wrote:
> Thank you both, Bill and Reth!
>
> Here is my current options from my command to launch java:
> */usr/bin/java -Xms20480m -Xmx40960m -XX:PermSize=10240m
> -XX:MaxPermSize=20480m [...]*
>
> So should I do *-Xms20480m -Xmx20480m*?
> Why? What
This has answers about why giving enough memory to OS is important:
https://wiki.apache.org/solr/SolrPerformanceProblems#OS_Disk_Cache
And as per the solr admin dashboard, the OS cache (physical memory) is almost
fully utilized, whereas the memory allocated to the JVM is not used, so it's
best to lower the JVM memory
... (80Gb all together)
BTW: what's the difference between dark and light grey in the JVM
representation? (real/virtual memory?)
NOTE: I have only tomcat running on this server (and this is my live
website - /i.e.: quite critical/).
So if document cache is using the OS cache, this might
As per the solr admin dashboard's memory report, the solr JVM is not using
more than 20 gb of memory, whereas physical memory is almost full. I'd set
xms=xmx=16 gb and let the operating system use the rest. And regarding caches:
the filter cache hit ratio looks good so it should not be a concern. And afaik,
document
You do need to optimize to get rid of the deleted docs probably...
That is a lot of deleted docs
Bill Bell
Sent from mobile
> On Apr 11, 2016, at 7:39 AM, Bastien Latard - MDPI AG
> wrote:
>
> Dear Solr experts :),
>
> I read this very interesting post
Robert Brown wrote:
> Before I go out and throw more RAM into the system, in the above
> example, what would you recommend?
That you try to determine what causes the slow response times.
Replay logged queries (thousands of queries, not just a few) and see if the
pauses
Having enough memory available to cache all your index data offers the
best possible performance.
You may be able to achieve acceptable performance when you don't have
that much memory, but I would try to make sure there's at l
Hi,
If my index data directory size is 70G, and I don't have 70G (plus heap,
etc) in the system, this will occasionally affect search speed right?
When Solr has to resort to reading from disk?
Before I go out and throw more RAM into the system, in the above
example, what would you
Hi,
Your cache will be cleared on soft commits - every two minutes. It seems
that it is either configured to be huge, or you have big documents and are
retrieving all fields, or don't have lazy field loading set to true.
Can you please share your document cache config and heap settings.
Thanks
Problem starts with autowarmCount="5000" - that executes 5000 queries
when a new searcher is created and, as the queries are executed, the document
cache is filled. If you have a large queryResultWindowSize and queries return
a big number of documents, that will eat up memory before the new searcher is
index.
Maybe TTL was not the right word to use here. I wanted to learn the
criteria for an entry to be evicted.
The time varies with the number of new documents fetched. This is an LRU
cache whose size is configured in solrconfig.xml. It's pretty much
unpredictable. If for some odd reason every request ge
Your index is relatively small so filter cache of initial size of 1000
entries should take around 20MB (assuming single shard)
Thanks,
Emir
On 18.03.2016 17:02, Rallavagu wrote:
On 3/18
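Emir's 20MB estimate follows from each filterCache entry being a bitset of maxDoc bits; a quick back-of-the-envelope check (the index size is an assumption, since the actual maxDoc isn't stated in the thread):

```python
# Back-of-the-envelope for the "around 20MB" estimate above.
max_doc = 160_000                  # assumed number of docs in the core
bytes_per_entry = max_doc / 8      # one bit per document in each bitset
entries = 1000                     # initial filterCache size from the mail
total_mb = entries * bytes_per_entry / (1024 ** 2)
print(round(total_mb, 1))          # about 19 MB, i.e. "around 20MB"
```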
something like id:[1
TO 100] and then id:[100 TO 200] etc. Make sure that it is done within
that two-minute period if there is any indexing activity.
Would the existing cache be cleared while an active thread is
performing/receiving a query?
On 3/18/16 8:56 AM, Emir Arnautovic wrote:
Solr 5.4 embedded Jetty
Is it the right assumption that any document that is returned as
a response to a query is cached in the "Document Cache"?
Essentially, if I request any entry like /select?q=id:
will it be cached in the "Document Cache"? If yes, what is the TTL?
Thanks in advance
Thanks for the recommendations Shawn. Those are along the lines I am thinking
as well. I am reviewing the application also.
Going by the note on cache invalidation every two minutes due to
soft commit, I wonder how it would go OOM in just two minutes, or is it
likely that a thread is holding
First, I want to make sure when you say "TTL", you're talking about
documents being evicted from the documentCache and not the "Time To Live"
option whereby documents are removed completely from the index.
The time varies with the number of new documents fetched. This is an
On 3/18/2016 8:22 AM, Rallavagu wrote:
> So, each soft commit would create a new searcher that would invalidate
> the old cache?
>
> Here is the configuration for Document Cache
>
> initialSize="10" autowarmCount="0"/>
>
> true
In an earlier mes
So, each soft commit would create a new searcher that would invalidate
the old cache?
Here is the configuration for Document Cache
autowarmCount="0"/>
true
Thanks
On 3/18/16 12:45 AM, Emir Arnautovic wrote:
Hi,
Your cache will be cleared on soft commits - every two minu
Hi,
I have a query that takes about 5secs to complete. The result count is
about 250 million, and row size is about 25.
The problem is that this query result is not getting loaded to the query
cache, so it takes ~5secs every time it's issued. I also confirmed this by
looking at the cache stats
I did change the JVM heap size from 16GB to 24GB. Will that make a
difference?
Regards,
Edwin
On 28 January 2016 at 22:10, Alessandro Benedetti <abenede...@apache.org>
wrote:
> As already specified you need to distinguish between Solr Cache and OS
> Memory mapped files.
>
Hi,
During some testing, I've found that the queryResultCache is not used
when I use grouping.
Is there another cache that is being used in this scenario, and if so,
which, and how can I ensure they're providing a real benefit?
Thanks,
Rob
As already specified you need to distinguish between Solr Cache and OS
Memory mapped files.
What you should clearly notice in your situation is an increase of space
for the OS Memory mapped files.
Which means faster access to index segments (almost all the different data
structures are memory
On 1/27/2016 8:11 PM, Zheng Lin Edwin Yeo wrote:
> I would like to find out, is the cache in the Solr cleared when I shut down
> the Solr instant and restart it?
>
> I am suspecting that the cache is not entirely cleared, because when I try
> to do a search on the same query
Hi,
I would like to find out, is the cache in the Solr cleared when I shut down
the Solr instant and restart it?
I am suspecting that the cache is not entirely cleared, because when I try
to do a search on the same query as I did before, the search still returns
a QTime that is much faster
Thanks Erick and Shawn for your reply.
We have recently upgraded the server RAM from 64GB to 192GB, and I noticed
that this caching occurs after we upgraded the RAM. Previously, the cache
may not even be preserved in the same Solr session.
So is it true that the upgrading of the server RAM
The queryResultCache only gets used for
the _exact_ same query
with a different rows parameter. And it doesn't store the full result
set, just the size configured
in solrconfig.xml.
Mikhail was pointing you to the admin>>core>>plugins/stats>>cache>> page.
You can fire your alternate queries and filt
Thanks. The statements on
http://wiki.apache.org/solr/SolrCaching#showItems are not explicit
enough to answer my question.
Hi,
some of my solr indices have a low cache-hit-ratio.
1 Does sorting the parts of a single filter-query have impact on
filter-cache- and query-result-cache-hit-ratio?
1.1 Example: fq=field1:(2 OR 3 OR 1) to fq=field1:(1 OR 2 OR 3) -> if
1,2,3 are randomly sorted
2 Does sorting the pa
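One client-side mitigation for point 1: canonicalize the clause order before sending, so logically identical filters map to one cache key. A sketch (the helper name is made up):

```python
def normalize_fq(field, values):
    # sort the values so fq=field1:(2 OR 3 OR 1) and fq=field1:(1 OR 2 OR 3)
    # render to the identical string, hence the identical cache key
    return "{}:({})".format(field, " OR ".join(sorted(values)))

print(normalize_fq("field1", ["2", "3", "1"]))  # field1:(1 OR 2 OR 3)
```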
On Mon, Nov 30, 2015 at 12:46 PM, Johannes Siegert <
johannes.sieg...@marktjagd.de> wrote:
> Hi,
>
> some of my solr indices have a low cache-hit-ratio.
>
> 1 Does sorting the parts of a single filter-query have impact on
> filter-cache- and query-result-cache-hit-ratio?
Hi,
I'm trying to import my data from an sql database using the
dataimporthandler. For some nested entity I want to use the cache to cache
the result of my stored procedure. My config looks like this
>cacheLookup="product
Hello Jean-Philippe,
You either call it 300 times with different param values without a cache,
or load all rows once and cache them.
The SQL examples in the doc explain this clearly, I suppose.
On Wed, Nov 25, 2015 at 2:27 PM, Jean-Philippe Quéméner <
jeanphilippe.queme...@gmail.com> wrote:
Hi,
Is there a way to make solr not cache the results when we send the query?
(mainly for query result cache). I need to still enable doc and filter
caching.
Let me know if this is possible,
Thanks
Nitin