Hi SOLR Community,
Just following up here with an update. I found this article which goes into depth on the field cache, though it stops short of discussing how it handles eviction. Can anyone confirm if this info is right?
https://lucidworks.com/post/scaling-lucene-and-solr/
Also, can anyone speak to how the field cache handles evictions?
Best
Thanks Shawn. Something seems different between the two, because CaffeineCache is showing much higher volume per hour than our previous implementation was. So I guess it is more likely to be something actually expected due to a change in what is getting kept/warmed, so I'll look
Hi SOLR Community,
I'm investigating a weird behavior I've observed in the admin page for
caffeine cache metrics. It looks to me like on the older caches, warm-up
queries were not counted toward hit/miss ratios, which of course makes
sense, but on Caffeine cache it looks like they are. I'm using
Hi SOLR Community,
I've been trying to understand how the field cache in SOLR manages its evictions, and it is not easy to answer from the code or documentation the simple question of when and how something gets evicted from the field cache. This cache also doesn't show hit ratio or total hits
Thanks Shawn! This is great clarity, really appreciate it. I'll proceed to
performance testing of the Caffeine Cache
Is there a Jira issue needed for tracking these two documentation updates (
here
<https://lucene.apache.org/solr/guide/8_8/query-settings-in-solrconfig.html#filtercache>
an
On 2/22/2021 1:50 PM, Stephen Lewis Bianamara wrote:
(a) At what version did the caffeine cache reach production stability?
(b) Is the caffeine cache (and really any implementation) able to be used for any cache, or are there restrictions about which cache implementations may be used for which
Hi SOLR Community,
I have a question about cache implementations based on some seemingly
inconsistent documentation I'm looking at. I'm currently inquiring about
8.3, but more generally about solr version 8 too for upgrade planning.
The description in the docs for cache implementations says
Hey Erick,
So I am investigating the point where we can limit the values that are cached using {!cache=false} (we already use it in some of our cases).
So in general there are zero evictions on the filter cache side, but whenever we hit this max limit there is a spike in evictions as well (which
Well, when you hit the max capacity, cache entries get aged out and are
eligible for GC, so GC
activity increases. But for aging out filterCache entries to be noticeable, you
have to be
flushing a _lot_ of them out. Which, offhand, makes me wonder if you’re using
the filterCache
appropriately
Hi,
I am trying to disable the filter cache for some filter queries, as they contain unique ids and cause cache evictions. By adding {!cache=false} the fq is no longer stored in the filter cache; however, I have similar conditions in facet.query, and using facet.query={!cache=false}(color:red AND id:XXX
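The local-params syntax applies to both parameters; as a sketch (field values illustrative, id:XXX left as in the original):

```
fq={!cache=false}id:XXX
facet.query={!cache=false}(color:red AND id:XXX)
```

Note that {!cache=false} must be the very first thing in the parameter value for Solr to parse it as local params.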
You should not have other processes/container running on the same node. They
potentially screw up your os cache making things slow, eg if the other
processes also read files etc they can remove things from Solr from the Os
cache and then the os cache needs to be filled again.
What performance
My thinking in setting up SOLR is that, given the amount of memory available, SOLR should be able to keep the entire index on the heap (I know the OS will also cache the disk blocks).
My solrconfig has the following:
<... class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="..."/>
I have modified the documentCache size to 8192 from 512 but it has
$ bin/solr -e techproducts
...
# mostly empty caches (techproducts has a single static warming query)
$ curl -sS 'http://localhost:8983/solr/techproducts/admin/mbeans?wt=json&indent=true&cat=CACHE&stats=true'
| grep -E
'CACHE.searcher.(queryResultCache|filterCache).(inserts|hits|lookups)'
"CACHE.searcher.queryResultCache.lookups":
8984/solr/techproducts/select?q=popularity:[5%20TO%2012]&fq=manu:samsung%20OR%20manu:apple
"solr.core.techproducts":{
"CACHE.searcher.filterCache":{
"lookups":3,
"idleEvictions":0,
"evictions":0,
"cumulative_ins
Hello,
I was trying to analyze the filter cache performance and noticed a strange
thing. Upon searching with fq, the entry gets added to the cache the first
time. Observing from the "Stats/Plugins" tab on Solr admin UI, the 'lookup'
and 'inserts' count gets incremented.
However, i
Again depending on the version of Solr, but the metrics end point (added in
6.4) has a TON of information. Be prepared to wade through it for half a day to
find out the things you need ;). There are something like 150 different metrics
returned…
Frankly I don’t remember if cache RAM usage
@Vadim Ivanov<mailto:vadim.iva...@spb.ntk-intourist.ru>
Thank you!
From: Vadim Ivanov
Sent: Tuesday, February 18, 2020 15:27
To: solr-user@lucene.apache.org
Subject: RE: A question about solr filter cache
Hi!
Yes, it may depend on the Solr version
Solr 8.3
> @Erick Erickson<mailto:erickerick...@gmail.com> and @Mikhail Khludnev
>
> got it, the explanation is very cl
Thank you @Vadim Ivanov<mailto:vadim.iva...@spb.ntk-intourist.ru>
I know that admin page, but I cannot find the memory usage of the filter cache (it only has "CACHE.searcher.filterCache.size", which I think is the number of used slots in the filterCache).
Here is my output (solr version 7.3.
That’s the upper limit of a filter cache entry (maxDoc/8). For low numbers of hits, more space-efficient structures are used; specifically, a list of doc IDs is kept. So say you have an fq clause that matches 10 docs: the filterCache entry is closer to 40 bytes + sizeof(query object), etc.
Still
You can easily check amount of RAM used by core filterCache in Admin UI:
Choose core - Plugins/Stats - Cache - filterCache
It shows useful information on configuration, statistics and current RAM
usage by filter cache,
as well as some examples of current filtercaches in RAM
Core, for ex, with 10
If 1GB would make solr go out of memory by using a filter query cache,
then it would have already happened during the initial upload of the
solr documents. Imagine the amount of memory you need for one billion
documents..
A filter cache would be the least of your problems. 1GB is small
Hi
I want to understand the internals of the solr filter cache, especially its memory usage.
I googled some pages:
https://teaspoon-consulting.com/articles/solr-cache-tuning.html
https://lucene.472066.n3.nabble.com/Solr-Filter-Cache-Size-td4120912.html
(Erick Erickson's answer)
All of them said its
Hi!
I have some custom cache set up in solrconfig XML for a solr cloud cluster in
Kubernetes. Each node has Kubernetes persistence set up. After I execute a
“delete pod” command to restart a node it goes into Replication Recovery
successfully but my custom cache’s warm() method never gets
Hello,
Which particular cache are you talking about?
Dear Sir,
Does Solr support segment-level caching, so that if only a single segment changed, only a small portion of the cached data needs to be refreshed?
--
with thanks and regards,
lawrence antony.
Given that speed is our focus (and not storage resources at this time), and that optimization builds the index again by merging, does this mean that the whole cache is also refreshed? If yes, this would mean that we would be flushing the cache every day, and if we really want to go ahead with this, I think
Yes. ZooKeeper has a “blob store”. See the Blob Store API in the ref guide.
Minor nit. You will be creating a jar file, and configuring your collection to
be able to find the new jar file. Then you _upload_ both to ZooKeeper and
reload your collection. The rest should be automatic, Solr
Thanks for the response.
Eric,
Are you suggesting to download this file from ZooKeeper, and upload it after changing it?
Mikhail,
Thanks. I will try the solrCore.SolrConfig.userCacheConfigs option.
Any idea why CoreContainer->getCores() would be returning an empty list for me?
Hello, Abhishek.
It seems the Config API lacks user-cache functionality, thus it deserves a Jira issue.
Inserting a user cache at runtime seems undoable; the closest option is to modify solrCore.SolrConfig.userCacheConfigs and obtain a new SolrIndexSearcher after that, but the latter is a tricky thing to achieve.
Another idea is to define one user cache in solrconfig holding other
Hi,
I am trying to make use of the User Defined cache functionality to optimise a particular workflow.
We are using Solr 7.4.
Step 1. I noticed that first we would have to add a Custom Cache entry in solrconfig.xml.
What is its Config API alternative for SolrCloud?
I couldn't find one at,
https
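For reference, a user-defined cache in solrconfig.xml looks roughly like this (a sketch; the name and sizes are illustrative, and on Solr 7.4 the LRU classes are the usual choice):

```
<query>
  <cache name="myUserCache"
         class="solr.LRUCache"
         size="4096"
         initialSize="1024"
         autowarmCount="0"/>
</query>
```

Custom code then reaches it through SolrIndexSearcher.getCache("myUserCache").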
You must show us the _exact_ filter queries you’re using, or at least a representative sample.
Bumping the cache up very high is almost always the wrong thing to do. Each entry takes approximately maxDoc/8 bytes, so unless your corpus is very small you’ll eventually blow up memory (for example, at 100 million documents each filterCache entry can be about 12.5 MB).
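For sizing, the relevant solrconfig.xml element looks like this (values illustrative; keep them modest given the maxDoc/8 cost per entry):

```
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```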
To Markus
Hello,
What is missing in that article is that you must never use NOW without rounding it down in a filter query. If you use it, round it down to an hour, day, or minute to prevent flooding the filter cache.
Regards,
Markus
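To illustrate the rounding (field name illustrative):

```
fq=timestamp:[NOW-7DAYS TO NOW]          NOW resolves to the millisecond, so every
                                         request creates a brand-new filterCache entry
fq=timestamp:[NOW/DAY-7DAYS TO NOW/DAY]  rounded to the day, so one entry is reused
                                         for all requests that day
```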
You can refer to this one:
https://teaspoon-consulting.com/articles/solr-cache-tuning.html
HTH,
Atita
Hi Shwan,
Many filters are common among the queries. AFAIK, filter cache entries are created per filter, and by that logic one should get a good hit ratio for those cached filter conditions. I tried to create a cache of 100K size and that too was not producing a good hit ratio. Any document/suggestion
On 5/29/2019 6:57 AM, Saurabh Sharma wrote:
What can be the possible reasons for low cache usage?
How can I leverage cache feature for high traffic indexes?
Your usage apparently does not use the exact same query (or filter
query, in the case of filterCache) very often.
In order to achieve
Hi All,
I am trying to run an index on solr cloud version 7.3.1 with 3 nodes.
Planning to index the records using full index once a day and delta index
every 30 minutes. Purpose to keep stale index was to utilize the cache of
solr. But to my surprise, when I put real traffic on this index . cache
Under normal processing we fill the FVC up to 137 and everything runs happily. This roughly corresponds to the number of facetable attributes on the front end.
But every so often (it seems like it might correlate to indexprop timing), we see the FVC climb up over 200.
When it happens, it drives a bunch of extra CPU as the FVC cache hit ratio decreases drastically (at 137, happy mode is right about a 100% hit ratio
I have been looking into the cache's low hit rate and I have discovered that all of its content gets pushed out by some facet results. This field in particular has a very large number of distinct values. I have tried facet.method=fc and facet.method=fcs, but in both cases it seems like an entry gets created in the filter cache for each value of this field, until the cache is full. Is there a way to "blacklist" a facet field from the filter cache? We think we'd achieve a much better hit rate with those fields not interfering with the "good" filters.
Hmm. I am doing the same thing. But, somehow in my browser, after I select the
core, it does not stay selected to view the stats/cache.
Attaching the gif for when I try it.
Anyway, that is a different issue from my side. Thanks for your input.
-Lewin
On 4/9/2019 12:38 PM, Lewin Joy (TMNA) wrote:
I just tried to go to the location you specified. I could not see a "CACHE". I can see the "Statistics" section.
I am using Solr 7.2 on solrcloud mode.
If you are trying to select a *collection* from a dropdow
are getting a different value for allBuckets.
And this got corrected after I explicitly applied one facet value as a filter one time. I am assuming it cleared the cache for that filter.
Now, I have a few other facet values with a similar issue. I am assuming this issue would get resolved if I
On 4/9/2019 11:51 AM, Lewin Joy (TMNA) wrote:
Hmm. I only tried reloading the collection as a whole. Not the core reload.
Where do I see the cache sizes after reload?
If you do not know how to see the cache sizes, then what information are
you looking at which has led you to the conclusion
Thank you for email, Alex.
I have the autowarmCount set as 0.
So, this shouldn't prepopulate with old cache data.
-Lewin
-Original Message-
From: Alexandre Rafalovitch
Sent: Monday, April 8, 2019 6:45 PM
To: solr-user
Subject: Re: Solr Cache clear
You may have warming queries
Hmm. I only tried reloading the collection as a whole. Not the core reload.
Where do I see the cache sizes after reload?
-Lewin
-Original Message-
From: Shawn Heisey
Sent: Monday, April 8, 2019 5:10 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cache clear
On 4/8/2019 2:14 PM
You may have warming queries to prepopulate your cache. Check your
solrconfig.xml.
Regards,
Alex
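Such warming queries are configured as a newSearcher listener in solrconfig.xml; a minimal sketch (query values illustrative):

```
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="fq">inStock:true</str>
    </lst>
  </arr>
</listener>
```

With such a listener (or nonzero autowarmCount settings on the caches), the caches will repopulate right after a reload or commit.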
On 4/8/2019 2:14 PM, Lewin Joy (TMNA) wrote:
How do I clear the solr caches without restarting Solr cluster?
Is there a way?
I tried reloading the collection. But, it did not help.
When I reload a core on a test setup (solr 7.4.0), I see cache sizes reset.
What evidence are you seeing
How do I clear the solr caches without restarting Solr cluster?
Is there a way?
I tried reloading the collection. But, it did not help.
Thanks,
Lewin
Hello,
We are using Cloudera 5.12.1 with Solr 4.10.3.
We want to store our index in memory, since HDFS, where the data is stored, is too slow.
We were trying to use the Solr HDFS block cache, but we are struggling with warming it to be sure that the whole index is in memory after updating some documents.
We
On 1/30/2019 2:27 AM, sachin gk wrote:
To support an existing functionality we have turned openSearcher to false. Is there a way to flush the cache programmatically?
Executing a commit with openSearcher=true is the only way I know of
without custom code.
When you commit with openSearcher
I'd also ask why you care? What benefit do you think you'd get
if you did explicitly flush the document cache?
You seem to think there's some benefit to programmatically
flushing the cache, but you haven't stated what that benefit is.
I suspect that you are making some assumptions
You don’t need to do that. When there is a commit, Solr creates a new Searcher
with an empty document cache.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Thanks Shawn,
To support an existing functionality we have turned openSearcher to false. Is there a way to flush the cache programmatically?
Regards,
Sachin
On 1/29/2019 11:27 PM, sachin gk wrote:
Is there a way to clear the *document cache* after we commit to the indexer.
All Solr caches are invalidated when you issue a commit with
openSearcher set to true. The default setting is true, and normally it
doesn't get set to false unless you
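Concretely, the cache-invalidating commit can be issued as a stock update message (collection name and handler path are the usual defaults, assumed here):

```
<commit openSearcher="true"/>
```

posted to the collection's /update handler, or equivalently passed as the commit=true&openSearcher=true request parameters. Once the new searcher opens, all caches start empty apart from whatever autowarming refills.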
Hi All,
Is there a way to clear the *document cache* after we commit to the indexer.
--
Regards,
Sachin
We use a large number of facet.field params. In the solrconfig.xml file we have changed the cache size and reloaded the cores as part of this modification.
Reloading cores is not clearing the cache. Is there any reason why we are getting this much difference in response time for similar types of queries?
Disabling or reducing autowarming can help too, in addition to cache size
reduction.
Edward
Why would you want to? This sounds like an XY problem, there's some
problem you think would be cured by clearing the cache. What is
that problem?
Because I doubt this would do anything useful, pretty soon the caches
would be filled up again and you'd be right back where you started and
the real
On 11/20/2018 9:25 AM, Rajdeep Sahoo wrote:
Hi all,
Without restarting is it possible to clear the cache?
You'll need to clarify what cache you're talking about, but I think for
the most part that if you reload the core (or collection if running
SolrCloud) that all caches should be rebuilt
Hi all,
Without restarting is it possible to clear the cache?
Is there any reason to use anything other than LRUStatsCache or LocalStatsCache?
It seems like the LRU implementation would be the fastest of the global IDF
implementations.
Also, any experience with the slowdown due to global IDF? I know that could be
done without an additional call. And I
On Wed, Sep 19, 2018 at 9:44 AM Vincenzo D'Amore wrote:
> Looking at Solr Admin Panel I've found the CACHE -> fieldValueCache tab
> where all the values are 0.
>
> [...]
>
> what do you think, is that normal?
Yep, that's completely normal.
That cache is only used by certain operations
Hi all,
sorry if I bothered you all but in these days I'm just struggling what's
going on with my production servers...
Looking at Solr Admin Panel I've found the CACHE -> fieldValueCache tab
where all the values are 0.
class:org.apache.solr.search.FastLRUCache
description:Concurrent LRU Ca
Hi Bojan,
This will be fixed in the upcoming 7.5.0 release. Thank you for reporting this!
Hi,
it seems the format of cache mbeans changed with 7.4.0. And from what I see, a similar change wasn't made for other mbeans, which may mean it was accidental and may be a bug.
In Solr 7.3.* format was (each attribute on its own, numeric type):
mbean:
solr:dom1=core,dom2=gettingstarted,dom3
Which are you using, schema.xml or managed-schema? You must be using one or the other, but not both.
It's likely you're using managed-schema, that's where changes need to be made.
Best,
Erick
Hi Govind,
Thanks for the reply. Please see below the schema.xml and managed-schema
1: schema.xml
int, float, long, date, double, including the "Trie" variants.
2 : managed.schema
- For maximum indexing performance, use the
Hello Team,
Need suggestions on Solr indexing. We are using Solr 6.6.3 and Nutch 1.14.
I see an unknown field 'cache' error while indexing the data to Solr, so I added the below entry in the field section of schema.xml for solr.
Tried indexing the data again and this time the error is unknown field 'date'. However I have the
Please suggest
There might be something like fq=filter(foo:[2 TO 3]) OR filter(foo:[3 TO
100])
Hi,
No, it will not, and it does not make sense to: Solr would still have to apply the filter on top of the cached results, since the cached set can include documents with value 2. You can consider a whole query as an entry into the cache.
Thanks,
Emir
Hi All,
I am confused about how to hit the filterCache.
If a filter query with range [3 TO 100] is not yet cached in the filterCache, and the filterCache already contains the filter query range [2 TO 100],
my question is: "Does this filter query range [3 TO 100] fetch the DocSet from the filterCache entry for range [2 TO 100]?"
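The cache lookup is keyed by the whole query, not by semantic overlap; as a sketch (field name illustrative):

```
fq=foo:[2 TO 100]   cached under its own filterCache key
fq=foo:[3 TO 100]   a different key; the [2 TO 100] entry is not consulted
```

If you want sub-clauses cached and reused individually, Solr's filter() syntax gives each clause its own filterCache entry, e.g. fq=filter(foo:[2 TO 3]) OR filter(foo:[3 TO 100]).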
It makes sense to check this cache right after a commit.
Hi,
I've written down my query over stack-overflow. Here is the link for that :
https://stackoverflow.com/questions/49993681/preventing-solr-cache-flush-when-commiting
In short, I am facing troubles maintaining my solr caches when commits
happen and the question provides detailed description
Thank you for the answer. We will improve our system based on what you said.
When a commit opens a new searcher, it will invalidate all caches. Increasing the commit interval can result in better cache utilisation and better average query latency. You need to monitor your caches to see if cache utilisation justifies having caches, or whether you are structuring queries properly so caches can be utilised.
You mentioned
so we plan to use the queryResultCache across the entire set of shards.
Is it the right solution to use an external cache (for example, Redis, Memcached, Apache Ignite, etc.)?