I don’t know how well it worked, but for a while, I did this to warm up the
file buffers.
It should be OK if RAM is bigger than data. Though “cat” probably opens the
files with
the hint that it will never re-read the data.
find /solr-data-dir -type f | xargs cat > /dev/null
Basically, read every file and discard the output, so the OS pulls the data
into its page cache.
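The same warm-up can be written a bit more defensively (a sketch: `-print0`/`-0` guard against whitespace in file names, and the data directory is parameterized here rather than hard-coded):

```shell
# Warm the OS page cache by reading every file once and discarding the
# bytes. SOLR_DATA defaults to the example path used above; override it
# to point at your actual index directory.
SOLR_DATA="${SOLR_DATA:-/solr-data-dir}"
find "$SOLR_DATA" -type f -print0 | xargs -0 cat > /dev/null
```

As in the original one-liner, nothing is kept in the shell; the useful side effect is that the kernel now holds those file pages in cache.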
Our entire index does not fit in memory, but we still have acceptable
performance as of now for reads/queries. With BACKUP, however, we are
seeing an increase in OS memory usage. Given that, I am sure many systems
might be running with less memory that is still good enough for their
application.
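One rough way to see what a backup does to the OS cache (a Linux-only sketch; `Cached` in /proc/meminfo is the page-cache figure the kernel reports):

```shell
# Snapshot available memory and page-cache size; run this before and
# during a backup to watch the cache grow as the index files are read.
# Linux-specific: relies on the /proc/meminfo interface.
grep -E '^(MemAvailable|Cached):' /proc/meminfo
```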
Thanks for the information. I thought backup would mostly be disk
activity, but I understand now that RAM is involved as well. We indeed did
NOT have enough memory in this box: it is a 64GB box with an index size of
72GB being backed up. The read (real-time GET) performance was better
On 9/18/2018 11:00 AM, Ganesh Sethuraman wrote:
Hi
We are using Solr 7.2.1 with SolrCloud, with 35 collections and a 1-node ZK
ensemble (in the lower environment; we will have a 3-node ensemble) in AWS.
We are testing to see if we can run an async SolrCloud backup (
https://lucene.apache.org/solr/guide/7_2/collections-api.html#backup) done
every time we a
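For reference, the async BACKUP call from the linked Collections API page looks roughly like this (the host, collection name, backup name, and location below are placeholders, not values from this thread):

```shell
# Trigger an async collection backup; the async parameter makes the call
# return immediately with a request id that can be polled. The location
# must be a path writable by Solr (shared across nodes in a cluster).
REQ_ID="backup-001"
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&collection=mycollection&name=nightly&location=/backups&async=$REQ_ID"
# Poll the request status until it reports completed:
curl "http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=$REQ_ID"
```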