On 2/22/2021 12:52 AM, Danilo Tomasoni wrote:
we are running a Solr instance with around 41 million documents on a SATA-class
disk spinning at around 10,000 RPM.
We are experiencing very slow query responses (on the order of hours) with an
average of 205 segments.
We ran a test on an ordinary PC with an SSD, and there the same Solr instance
with the same data and the same number of segments was around 45 times faster.
We also tried a force optimize to improve performance, but it was very slow, so
we abandoned it.

Since we don't yet have enterprise-grade server SSDs, we are wondering whether,
in the meantime, defragmenting the solrdata folder can help.
The idea is that, due to many updates, each segment file has become fragmented
across different physical blocks. Put another way, each segment file is
non-contiguous on disk, and this can slow down Solr's responses.

The absolute best thing you can do to improve Solr performance is add memory.

The OS automatically uses unallocated memory to cache data read from disk. Because memory is far faster than any disk, even an SSD, index data served from that cache performs far better than data read from the disk itself.

I wrote a wiki page about it:

https://cwiki.apache.org/confluence/display/solr/SolrPerformanceProblems
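
If you want to see how much of your RAM the OS is currently using as a
disk cache, on Linux you can read /proc/meminfo. Here is a minimal
sketch (Linux-specific; run it on the Solr server):

# Report total RAM, page cache size, and available memory on Linux.
def read_meminfo():
    # /proc/meminfo lines look like "MemTotal:  16318536 kB"
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # value in kB
    return info

mem = read_meminfo()
gib = 1024 * 1024  # kB per GiB
print("Total RAM:  %.1f GiB" % (mem["MemTotal"] / gib))
print("Page cache: %.1f GiB" % (mem["Cached"] / gib))
print("Available:  %.1f GiB" % (mem["MemAvailable"] / gib))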

If you have sufficient memory, the speed of your disks will have little effect on performance. It's only in cases where there is not enough memory that disk performance will matter.
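
As a rough check of whether you have "sufficient" memory, compare the on-disk
size of the index with the memory available for caching. A minimal sketch (the
data directory path is an assumption; adjust it to your install):

import os

SOLR_DATA = "/var/solr/data"  # assumed path; point this at your index directory

def dir_size_bytes(path):
    # Sum the sizes of all files under path: the on-disk index size.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a segment merge may delete files mid-walk
    return total

print("Index size: %.1f GiB" % (dir_size_bytes(SOLR_DATA) / 1024**3))

The closer MemAvailable gets to that number, the more of the index the OS can
keep cached, and the less your disk speed matters.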

Thanks,
Shawn
