On 9/29/2019 11:44 PM, Yasufumi Mizoguchi wrote:
> I am trying some tests to confirm if single Solr instance can perform
> over 1000 queries per second(!).

In general, I would never expect a single instance to handle a large number of queries per second unless the index is REALLY small -- dozens or hundreds of very small documents. A 60GB index definitely does not qualify.

I don't think it will be possible to handle 1000 queries per second with a single server even on a really small index, but I've never actually tried.

> But now, although CPU usage is 40% or so and iowait is almost 0%,
> throughput does not increase over 60 queries per second.

A query rate of 60 per second is pretty good with an index size of 60GB. The low iowait would tend to confirm that the index is well cached by the OS.

If you need to handle 1000 queries per second, you need more copies of your index on additional Solr servers, with something in the mix to perform load balancing.
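As a sketch of "something in the mix to perform load balancing": a minimal haproxy backend spreading queries round-robin across replicas. The hostnames, ports, and replica count here are placeholders I made up for illustration, not anything from this thread.

```
frontend solr_front
    bind *:8983
    default_backend solr_replicas

backend solr_replicas
    balance roundrobin
    server solr1 solr1.example.com:8983 check
    server solr2 solr2.example.com:8983 check
    server solr3 solr3.example.com:8983 check
```

With health checks enabled, a replica that goes down is dropped from rotation automatically.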

Some thoughts:

With your -XX:MaxNewSize=128m setting, you are likely causing garbage collection to occur VERY frequently, which will slow things down. Solr's default GC settings include -XX:NewRatio=3, which sizes the new generation at one quarter of the heap (old generation three times the new) -- much larger than what you have set. A program like Solr that allocates a lot of memory will need a fairly large new generation.
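To make the difference concrete, here is a hedged sketch of the two option sets as they might appear in SOLR_OPTS (the 31g heap figures are assumptions based on the discussion below, not your actual config):

```
# Reported setup: 128MB new generation on a large heap -- the new gen
# fills and collects very frequently under heavy allocation.
SOLR_OPTS="-Xms31g -Xmx31g -XX:MaxNewSize=128m"

# Closer to Solr's defaults: NewRatio=3 gives the new generation a
# quarter of the heap (~7.75GB here), so minor GCs are far less frequent.
SOLR_OPTS="-Xms31g -Xmx31g -XX:NewRatio=3"
```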

I agree with the idea of setting the heap to 31GB. Setting it to 31GB will actually leave more usable memory available to Solr than setting it to 32GB: at 32GB and above the JVM can no longer use compressed object pointers (oops), so every object reference grows from 4 bytes to 8, eating up more than the extra gigabyte gains you.

Definitely check what Erick mentioned. If you're seeing what he described, adjusting how threads work might get you more throughput. But look into your new generation sizing first.

Thanks,
Shawn
