iamsanjay commented on PR #2612:
URL: https://github.com/apache/solr/pull/2612#issuecomment-2376310523
Latest Benchmark with different values for maxConnectionPerDestination
```
Benchmark               (maxConnectionPerHost)  (nodeCount)  (numReplicas)  (numShards)  (preGenerate)  (useSmallDocs)   Mode  Cnt     Score      Error  Units
CloudIndexing.indexDoc                       1            4              3            4        50000           false  thrpt    4  1118.424 ±  847.096  ops/s
CloudIndexing.indexDoc                       2            4              3            4        50000           false  thrpt    4  1701.146 ± 1943.553  ops/s
CloudIndexing.indexDoc                       4            4              3            4        50000           false  thrpt    4  1726.712 ± 2448.684  ops/s
CloudIndexing.indexDoc                       8            4              3            4        50000           false  thrpt    4  1650.018 ± 2238.615  ops/s
CloudIndexing.indexDoc                      16            4              3            4        50000           false  thrpt    4  1651.497 ± 2117.253  ops/s
CloudIndexing.indexDoc                      32            4              3            4        50000           false  thrpt    4  1655.557 ± 2347.372  ops/s
CloudIndexing.indexDoc                      64            4              3            4        50000           false  thrpt    4  1695.647 ± 2139.217  ops/s
CloudIndexing.indexDoc                     128            4              3            4        50000           false  thrpt    4  1737.919 ± 2381.445  ops/s
```
Throughput improves nicely as we increase maxConnectionPerDestination.
The pattern I see so far: when we increase maxConnectionPerDestination there are still failures, but all of them are Out Of Memory errors rather than the `cancel` stream error. Clearly, increasing `maxConnectionPerDestination` also requires more memory, which is understandable.
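
For reference, a minimal sketch of the knob being discussed, assuming it maps onto Jetty's `HttpClient#setMaxConnectionsPerDestination` (the client underlying Solr's HTTP/2 client); the value `16` and the URL are purely illustrative, not part of this PR:

```java
import org.eclipse.jetty.client.HttpClient;

public class MaxConnectionsDemo {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();

        // Jetty's default is 64. Each pooled connection carries its own
        // buffers and (for HTTP/2) stream state, so larger values trade
        // memory for indexing throughput, as the benchmark above suggests.
        httpClient.setMaxConnectionsPerDestination(16);

        httpClient.start();
        try {
            // Illustrative request against a local Solr node.
            httpClient.GET("http://localhost:8983/solr/admin/info/system");
        } finally {
            httpClient.stop();
        }
    }
}
```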