One mechanism that comes to mind is the swapping slowing down an update.
Here's the process:
- Leader sends the doc to a follower
- Follower times out
- Leader says "that replica must be sick, I'll tell it to recover"
The smoking gun here is whether you see any messages about
"leader-initiated recovery" in the logs.

gr
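A quick way to check for that message (the log path and the sample line below are illustrative, not from this thread; in Solr 6.x the message comes from LeaderInitiatedRecoveryThread):

```shell
# Write a toy log line so the example is self-contained, then grep for the
# telltale phrase; in practice you'd grep your real solr.log instead.
printf 'INFO o.a.s.c.LeaderInitiatedRecoveryThread put core_node3 into leader-initiated recovery\n' > /tmp/solr_sample.log
grep -c "leader-initiated recovery" /tmp/solr_sample.log
```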
On 12/15/2017 10:53 AM, Bill Oconnor wrote:
> The recovering server has a much larger swap usage than the other servers in
> the cluster. We think this is related to the mmap files used for indexes.
> The server eventually recovers but it triggers alerts for devops which are
> annoying.
>
> I
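Since the quoted report ties the recovering node's swap growth to mmap'd index files, two quick Linux-side checks may help (the `start.jar` process pattern is an assumption about how Solr was started; adjust to match your install):

```shell
# How much of the Solr JVM is currently swapped out
# (per-process VmSwap field in /proc on Linux):
grep VmSwap "/proc/$(pgrep -f start.jar | head -1)/status"

# How aggressively the kernel swaps; mmap-heavy search workloads are often
# run with a low value (common tuning advice, not something stated in this thread):
cat /proc/sys/vm/swappiness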
Hello,
We recently upgraded to SolrCloud 6.6. We are running on Ubuntu 14.x LTS
servers (VMware on Nutanix boxes). We have 4 nodes with 32GB RAM each and a
16GB max / 12GB min heap for the JVM. Usually it is only using 4-7GB.
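For reference, heap bounds like those described are typically set in solr.in.sh (a sketch; the file path and variable assume a standard service-style install):

```shell
# /etc/default/solr.in.sh (path varies by install)
# 12GB initial heap, 16GB max, matching the sizes described above:
SOLR_JAVA_MEM="-Xms12g -Xmx16g"
```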
We do nightly indexing of partial fields for all our docs ~200K. This usual