--
Regards,
Shalin Shekhar Mangar.
Any inputs on this?
I'm facing the same problem.
/browse/SOLR-7030
We even tried to remove the replicas to get around this issue. However, we
cannot do that for the collection that is serving our live traffic. Any
suggestions?
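(For reference, a replica is normally dropped with the Collections API
DELETEREPLICA action; the host, collection, shard and core node names below
are placeholders, not values from this cluster:

http://host:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node3

This only removes the replica from the shard, so it works around the symptom
rather than addressing whatever is resetting the connections.)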
Vijay
We are experiencing unexpected recovery events when a leader is sending
updates to a replica. A "java.net.SocketException: Connection reset" is
encountered when updating the replica, which triggers the recovery.
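One knob worth ruling out here is the intra-cluster update timeouts: in
solr.xml the <solrcloud> section accepts distribUpdateConnTimeout and
distribUpdateSoTimeout (milliseconds), which set the HTTP connect and socket
timeouts used when the leader forwards updates to replicas. A minimal sketch,
with example values only (the existing host/port/zk settings are omitted):

<solr>
  <solrcloud>
    <!-- timeouts (ms) for leader-to-replica update requests; values are examples -->
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
  </solrcloud>
</solr>

A short timeout would normally surface as a timeout rather than a connection
reset, so this is only about eliminating one variable.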
In our previous Solr 4.6.1 installation, update errors triggered retry
logic in the
Here are more details about our setup:
Zookeeper:
* 3 separate hosts in same rack as Solr cluster
* Zookeeper hosts do not run any other processes
Solr:
* total servers: 24 (plus 2 cold standbys in case of host failure)
* physical memory: 65931872 kB (62 GB)
* max JVM heap size: -Xmx10880m (10.6 GB)
I have uncovered some additional details in the shard leader log:
2015-01-11 09:38:00.693 [qtp268575911-3617101] INFO
org.apache.solr.update.processor.LogUpdateProcessor – [listings]
webapp=/solr path=/update
params={distrib.from=http://solr05.search.abebooks.com:8983/solr/listings/u
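To confirm which cores actually end up marked down or recovering after one of
these events, the cluster state can be dumped with the Collections API
CLUSTERSTATUS action (available since Solr 4.8); the host below is simply the
one from the log line above:

http://solr05.search.abebooks.com:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=listings

Each replica entry in the response has a state field (active, recovering,
down) taken from the cluster state in ZooKeeper.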
On 1/9/2015 4:54 PM, Lindsay Martin wrote:
I am experiencing a problem where Solr nodes go into recovery following an
update cycle.
<snip>
For background, here are some details about our configuration:
* Solr 4.10.2 (problem also observed with Solr 4.6.1)
* 12 shards with 2 nodes per shard
Hi all,
I am experiencing a problem where Solr nodes go into recovery following an
update cycle.
Examination of the logs indicates that the recovery is initiated by the shard
leader while processing regular update events, because the replica is
unreachable.
For example, the following is