[jira] [Commented] (SOLR-3126) We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
[ https://issues.apache.org/jira/browse/SOLR-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13210967#comment-13210967 ]

Mark Miller commented on SOLR-3126:
------------------------------------

Hmm... somehow this has made regular replication recovery unstable in some situations (fairly often on Apache Jenkins, less often locally). Trying to figure out where and how.

We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
---------------------------------------------------------------------------------------------------------

Key: SOLR-3126
URL: https://issues.apache.org/jira/browse/SOLR-3126
Project: Solr
Issue Type: Improvement
Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Fix For: 4.0
Attachments: SOLR-3126.patch, SOLR-3126.patch

Just more efficient - especially on cluster shutdown/start where the replicas may all be up to date and match anyway.
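For readers following along, the change under discussion makes startup recovery first attempt a cheap sync against the leader and only fall back to full index replication if that fails. The sketch below is a minimal illustration of that control flow; the class and method names are hypothetical placeholders, not the actual RecoveryStrategy code in the patch.

    // Illustrative sketch only: class and method names here are placeholders,
    // not the actual Solr RecoveryStrategy/PeerSync API.
    public class StartupRecoverySketch {

      /** Try a cheap sync with the leader first; fall back to full replication only if it fails. */
      public void recover(String leaderUrl) {
        // Quick sync: compare recent update versions with the leader and try to
        // catch up from the local update log instead of copying the whole index.
        boolean inSync = attemptQuickSyncWithLeader(leaderUrl);

        if (inSync) {
          // Common after a full cluster shutdown/start: the replica already
          // matches the leader, so no index copy is needed.
          return;
        }

        // Expensive fallback: full index replication from the leader.
        replicateFromLeader(leaderUrl);
      }

      private boolean attemptQuickSyncWithLeader(String leaderUrl) {
        // Placeholder for the PeerSync-style version exchange with the leader.
        return false;
      }

      private void replicateFromLeader(String leaderUrl) {
        // Placeholder for a full index fetch via the replication mechanism.
      }
    }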
[jira] [Commented] (SOLR-3126) We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
[ https://issues.apache.org/jira/browse/SOLR-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13211017#comment-13211017 ]

Mark Miller commented on SOLR-3126:
------------------------------------

I *think* I've made some progress on tracking this down. It looks like the 4 second wait we do - to make sure no updates that started against stale state are still finishing - might perhaps not be long enough after some things were rearranged. Boosting that wait is getting me better results - still testing though.
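The wait being tuned here is a fixed pause taken before the quick sync starts, so that updates already in flight (routed using stale cluster state) have time to complete. A rough sketch of that idea is below; the class name, method name, and parameterized duration are assumptions for illustration only - the comment does not say what the boosted value is.

    import java.util.concurrent.TimeUnit;

    // Illustrative sketch only: names are placeholders, not Solr's actual recovery code.
    public class InFlightUpdateWait {

      private final long waitSeconds;

      // The original fixed pause was about 4 seconds; the comment above suggests
      // that may no longer be long enough, so the duration is a parameter here.
      public InFlightUpdateWait(long waitSeconds) {
        this.waitSeconds = waitSeconds;
      }

      /** Pause before the quick sync so in-flight updates routed with stale state can finish. */
      public void await() throws InterruptedException {
        // If updates that started against stale cluster state are still completing,
        // the version comparison with the leader sees a moving target and can fail
        // spuriously, forcing an unnecessary full replication.
        TimeUnit.SECONDS.sleep(waitSeconds);
      }
    }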
[jira] [Commented] (SOLR-3126) We should try to do a quick sync on std start up recovery before trying to do a full blown replication.
[ https://issues.apache.org/jira/browse/SOLR-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13207897#comment-13207897 ]

Mark Miller commented on SOLR-3126:
------------------------------------

Whoops - I was not building the leader URL correctly - fixed. I'll commit this soon.
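The comment above only notes that the leader URL was being built incorrectly, without saying what the bug was. As a purely illustrative aside, this kind of construction usually comes down to joining the leader node's base URL with the core name without dropping or doubling the separator; the sketch below shows that kind of join and is not the actual fix from the patch.

    // Illustrative only: the comment does not say what the URL bug actually was.
    // This just shows the usual slash handling needed when joining a node's base
    // URL with a core name; names and example values are hypothetical.
    public final class LeaderUrlSketch {

      /** Join the leader's base URL and core name with exactly one '/' between them. */
      static String buildLeaderCoreUrl(String leaderBaseUrl, String coreName) {
        String base = leaderBaseUrl.endsWith("/")
            ? leaderBaseUrl.substring(0, leaderBaseUrl.length() - 1)
            : leaderBaseUrl;
        return base + "/" + coreName;
      }

      public static void main(String[] args) {
        // e.g. "http://host:8983/solr/" + "collection1" -> "http://host:8983/solr/collection1"
        System.out.println(buildLeaderCoreUrl("http://host:8983/solr/", "collection1"));
      }
    }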