[ https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803499#comment-13803499 ]
Yago Riveiro commented on SOLR-5381:
------------------------------------

I hit the ZK limit of 1 MB per znode with more than 10K collections, each with 3 shards and replicationFactor=2. I found a workaround using the -Djute.maxbuffer parameter, configured on both ZK and Solr, but ZK's documentation says that this can be unstable. I don't know whether having a clusterstate.json with so many collections degrades performance, but it is too difficult to manage. If each collection had its own clusterstate.json, migrating a collection to another cluster would be easier: you would only need to copy the collection's clusterstate and the core folders to the other cluster, and it's done. A problematic collection would then have its own resources.

> Split Clusterstate and scale
> -----------------------------
>
>                  Key: SOLR-5381
>                  URL: https://issues.apache.org/jira/browse/SOLR-5381
>              Project: Solr
>           Issue Type: Improvement
>           Components: SolrCloud
>             Reporter: Noble Paul
>             Assignee: Noble Paul
>    Original Estimate: 2,016h
>   Remaining Estimate: 2,016h
>
> clusterstate.json is a single point of contention for all components in SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes because there are too many updates and too many nodes need to be notified of the changes. As the number of nodes goes up, the size of clusterstate.json keeps growing, and it will soon exceed the limit imposed by ZK.
> The first step is to store the shard information in separate znodes so that each node can listen only to the shard znode it belongs to. We may also need to split each collection into its own znode, with clusterstate.json holding just the names of the collections.
> This is an umbrella issue.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
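The znode-size problem described above can be checked with rough arithmetic. A minimal sketch, using the numbers reported in the comment (10K collections, 3 shards, replicationFactor=2) and an assumed per-replica cost of ~100 bytes of JSON in clusterstate.json (an illustrative figure, not taken from the issue); 1 MB is ZooKeeper's default jute.maxbuffer limit:

```python
# Back-of-the-envelope estimate of clusterstate.json size.
# BYTES_PER_REPLICA is an assumption for illustration, not a measured value.
BYTES_PER_REPLICA = 100
JUTE_MAXBUFFER_DEFAULT = 1_048_576  # ZooKeeper's default znode size limit (~1 MB)

# Numbers reported in the comment above.
collections, shards, replication_factor = 10_000, 3, 2
estimated_bytes = collections * shards * replication_factor * BYTES_PER_REPLICA

print(estimated_bytes)                           # 6000000
print(estimated_bytes > JUTE_MAXBUFFER_DEFAULT)  # True
```

Even at this conservative per-replica cost, a single shared clusterstate.json is several times over the default limit, which is why raising -Djute.maxbuffer on both the ZK servers and the Solr nodes was needed as a workaround, and why per-collection state znodes would avoid the problem.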