[ https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803109#comment-13803109 ]
Mark Miller commented on SOLR-5381:
-----------------------------------

bq. ZK documentation says 1mb is the recommended limit

That's because it's kept in RAM and they want to discourage bad patterns. 1 MB has not scaled with networks and hardware, though; it's arbitrary to say 1 MB and not 3 MB (which handles thousands of nodes). 3 MB will perform just as well as 1 MB. With modern servers' RAM and network speed, this stuff flies around easily. I saw that on my 1000-node test; the UI was the main bottleneck there, since it takes a long time to render the cloud screen due to the rendering speed. We also are not constantly working with large files: in a steady state we don't pull or push large files to ZK at all; that only happens on a cluster state change. All of this makes 1 MB or 5 MB pretty irrelevant for us. You can test it out and see.

> Split Clusterstate and scale
> -----------------------------
>
>                 Key: SOLR-5381
>                 URL: https://issues.apache.org/jira/browse/SOLR-5381
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> clusterstate.json is a single point of contention for all components in
> SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes
> because there are too many updates and too many nodes need to be notified of
> the changes. As the number of nodes goes up, the size of clusterstate.json keeps
> growing, and it will soon exceed the limit imposed by ZK.
> The first step is to store the shard information in separate nodes so that each
> node can just listen to the shard node it belongs to. We may also need to
> split each collection into its own node, with clusterstate.json just
> holding the names of the collections.
> This is an umbrella issue

-- This message was sent by Atlassian JIRA (v6.1#6144)
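For context on the limit being debated above: ZooKeeper's znode size cap is governed by the `jute.maxbuffer` Java system property, which defaults to just under 1 MB and must be raised on both the ZooKeeper servers and every client JVM for larger payloads to work. A minimal sketch of raising it to the 3 MB figure Mark mentions; the exact env files (`zookeeper-env.sh`, `solr.in.sh`) are assumptions that vary by deployment:

```shell
# ZooKeeper server side (e.g. in zookeeper-env.sh):
# default jute.maxbuffer is 0xfffff bytes (just under 1 MB); 3 MB = 3145728
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=3145728"

# Client side, i.e. each Solr node's JVM (e.g. in solr.in.sh):
# the value must match on clients and servers, or reads/writes of large
# znodes will fail with "Packet len ... is out of range"
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=3145728"
```

Note that raising the buffer is a workaround, not a fix; the split proposed in this issue keeps individual znodes small regardless of cluster size.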