Hi All, Normally, when we set up a SolrCloud environment, we put a load balancer in front of the Solr nodes and use it for sending queries (via HttpSolrServer), while using CloudSolrServer (configured with the IPs of the ZooKeeper ensemble nodes) for sending indexing operations.
Recently we embarked on a project to automate the construction of SolrClouds in AWS, and the devops guys did it slightly differently. They used two LBs: one in front of the Solr nodes (as usual), but also one in front of the ZooKeeper nodes (set up as a TCP load balancer). The application was configured to use CloudSolrServer pointed at the TCP load balancer. Note: they only use that LB on the application client side; the Solr nodes themselves have zkHost set to the usual comma-separated list of ZooKeeper nodes plus chroot.

I haven't hammered on it yet, but some initial smoke testing shows it seems to work so far, if a bit unorthodox. Does anyone have any thoughts about potential pitfalls of this approach? Thanks.
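For concreteness, here is a sketch of the two client setups being compared. Hostnames, ports, and the /solr chroot are placeholders, and this assumes SolrJ 4.x, where CloudSolrServer and HttpSolrServer were the current client classes:

```java
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class ClientSetup {
    public static void main(String[] args) throws Exception {
        // Queries go through the HTTP load balancer in front of the
        // Solr nodes (hostname is a placeholder).
        HttpSolrServer queryClient =
            new HttpSolrServer("http://solr-lb.example.com:8983/solr/collection1");

        // Usual indexing setup: CloudSolrServer pointed directly at the
        // ZooKeeper ensemble -- comma-separated hosts plus chroot.
        CloudSolrServer indexClient =
            new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181/solr");

        // The devops variant: a single TCP load balancer fronting the
        // ZooKeeper ensemble. The ZK client sees one "host" and the LB
        // picks the actual ZooKeeper node behind it.
        CloudSolrServer indexClientViaLb =
            new CloudSolrServer("zk-lb.example.com:2181/solr");

        queryClient.shutdown();
        indexClient.shutdown();
        indexClientViaLb.shutdown();
    }
}
```

The only difference between the two indexing setups is the zkHost string passed to CloudSolrServer; everything else about the client is identical.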