The REBALANCELEADERS stuff was put in to deal with hundreds of leaders
winding up on a single machine in a case where extremely high
throughput was required. Until you get into pretty high scale, the
additional "work" a leader does is minimal. So unless your CPU usage is
consistently and significantly higher on the machine with all the
leaders, I wouldn't worry about it.
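For reference, on 6.x the rebalancing is a two-step Collections API
call: you mark the replica you want as the "preferredLeader" for its
shard, then ask Solr to rebalance. Something along these lines, where
the host, collection and replica names are just placeholders for your
own setup:

  # mark a replica as the preferred leader for shard1
  curl "http://solr02:8983/solr/admin/collections?action=ADDREPLICAPROP&collection=collection1&shard=shard1&replica=core_node5&property=preferredLeader&property.value=true"

  # then ask Solr to move leadership to the preferred leaders
  curl "http://solr02:8983/solr/admin/collections?action=REBALANCELEADERS&collection=collection1&maxAtOnce=10&maxWaitSeconds=60"

None of that exists on 4.10, so it only helps if/when you upgrade.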

Otherwise there isn't much you can do, I'm afraid. If you have
asymmetric replica placement, leaders will tend to end up on different
machines. You could try taking the JVM down on the machine with all the
leaders and letting leader election redistribute them, but that's not a
long-term solution.
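
If you do try bouncing that node, CLUSTERSTATUS (which is in 4.10) is a
quick way to check where the leaders actually landed afterwards.
Roughly, with the host and collection name adjusted for your cluster:

  curl "http://solr01:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=collection1&wt=json"

The replicas marked "leader":"true" under each shard are the current
leaders.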

Best,
Erick

On Tue, Jul 26, 2016 at 9:27 PM, Tim Chen <tim.c...@sbs.com.au> wrote:
> Hi Guys,
>
> I am running Solr Cloud 4.10, with a setup of 4 Solr servers and 5 ZooKeepers.
>
> Solr servers:
> solr01, solr02, solr03, solr04
>
> I have around 20 collections in Solr Cloud, and there are 4 Shards for each 
> Collection. For each Shard I have 4 Replicas, one sitting on each Solr 
> server, with one of them being the Shard Leader.
>
> The issue I am having right now is that all the Shard Leaders are pointing to 
> the same server, eg: solr01. When there are document updates, they are all 
> pushed to the Leader. I really want to distribute the Shard Leaders across all 
> 4 Solr servers.
>
> I noticed Solr 6 has a "REBALANCELEADERS" command to do that, but it is not 
> available in Solr 4.
>
> Questions:
>
> 1. Is my setup OK, with 4 Shards for each Collection and 4 Replicas for each 
> Shard? Each Solr server has a full set of documents.
> 2. To distribute the Shard Leaders across different Solr servers, can I somehow 
> shut down a single Replica that is currently a Shard Leader and force Solr to 
> elect a different Replica as the new Shard Leader?
>
> Thanks guys!
>
> Regards,
> Tim
>
>