And then delete the replica shard2 --> server1:8983 and add a replica shard2 --> server2:7574?
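A sketch of that move as two Collections API calls over HTTP (the collection name boss comes from the CREATE log quoted below; the replica name core_node3 and the node name server2:7574_solr are placeholders I made up -- the real values have to be looked up via CLUSTERSTATUS first):

```shell
# Hypothetical names: core_node3 and server2:7574_solr must be taken from a
# real CLUSTERSTATUS response before running this against an actual cluster.
SOLR="http://server1:8983/solr"

# 1) Drop the shard2 replica that lives on server1:8983
DELETE_URL="${SOLR}/admin/collections?action=DELETEREPLICA&collection=boss&shard=shard2&replica=core_node3"

# 2) Add a fresh shard2 replica on server2:7574
ADD_URL="${SOLR}/admin/collections?action=ADDREPLICA&collection=boss&shard=shard2&node=server2:7574_solr"

echo "$DELETE_URL"
echo "$ADD_URL"
# Against a live cluster you would run: curl "$ADD_URL" && curl "$DELETE_URL"
```

Note the ordering in the last comment: adding the new replica first and deleting the old one only after the new copy has recovered avoids a window where shard2 has a single copy.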
It would be nice to have some automatic logic like ES (_cluster/reroute with move).

Regards
Bernd

On 08.05.2017 at 14:16, Amrit Sarkar wrote:
> Bernd,
>
> When you create a collection via the Collections API, the internal logic
> tries its best to distribute the replicas evenly across the nodes, but
> sometimes that doesn't happen.
>
> The best thing about SolrCloud is that you can manipulate its cloud
> architecture on the fly using the Collections API. You can delete a
> replica of one particular shard and add a replica (on a specific
> machine/node) to any of the shards at any time, depending on your design.
>
> For the above, you can simply:
>
> call the DELETEREPLICA API on shard1 ---> server2:7574 (or the other one)
>
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-DELETEREPLICA:DeleteaReplica
>
> boss ------ shard1
>      |          |-- server2:8983 (leader)
>      |
>      --- shard2 ----- server1:8983
>      |          |-- server5:7575 (leader)
>      |
>      --- shard3 ----- server3:8983 (leader)
>      |          |-- server4:8983
>      |
>      --- shard4 ----- server1:7574 (leader)
>      |          |-- server4:7574
>      |
>      --- shard5 ----- server3:7574 (leader)
>                 |-- server5:8983
>
> call the ADDREPLICA API on shard1 ----> server1:8983
>
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-DELETEREPLICA:DeleteaReplica
>
> boss ------ shard1 ----- server1:8983
>      |          |-- server2:8983 (leader)
>      |
>      --- shard2 ----- server1:8983
>      |          |-- server5:7575 (leader)
>      |
>      --- shard3 ----- server3:8983 (leader)
>      |          |-- server4:8983
>      |
>      --- shard4 ----- server1:7574 (leader)
>      |          |-- server4:7574
>      |
>      --- shard5 ----- server3:7574 (leader)
>                 |-- server5:8983
>
> Hope this helps.
>
> Amrit Sarkar
> Search Engineer
> Lucidworks, Inc.
> 415-589-9269
> www.lucidworks.com
> Twitter: http://twitter.com/lucidworks
> LinkedIn: https://www.linkedin.com/in/sarkaramrit2
>
> On Mon, May 8, 2017 at 5:08 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> My assumption was that the strength of SolrCloud is the distribution
>> of leader and replica within the cloud, making the cloud somewhat
>> failsafe. But after setting up SolrCloud with a collection I have both
>> the leader and the replica of a shard on the same host. And this should
>> be failsafe?
>>
>> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params
>> replicationFactor=2&routerName=compositeId&collection.configName=boss&
>> maxShardsPerNode=1&name=boss&router.name=compositeId&action=CREATE&numShards=5
>>
>> boss ------ shard1 ----- server2:7574
>>      |          |-- server2:8983 (leader)
>>      |
>>      --- shard2 ----- server1:8983
>>      |          |-- server5:7575 (leader)
>>      |
>>      --- shard3 ----- server3:8983 (leader)
>>      |          |-- server4:8983
>>      |
>>      --- shard4 ----- server1:7574 (leader)
>>      |          |-- server4:7574
>>      |
>>      --- shard5 ----- server3:7574 (leader)
>>                 |-- server5:8983
>>
>> From my point of view, if server2 crashes then shard1 will disappear
>> and 1/5th of the index will be missing.
>>
>> What is your opinion?
>>
>> Regards
>> Bernd
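The failure-domain concern above can also be checked mechanically. A small shell/awk sketch (replica layout copied from the cloud graph in the quoted mail) that flags any shard whose replicas all share one host:

```shell
# Flag shards that keep every replica on a single host (single point of
# failure). The shard/host pairs below are the layout from the mail above.
out=$(awk -F'[ :]' '
  { hosts[$1] = hosts[$1] " " $2 }          # shard -> list of its hosts
  END {
    for (s in hosts) {
      n = split(hosts[s], h, " ")
      same = 1
      for (i = 2; i <= n; i++) if (h[i] != h[1]) same = 0
      if (same) print s " has all replicas on " h[1]
    }
  }' <<'EOF'
shard1 server2:7574
shard1 server2:8983
shard2 server1:8983
shard2 server5:7575
shard3 server3:8983
shard3 server4:8983
shard4 server1:7574
shard4 server4:7574
shard5 server3:7574
shard5 server5:8983
EOF
)
echo "$out"
# → shard1 has all replicas on server2
```

In a real setup the shard/host pairs would come from a CLUSTERSTATUS response instead of a here-doc; the script confirms that only shard1 lacks host-level redundancy.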