Thanks for the reply, Jan. I have been referring to the documentation for
SPLITSHARD in 7.2.1
<https://lucene.apache.org/solr/guide/7_2/collections-api.html#splitshard>,
which seems to be missing some important information that is present in 7.6
<https://lucene.apache.org/solr/guide/7_6/collections-api.html#splitshard>,
in particular these two points:
"When using splitMethod=rewrite (default) you must ensure that the node
running the leader of the parent shard has enough free disk space i.e.,
more than twice the index size, for the split to succeed "

"The first replicas of resulting sub-shards will always be placed on the
shard leader node"

The idea of having an entire shard (both of its replicas) end up on the
same node came across as unexpected behavior at first. Anyway, I guess I
will have to take care of the rebalancing with MOVEREPLICA following a
SPLITSHARD.
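
For anyone who finds this thread later, the follow-up would be a MOVEREPLICA
call roughly along these lines (the replica name core_node10 and the target
node localhost:7574_solr below are only placeholders from my two-node
example, not values taken from an actual cluster state):

http://localhost:8983/solr/admin/collections?action=MOVEREPLICA&collection=gettingstarted&replica=core_node10&targetNode=localhost:7574_solr

i.e., look up the co-located replica of the new sub-shard via CLUSTERSTATUS
(or the Admin UI) and move it to the node that does not already host the
other replica of that sub-shard.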

Thanks for the clarification.


On Mon, Jan 28, 2019 at 3:40 AM Jan Høydahl <jan....@cominvent.com> wrote:

> This is normal. Please read
> https://lucene.apache.org/solr/guide/7_6/collections-api.html#splitshard
> PS: Images won't make it to the list, but I don't think you need a
> screenshot here; what you describe is the default behaviour.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > On 28 Jan 2019, at 09:05, Rahul Goswami <rahul196...@gmail.com> wrote:
> >
> > Hello,
> > I am using Solr 7.2.1. I created a two-node example collection on the
> > same machine: two shards with two replicas each. I then called SPLITSHARD
> > on shard2 and expected the resulting sub-shards to have one replica on
> > each node. However, I see that for shard2_1 both replicas reside on the
> > same node. Is this valid behavior? Unless I am missing something, this
> > could potentially be fatal.
> >
> > Here's the query and the cluster state post split:
> >
> > http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=gettingstarted&shard=shard2&waitForFinalState=true
>
> >
> >
> >
> > Thanks,
> > Rahul
>
>
