With SolrCloud, all cores are collections. The collections API is just a
wrapper that calls the core API many times with one command.

Send a request to /solr/admin/cores?action=CREATE&name=core1&collection=core1&shard=1

Basically you're "creating" the shard again, after leader props have gone out.
Solr will check ZK, find a core matching that description, and then simply pull
a copy of the index from the leader of that shard.
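For example, the whole move might look something like this (a rough sketch:
hostnames and the 8983 port are placeholders for your own setup, and the
parameters are just the ones from this thread):

    # on bob, the current leader, drop the core so another replica takes over
    curl 'http://bob:8983/solr/admin/cores?action=unload&name=core1'

    # on the new machine, re-create it; Solr pulls the index from the new leader
    curl 'http://newhost:8983/solr/admin/cores?action=CREATE&name=core1&collection=core1&shard=1'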


On Feb 3, 2013, at 10:37 AM, Brett Hoerner <br...@bretthoerner.com> wrote:

> What is the inverse I'd use to re-create/load a core on another machine but
> make sure it's also "known" to SolrCloud/as a shard?
> 
> 
> On Sat, Feb 2, 2013 at 4:01 PM, Joseph Dale <joey.d...@gmail.com> wrote:
> 
>> 
>> To be more clear, let's say bob is the leader of core1. On bob do a
>> /admin/cores?action=unload&name=core1. This removes the core/shard from
>> bob, giving the other servers a chance to grab leader props.
>> 
>> -Joey
>> 
>> On Feb 2, 2013, at 11:27 AM, Brett Hoerner <br...@bretthoerner.com> wrote:
>> 
>>> Hi,
>>> 
>>> I have a 5 server cluster running 1 collection with 20 shards,
>>> replication factor of 2.
>>> 
>>> Earlier this week I had to do a rolling restart across the cluster; this
>>> worked great and the cluster stayed up the whole time. The problem is that
>>> the last node I restarted is now the leader of 0 shards, and is just
>>> holding replicas.
>>> 
>>> I've noticed this node has an abnormally high load average, while the other
>>> nodes (which have the same number of shards, but more leaders on average)
>>> are fine.
>>> 
>>> First, I'm wondering if that load could be related to being a 5x replica
>>> and 0x leader?
>>> 
>>> Second, I was wondering if I could somehow flag single shards to
>>> re-elect a leader (or force a leader) so that I could more evenly
>>> distribute how many leader shards each physical server has running?
>>> 
>>> Thanks.
>> 
>> 
