Hey Ravi - yeah, I know this is kind of confusing. The issue is that the true 
state is really two things taken together: the advertised state in 
clusterstate.json *and* whether or not the node is listed in live_nodes.

The reason this is the case is that if a node just dies, it may have left its 
last published state as *any* state. The way we know that it's no longer 
connected to ZooKeeper is by looking at live_nodes - those entries are 
ephemeral znodes and will go away if a node goes away.
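
If it helps, here's roughly what "checking both" looks like against the raw 
ZooKeeper Java client. This is just a sketch, not Solr's actual code - the 
connect string and node name are made-up examples:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class EffectiveState {
        public static void main(String[] args) throws Exception {
            // Connect string below is a made-up example.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

            // The advertised state: whatever each node last published,
            // compiled by the Overseer into clusterstate.json.
            String state = new String(
                zk.getData("/clusterstate.json", false, null), "UTF-8");

            // The liveness signal: /live_nodes children are ephemeral
            // znodes, so they vanish when a node's ZK session dies.
            List<String> live = zk.getChildren("/live_nodes", false);

            String nodeName = "127.0.0.1:8983_solr"; // hypothetical name
            if (live.contains(nodeName)) {
                System.out.println("live - trust its state in: " + state);
            } else {
                System.out.println("gone from live_nodes - treat as down, "
                    + "whatever clusterstate.json says");
            }
            zk.close();
        }
    }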

I've discussed with Sami in the past the idea of the Overseer perhaps taking a 
look when a node goes down and updating its state in clusterstate.json - not 
sure if there are some gotchas with that or not though. Right now only a node 
updates its own state, and the Overseer reads those states and compiles them 
into clusterstate.json.
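
To make the ephemeral part concrete: registering in live_nodes is basically 
just creating an ephemeral znode. Again a rough sketch with made-up names - 
real nodes also publish per-core state, which the Overseer folds in:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class RegisterLiveNode {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

            // EPHEMERAL: ZooKeeper deletes this znode automatically when the
            // session ends, so even a kill -9 drops the node from live_nodes.
            // (Assumes the /live_nodes parent path already exists.)
            zk.create("/live_nodes/127.0.0.1:8983_solr", // hypothetical name
                      new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL);

            Thread.sleep(Long.MAX_VALUE); // stay "live" until the JVM dies
        }
    }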

In any case, it's not a bug, it's expected - but at the same time it might be 
nice if this behaved a little more gracefully if possible. 

On Apr 19, 2012, at 7:11 PM, ravi wrote:

> Hi Mark, 
> 
> Thanks for your response. I did manage to get one example running with 2 Solr
> instances, and I checked that shards are created and replicated
> properly. 
> 
> The problem that I am now facing is ZooKeeper's cluster state. If I kill one
> Solr instance (which may hold one or more cores) by pressing CTRL+C,
> ZooKeeper never shows that instance as *down* and keeps on showing that
> instance as *active*.
> 
> The other instance becomes the leader for some of the shards that were
> present in the first instance, though. This suggests that ZooKeeper gets to
> know that one instance went down, but for some strange reason it's not
> updating clusterstate.json. 
> 
> Has this already been reported? Or is there something that I am missing? 
> 
> Thanks!
> Ravi
> 

- Mark Miller
lucidimagination.com
