[ https://issues.apache.org/jira/browse/SOLR-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12791633#action_12791633 ]

Mahadev Konar commented on SOLR-1277:
-------------------------------------

hi all,
 this is Mahadev from the ZooKeeper team. One of our users does something
similar to what you have been discussing in the comments above. I am not
sure how close it is to your scenario, but I'll give it a shot; feel free
to ignore my comments if they don't apply. Say you have a machine A that
is part of your cluster and runs a process P. This user tracks the status
of the machine with two znodes (ZNODE1, ZNODE2) in ZooKeeper. ZNODE1 is an
ephemeral node created by P, and ZNODE2 is a normal (persistent) node
containing P-specific data that P updates from time to time (such as the
time of the last update and the status of P - good/bad/ok). When an
application or user wants to access P on machine A, it checks that the
ephemeral node exists and reads the data in ZNODE2 to see whether P has
any problems (unrelated to ZooKeeper); the application can then decide
whether P actually needs to be marked dead. For example, if the ephemeral
node ZNODE1 is alive but ZNODE2 shows that P is in a really bad state, the
application will go ahead and mark P as dead. Hope this information is of
some help!
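
To make the pattern concrete, here is a minimal sketch against the plain
ZooKeeper (3.2.x) Java client. The znode paths, the "state + timestamp"
data format, and the ProcessStatus class with its method names are
illustrative assumptions, not anything prescribed above:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ProcessStatus {
        // Hypothetical paths for process P on machine A; the parent
        // znodes are assumed to already exist.
        static final String LIVE_NODE = "/cluster/live/machineA";     // ZNODE1
        static final String STATUS_NODE = "/cluster/status/machineA"; // ZNODE2

        // Run once by P at startup: a session-bound liveness flag plus a
        // persistent node that P keeps refreshing with health data.
        static void register(ZooKeeper zk)
                throws KeeperException, InterruptedException {
            zk.create(LIVE_NODE, new byte[0], Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL);
            if (zk.exists(STATUS_NODE, false) == null) {
                zk.create(STATUS_NODE, "ok".getBytes(), Ids.OPEN_ACL_UNSAFE,
                        CreateMode.PERSISTENT);
            }
        }

        // Run periodically by P, e.g. writes "ok 1260921600000".
        static void heartbeat(ZooKeeper zk, String state)
                throws KeeperException, InterruptedException {
            byte[] data = (state + " " + System.currentTimeMillis()).getBytes();
            zk.setData(STATUS_NODE, data, -1); // -1 = any version
        }

        // Run by a client: P is usable only if its session is alive
        // (ZNODE1 exists) and its self-reported status (ZNODE2) is healthy.
        static boolean isUsable(ZooKeeper zk)
                throws KeeperException, InterruptedException {
            if (zk.exists(LIVE_NODE, false) == null) {
                return false; // session gone: P or machine A is down
            }
            byte[] data = zk.getData(STATUS_NODE, false, new Stat());
            return new String(data).startsWith("ok");
        }
    }

The split matters because ZNODE1's lifetime is tied to P's ZooKeeper
session while ZNODE2 survives session loss, so a client can tell "P's
session died" apart from "P is up but reporting itself unhealthy".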



> Implement a Solr specific naming service (using Zookeeper)
> ----------------------------------------------------------
>
>                 Key: SOLR-1277
>                 URL: https://issues.apache.org/jira/browse/SOLR-1277
>             Project: Solr
>          Issue Type: New Feature
>    Affects Versions: 1.4
>            Reporter: Jason Rutherglen
>            Assignee: Grant Ingersoll
>            Priority: Minor
>             Fix For: 1.5
>
>         Attachments: log4j-1.2.15.jar, SOLR-1277.patch, SOLR-1277.patch, 
> SOLR-1277.patch, SOLR-1277.patch, zookeeper-3.2.1.jar
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> The goal is to give Solr server clusters self-healing attributes
> where if a server fails, indexing and searching don't stop and
> all of the partitions remain searchable. For configuration, the
> ability to centrally deploy a new configuration without servers
> going offline.
> We can start with basic failover and go from there?
> Features:
> * Automatic failover (i.e. when a server fails, clients stop
> trying to index to or search it)
> * Centralized configuration management (i.e. new solrconfig.xml
> or schema.xml propagates to a live Solr cluster; see the watch
> sketch after this quoted description)
> * Optionally allow shards of a partition to be moved to another
> server (i.e. if a server gets hot, move the hot segments out to
> cooler servers). Ideally we'd have a way to detect hot segments
> and move them seamlessly. With NRT this becomes somewhat more
> difficult but not impossible?
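
On the centralized-configuration bullet above, here is a minimal sketch
of how a live cluster might pick up a new config via a ZooKeeper data
watch. The znode path and the ConfigWatcher class are assumptions, and
the actual core-reload hook is left out:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.Watcher.Event.EventType;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ConfigWatcher implements Watcher {
        // Hypothetical znode holding the cluster-wide solrconfig.xml bytes.
        static final String CONFIG_NODE = "/solr/conf/solrconfig.xml";

        private final ZooKeeper zk;

        public ConfigWatcher(ZooKeeper zk) { this.zk = zk; }

        // Reads the config and leaves a one-shot watch; the next setData()
        // on CONFIG_NODE fires process(), which re-reads and re-watches.
        public byte[] readConfig() throws Exception {
            return zk.getData(CONFIG_NODE, this, new Stat());
        }

        public void process(WatchedEvent event) {
            if (event.getType() == EventType.NodeDataChanged) {
                try {
                    byte[] newConfig = readConfig(); // re-registers the watch
                    // hand newConfig to the core-reload logic (not shown)
                } catch (Exception e) {
                    // log and retry in a real implementation
                }
            }
        }
    }

Pushing a new config would then be a single zk.setData(CONFIG_NODE,
bytes, -1) from an admin tool, and every watching server picks it up
without going offline.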

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
