[ https://issues.apache.org/jira/browse/SOLR-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12784628#action_12784628 ]

Mark Miller commented on SOLR-1277:
-----------------------------------

bq. zookeeper component can be kept by CoreContainer.

It is in this patch.

bq. Zookeeper has a standard conf file. Why don't we use the same thing instead 
of inventing new system properties.

Huh? These properties are not what you would set in the ZooKeeper conf file. If 
we end up with enough properties, I can see moving them out to a conf file and 
specifying that, but as long as we stick to a few, my preference would be to 
avoid a conf file until we determine it makes sense to use one. These properties 
don't duplicate anything in the ZooKeeper conf file, so I'm not sure I get your 
point.

The ZooKeeper conf is for configuring the ZooKeeper quorum - that is, and should 
be, separate from what's going on in this patch, which deals with the ZooKeeper 
client. You start the quorum separately (using a ZooKeeper conf), and then Solr 
will connect to it. When starting Solr, you will want to be able to give it the 
address of the quorum to connect to.
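To make the client-side picture concrete, here is a minimal sketch of what "give Solr the quorum address at startup" could look like. The property name "zkHost" and the class are my own illustration, not something this patch specifies:

```java
/**
 * Illustrative sketch: the ZooKeeper quorum is started on its own (with its
 * own zoo.cfg), and Solr is merely a client that needs the quorum's address.
 * That address can arrive as a JVM system property, e.g.
 *   java -DzkHost=zk1:2181,zk2:2181,zk3:2181 ...
 * The property name "zkHost" here is hypothetical.
 */
public class ZkConnect {

    static final String ZK_HOST_PROP = "zkHost";

    /** Returns the quorum connect string, falling back to a local server. */
    static String quorumAddress() {
        return System.getProperty(ZK_HOST_PROP, "localhost:2181");
    }

    public static void main(String[] args) {
        // Simulate a startup flag: -DzkHost=zk1:2181,zk2:2181,zk3:2181
        System.setProperty(ZK_HOST_PROP, "zk1:2181,zk2:2181,zk3:2181");
        System.out.println(quorumAddress());
    }
}
```

The ZooKeeper client would then be handed this connect string when opening its session; nothing in zoo.cfg is duplicated, since zoo.cfg only configures the servers themselves.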

> Implement a Solr specific naming service (using Zookeeper)
> ----------------------------------------------------------
>
>                 Key: SOLR-1277
>                 URL: https://issues.apache.org/jira/browse/SOLR-1277
>             Project: Solr
>          Issue Type: New Feature
>    Affects Versions: 1.4
>            Reporter: Jason Rutherglen
>            Assignee: Grant Ingersoll
>            Priority: Minor
>             Fix For: 1.5
>
>         Attachments: log4j-1.2.15.jar, SOLR-1277.patch, SOLR-1277.patch, 
> zookeeper-3.2.1.jar
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> The goal is to give Solr server clusters self-healing attributes
> where if a server fails, indexing and searching don't stop and
> all of the partitions remain searchable. For configuration, the
> ability to centrally deploy a new configuration without servers
> going offline.
> We can start with basic failover and go from there?
> Features:
> * Automatic failover (i.e. when a server fails, clients stop
> trying to index to or search it)
> * Centralized configuration management (i.e. new solrconfig.xml
> or schema.xml propagates to a live Solr cluster)
> * Optionally allow shards of a partition to be moved to another
> server (i.e. if a server gets hot, move the hot segments out to
> cooler servers). Ideally we'd have a way to detect hot segments
> and move them seamlessly. With NRT this becomes somewhat more
> difficult but not impossible?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.