On 1/9/2013 8:54 PM, Mark Miller wrote:
I'd put everything into one. You can upload different named sets of config 
files and point collections either to the same sets or different sets.

You can really think about it the same way you would setting up a single node 
with multiple cores. The main difference is that it's easier to share sets of 
config files across collections if you want to. You don't need to at all though.

I'm not sure if XInclude works with ZooKeeper, but I don't think it does.

Thank you for your assistance. I'll work on recombining my solrconfig.xml. Are there any available full examples of how to set up and start both ZooKeeper and Solr? I'll be using the included Jetty 8.
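
In case it helps to see what I've pieced together so far, this is roughly the startup sequence I'm planning. The paths, hostnames, and the /solr chroot are just placeholders from my notes, so please correct anything that's wrong:

# on each ZooKeeper machine, start the external ensemble
/opt/zookeeper/bin/zkServer.sh start

# on each Solr machine, start Solr under the bundled Jetty 8,
# pointing it at the ensemble with zkHost
cd /opt/solr-4.1.0/example
java -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr -jar start.jar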

Specific questions that have come to mind:

If I'm planning multiple collections with their own configs, do I still need to bootstrap ZooKeeper when I start Solr, or should I start Solr with just the zkHost parameter and then upload the configs and create the collections through the collection admin? I have not looked closely at the collection admin yet; I just know that it exists. A rough sketch of what I think you mean is below.
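
To make the question concrete, this is the non-bootstrap approach I think I'd be using. The config name, collection name, shard/replica counts, and ZooKeeper hosts here are made up:

# upload a named set of config files to ZooKeeper
cd /opt/solr-4.1.0/example/cloud-scripts
./zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181/solr \
  -cmd upconfig -confdir /opt/configs/mainconf -confname mainconf

# create a collection that points at that named config set
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=main&numShards=1&replicationFactor=2&collection.configName=mainconf'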

I have heard that if a replica is down long enough that the transaction log isn't sufficient to catch it up, SolrCloud will fall back to a full index replication. Is that the case? If so, do I need to configure the replication handler with a specific name or path, or does SolrCloud handle that itself?
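
Right now my guess is that a bare handler definition like this in solrconfig.xml is all that's needed, with none of the master/slave sections filled in, but I'd like to confirm:

<!-- defined so SolrCloud can use it for index recovery; no master/slave config -->
<requestHandler name="/replication" class="solr.ReplicationHandler" />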

Is there an option on updateLog that controls how many transactions are kept, or is that managed automatically by SolrCloud? I have read a few things that mention 100 updates. Updates on this index will be extremely frequent and small, so 100 updates isn't much, and I may want to increase it.
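
For reference, this is the stock updateLog definition from the example solrconfig.xml that I'm starting from; I don't see anything in it that looks like a record count:

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>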

Is it expected that with future versions of Solr I could upgrade one of my nodes to 4.2 or 4.3 and have it work with the other node still on 4.1? I would also hope that the last 4.x release would work with 5.0. That would make it possible to do rolling upgrades with no downtime.

Thanks,
Shawn
