You can also turn off automatic replication polling and just manually issue a
'replicate' command to the slave exactly when you want, without relying on it
being triggered by an optimize or anything else. (Well, probably not
'manually'; more likely some custom update process you run, which issues the
'replicate' command to the slave whenever it's appropriate for your strategy.)
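For example (just a sketch; the host, port, and handler path are placeholders
for whatever your setup actually uses), leave pollInterval out of the slave
config, or disable polling at runtime, and have your update process trigger
the pull itself by hitting the slave's ReplicationHandler:

  # one-off pull, issued whenever your process decides the slave should refresh
  curl 'http://slave-host:8983/solr/replication?command=fetchindex'

  # polling can also be switched off/on at runtime rather than in solrconfig.xml
  curl 'http://slave-host:8983/solr/replication?command=disablepoll'
  curl 'http://slave-host:8983/solr/replication?command=enablepoll'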

That's useful if you want to replicate without an optimize, but not on every
commit. (An optimize will result in more files being 'new' for replication,
possibly all of them, whereas a replication without an optimize, when most of
the index stays the same and only a few documents have been added or updated,
will only pull a few new files.) It's also useful if you want to replicate
after an optimize, but not after EVERY optimize.
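For reference, the automatic triggers live on the master side of
solrconfig.xml; a rough sketch, assuming the stock ReplicationHandler:

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <!-- publish a new index version after commits and/or optimizes -->
      <str name="replicateAfter">commit</str>
      <str name="replicateAfter">optimize</str>
    </lst>
  </requestHandler>

Neither trigger by itself gives you "after some optimizes but not every
optimize", which is where issuing the pull yourself comes in.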

Or, of course, you could just set the slave's replication poll interval to
some high value, like an hour, so it will only replicate once an hour no
matter how many commits happen in between.
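In the slave's solrconfig.xml that would look something like this (again a
sketch; the master URL is a placeholder, and pollInterval is HH:MM:SS):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://master-host:8983/solr/replication</str>
      <!-- check the master at most once an hour, no matter how often it commits -->
      <str name="pollInterval">01:00:00</str>
    </lst>
  </requestHandler>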

There are trade-offs either way in flexibility/control and in performance. As
far as performance goes, you may just have to measure in your own actual
context, as much of a pain as that can be; there seem to be a lot of
significant variables.
________________________________________
From: kenf_nc [ken.fos...@realestate.com]
Sent: Tuesday, May 10, 2011 4:01 PM
To: solr-user@lucene.apache.org
Subject: Re: how to do offline adding/updating index

Master/slave replication does this out of the box, easily. Just set the slave
to update on Optimize only. Then you can update the master as much as you
want. When you are ready to update the slave (the search instance), just
optimize the master. On the slave's next cycle check it will refresh itself,
quickly, efficiently, minimal impact to search performance. No need to build
extra moving parts for swapping search servers or anything like that.

