One way to handle this (presuming SolrCloud) is collection aliasing.
You create two collections, c1 and c2, and two aliases: when you
start, "index" is aliased to c1 and "search" is aliased to c2. Now do
your full import to "index" (and, BTW, you'd be well advised to do at
least a hard commit with openSearcher=false during that time or you
risk replaying all the docs in the tlog).
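
If it helps, here's a rough SolrJ sketch of that initial setup. It's only a
sketch, not your setup: the config set name, host, and shard/replica counts
are placeholders, and these are the same Collections API calls you could hit
over HTTP with action=CREATE and action=CREATEALIAS. The hard commit with
openSearcher=false isn't set here; that lives in solrconfig.xml's
autoCommit block.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class AliasSetup {
      public static void main(String[] args) throws Exception {
        // Any node in the cluster will do for Collections API calls.
        SolrClient client =
            new HttpSolrClient.Builder("http://localhost:8983/solr").build();

        // Two physical collections ("conf1" is a placeholder config set).
        CollectionAdminRequest.createCollection("c1", "conf1", 1, 1).process(client);
        CollectionAdminRequest.createCollection("c2", "conf1", 1, 1).process(client);

        // Two aliases: to start, "index" -> c1 and "search" -> c2.
        CollectionAdminRequest.createAlias("index", "c1").process(client);
        CollectionAdminRequest.createAlias("search", "c2").process(client);

        client.close();
      }
    }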

When the full import is done, switch the aliases so "search" points to c1 and
"index" points to c2. Rinse. Repeat. Your client apps always use the same alias;
the alias switching makes it transparent whether c1 or c2 is being used.
By that I mean your user-facing app uses "search" and your indexing client
uses "index".

You can now send your live updates to the "search" alias, which has a soft
commit configured.
Of course you have to have some mechanism for replaying all the live updates
that came in while the full import was running; those need to go into the
"index" alias before you switch, but you say you have that handled.
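
For the live updates, something like this (again just a sketch: the field
names and the commitWithin value are made up, and it assumes visibility comes
from your autoSoftCommit settings or commitWithin rather than explicit
commits). Same client as above, plus the UpdateRequest and SolrInputDocument
imports:

    // Live updates go through the "search" alias, so they always land in
    // whichever collection is currently serving queries.
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "12345");     // placeholder id
    doc.addField("price", 19.99);    // placeholder field

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    req.setCommitWithin(5000);       // example value; soft commit handles visibility
    req.process(client, "search");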

Best,
Erick

On Fri, Mar 3, 2017 at 9:22 AM, Alexandre Rafalovitch
<arafa...@gmail.com> wrote:
> On 3 March 2017 at 12:17, Sales <i...@smallbusinessconsultingexperts.com> 
> wrote:
>> When we enabled those, during the index, the data disappeared since it kept 
>> soft committing during the import process,
>
> This part does not quite make sense. Could you expand on this "data
> disappeared" part so we can understand what the issue is?
>
> The main issue with "update" is that all fields (apart from pure
> copyField destinations) need to be stored, so the document can be
> reconstructed, updated, re-indexed. Perhaps you have something strange
> happening around that?
>
> Regards,
>    Alex.
>
> ----
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
