You really only have a few options:
1> set up a Solr instance on some backup machine
     and trigger a replication manually (i.e. by
     issuing an HTTP request) whenever you want
     one (see:
     http://wiki.apache.org/solr/SolrReplication#HTTP_API
     and the first sketch below).
2> suspend indexing and just copy your data/index
     directory somewhere (actually, I'd copy the
     entire data directory and its subdirectories;
     see the second sketch below).
3> keep the original input around somewhere so you
     can re-index from scratch (the third sketch
     below shows the SolrJ side). Note that this
     is probably better than storing all your
     fields, because in the unlikely event your
     index does get corrupted you have *all* the
     original data around, and if you later wanted
     to, for instance, change your schema, you
     could re-index from the original data, which
     is more robust.
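
For 1>, triggering a backup is a single HTTP request to the
ReplicationHandler. Here's a minimal Java sketch, assuming a default
single-core setup at localhost:8983 (adjust host/port/path to your
install):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerBackup {
    public static void main(String[] args) throws Exception {
        // command=backup snapshots the current index; on a slave you
        // could use command=fetchindex to pull from the master instead.
        URL url = new URL(
            "http://localhost:8983/solr/replication?command=backup");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream()));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line); // prints the XML status response
        }
        in.close();
    }
}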
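
For 2>, the copy itself is just a recursive directory copy. A sketch
using commons-io's FileUtils (assuming commons-io is on your classpath;
the paths are placeholders for your own layout):

import java.io.File;
import org.apache.commons.io.FileUtils;

public class CopyIndex {
    public static void main(String[] args) throws Exception {
        // Make sure indexing is suspended before copying, or the
        // snapshot may be inconsistent.
        FileUtils.copyDirectory(
            new File("/path/to/solr/data"),    // the whole data dir
            new File("/path/to/backup/data"));
    }
}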
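
And for 3>, the re-indexing application is only a few lines of SolrJ.
A sketch where loadOriginalData() is a hypothetical stand-in for
however you stored the source records:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class Reindex {
    public static void main(String[] args) throws Exception {
        SolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");
        for (String[] rec : loadOriginalData()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", rec[0]);
            doc.addField("text", rec[1]);
            server.add(doc);
        }
        server.commit();
    }

    // Hypothetical: read the original records from files, a database,
    // or wherever you kept them.
    private static List<String[]> loadOriginalData() {
        List<String[]> records = new ArrayList<String[]>();
        records.add(new String[] {"1", "hello world"});
        return records;
    }
}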

Best
Erick

P.S. You are *too* using Lucene <G>. The search
engine that comes with Solr is exactly the
corresponding release of Lucene; you just don't
use the Lucene API directly, but Solr does.

On Wed, Oct 5, 2011 at 2:57 PM, Luis Cappa Banda <luisca...@gmail.com> wrote:
> Hello, Andrzej.
>
> First of all, thanks for your help. The thing is that I'm not using Lucene:
> I'm using Solr to index (well, I know that it involves Lucene). I know about
> Solr replication, but the index is being modified in real time, with new
> documents added as new requests come in. To summarize: from the batch
> indexing we load a Solr index, but then the index is updated with new
> documents. That's the reason we need a daily backup to guard against
> corruption. Any other solution? I thought about setting all fields to
> stored=true and developing an application with SolrJ that reindexes, but
> I don't like configuring all the fields as stored=true...
>
> Thanks.
>
