Hi,

Do you really need 5 replicas? 1 or 2 should be enough.

Since you already have 100 million records, you don't need a separate batch
indexing path. Push the documents as they arrive; Solr can be configured to
soft commit automatically after every N documents (or every N milliseconds).
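As a sketch, the automatic soft commit is configured in solrconfig.xml; the thresholds below are illustrative values, not taken from this thread, so tune them for your load:

```xml
<!-- solrconfig.xml (illustrative values) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- soft commit: makes newly pushed documents visible to searchers quickly -->
  <autoSoftCommit>
    <maxDocs>10000</maxDocs>  <!-- soft commit after every N docs... -->
    <maxTime>1000</maxTime>   <!-- ...or after 1000 ms, whichever comes first -->
  </autoSoftCommit>
  <!-- hard commit: flushes to stable storage less frequently -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

A short soft-commit interval keeps push indexing near real time, while the less frequent hard commit limits the I/O cost.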

Use round robin to send documents to different cores. When you search,
query across all the cores.
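The round-robin distribution can be sketched in plain Java; the core names and count here are assumptions for illustration, not fixed by the thread:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Minimal round-robin selector: each indexed document is routed to the
// next core in the list, cycling back to the first after the last.
class CoreRoundRobin {
    private final List<String> cores;
    private final AtomicLong counter = new AtomicLong();

    CoreRoundRobin(List<String> cores) {
        this.cores = cores;
    }

    // Returns the core that should receive the next document.
    String nextCore() {
        return cores.get((int) (counter.getAndIncrement() % cores.size()));
    }
}
```

On the search side, a distributed query can then fan out to every core with the standard `shards=host1:8983/solr/core1,host2:8983/solr/core2,...` request parameter.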

How do you want to set up your servers: master/slave or failover? With
master/slave, index documents on the master and search from the replica
(slave) cores. With failover, the replica is only used once your main
server has failed.
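For the master/slave option, a sketch of the classic replication handler configuration in solrconfig.xml (the host name and poll interval are placeholders, not values from this thread):

```xml
<!-- On the master: publish the index after each commit -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- On each slave: poll the master and pull new index versions -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core1/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

With this split, heavy batch indexing on the master does not block searches served from the slaves.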

Regards
Aditya
www.findbestopensource.com

On Tue, Jul 30, 2013 at 4:56 AM, SolrLover <bbar...@gmail.com> wrote:

> I need some advice on the best way to implement Batch indexing with soft
> commit / Push indexing (via queue) with soft commit when using SolrCloud.
>
> *I am trying to figure out a way to:*
> 1. Make the push indexing available almost real time (using soft commit)
> without degrading the search / indexing performance.
> 2. Ability to not overwrite the existing document (based on listing_id, I
> assume I can use overwrite=false flag to disable overwrite).
> 3. Not block the push indexing when delta indexing happens (push indexing
> happens via UI, user should be able to search for the document pushed via
> UI
> almost instantaneously). Delta processing might take more time to complete
> indexing and I don't want the queue to wait until the batch processing is
> complete.
> 4. Copy the updated collection for backup.
>
> *More information on setup:*
> We have 100 million records (around 6 stored fields / 12 indexed fields).
> We are planning to have 5 cores (each with 20 million documents) with 5
> replicas.
> We will be always doing delta batch indexing.
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-Cloud-How-to-balance-Batch-and-Queue-indexing-tp4081169.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>