Thanks Otis. commitWithin will definitely work for me (as I am
currently on version 3.4, which doesn't have NRT yet).

Assuming that I use commitWithin=10secs, are you saying that the
continuous deletes (without commits) won't have any effect on
performance?
I was under the impression that deletes just mark the doc-ids as
deleted (which essentially means the index size stays the same), but
don't actually compact the index until someone calls optimize/commit.
Is my assumption not true?
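
For context, the shape of what I have in mind is roughly this (a
minimal SolrJ sketch, not my actual code: the core URL, the
id/timestamp_l fields, and the delete query are placeholders, and I'm
assuming UpdateRequest.setCommitWithin is usable in 3.4):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.request.UpdateRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitWithinSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder core URL
            SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

            // Add one doc and let Solr commit within 10s instead of
            // issuing an explicit commit on every add
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-123");                 // placeholder
            doc.addField("timestamp_l", System.currentTimeMillis());

            UpdateRequest add = new UpdateRequest();
            add.add(doc);
            add.setCommitWithin(10000);                    // 10 secs
            add.process(server);

            // Delete docs older than 6 hours with NO explicit commit;
            // they are only marked deleted and disappear from search
            // results at the next commit (e.g. the one triggered by
            // an add's commitWithin above)
            long cutoff = System.currentTimeMillis() - 6L * 3600 * 1000;
            UpdateRequest del = new UpdateRequest();
            del.deleteByQuery("timestamp_l:[* TO " + cutoff + "]");
            del.process(server);
        }
    }

The point being that only the adds carry commitWithin; the deletes
never commit on their own and just ride along with whatever commit
happens next.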

-Thanks,
Prasenjit

On Mon, Feb 6, 2012 at 1:13 PM, Otis Gospodnetic
<otis_gospodne...@yahoo.com> wrote:
> Hi Prasenjit,
>
> It sounds like at this point your main enemy might be those per-doc-add
> commits.  Don't commit until you need to see your new docs in results.  And
> if you need NRT, use the softCommit option with Solr trunk
> (http://search-lucene.com/?q=softcommit&fc_project=Solr) or use commitWithin
> to limit the commit's "performance damage".
>
>
>  Otis
>
> ----
> Performance Monitoring SaaS for Solr - 
> http://sematext.com/spm/solr-performance-monitoring/index.html
>
>
>
>>________________________________
>> From: prasenjit mukherjee <prasen....@gmail.com>
>>To: solr-user <solr-user@lucene.apache.org>
>>Sent: Monday, February 6, 2012 1:17 AM
>>Subject: effect of continuous deletes on index's read performance
>>
>>I have a use case where documents are continuously added at 20 docs/sec
>>(each doc add also does a commit) and docs are continuously deleted at
>>the same rate, so the searchable index size stays roughly the same:
>>~400K docs (docs for the last 6 hours ~ 20*3600*6).
>>
>>Will there be pauses when the deletes trigger compaction, or on every
>>commit (during adds)? How badly will they affect search response time?
>>
>>-Thanks,
>>Prasenjit
>>