Re: commit is taking 1300 ms

2016-09-02 Thread Pushkar Raste
It would be worth looking into the iostat statistics of your disks.
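As a rough illustration, disk latency and utilization can be watched with iostat while indexing and commits are running. This is only a sketch; it assumes a Linux host with the sysstat package installed, and the device name is a placeholder.

    # extended per-device statistics (await, %util) in MB/s, refreshed every 5 seconds
    iostat -xm 5
    # restrict the output to the device holding the Solr index, e.g. sda
    iostat -xm 5 sda

High await or %util during commits would point at the disks rather than at Solr itself.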

Re: commit is taking 1300 ms

2016-08-22 Thread Alessandro Benedetti
I agree with the suggestions so far. The cache auto-warming doesn't seem to be the problem, as the index is not massive and the auto-warm is for only 10 docs. Are you using any warming query for the new searcher? Are you using soft or hard commits? This can make the difference (soft commits are much cheaper,
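To make the soft/hard distinction concrete, both kinds of commit can be issued explicitly against the update handler. A sketch only; the host, port and core name (mycore) are placeholders, not taken from this thread.

    # hard commit: flushes segments to disk and, by default, opens a new searcher
    curl 'http://localhost:8983/solr/mycore/update?commit=true'
    # soft commit: makes new documents searchable without an fsync, much cheaper
    curl 'http://localhost:8983/solr/mycore/update?softCommit=true'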

Re: commit is taking 1300 ms

2016-08-12 Thread Esther-Melaine Quansah
Midas, I’d like further clarification as well. Are you sending commits along with each document that you’re POSTing to Solr? If so, you’re essentially either opening a new searcher or flushing to disk with each POST, which could explain the latency of each request. Thanks, Esther
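A sketch of the pattern Esther is asking about, next to an alternative that leaves commit timing to Solr; the URL, field names and the 15000 ms window are illustrative assumptions.

    # commit attached to every single-document POST: each request pays for a flush/searcher
    curl 'http://localhost:8983/solr/mycore/update?commit=true' \
         -H 'Content-Type: application/json' \
         --data-binary '[{"id":"1","title":"doc one"}]'

    # same POST, but asking Solr to commit within 15 seconds so commits are folded together
    curl 'http://localhost:8983/solr/mycore/update?commitWithin=15000' \
         -H 'Content-Type: application/json' \
         --data-binary '[{"id":"1","title":"doc one"}]'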

Re: commit is taking 1300 ms

2016-08-11 Thread Erick Erickson
bq: we post json documents through curl and it takes the time (at the same time i would like to say that we are not hard committing). that curl takes time, i.e. 1.3 sec. OK, I'm really confused. _what_ is taking 1.3 seconds? When you said commit, I was thinking of Solr's commit operation, which is

Re: commit is taking 1300 ms

2016-08-11 Thread Emir Arnautovic
Hi Midas, 1. How many indexing threads? 2. Do you batch documents, and what is your batch size? 3. How frequently do you commit? I would recommend: 1. Move commits to Solr (set auto soft commit to the max allowed time) 2. Use batches (bulks) 3. Tune bulk size and number of threads to achieve max
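For the batching recommendation, a minimal sketch of sending documents in bulks with no commit parameter at all, leaving visibility to autoSoftCommit and durability to autoCommit; the endpoint and document fields are placeholders.

    # one request carrying a bulk of documents; no commit parameter is sent
    curl 'http://localhost:8983/solr/mycore/update' \
         -H 'Content-Type: application/json' \
         --data-binary '[{"id":"1"},{"id":"2"},{"id":"3"},{"id":"4"}]'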

Re: commit is taking 1300 ms

2016-08-11 Thread Midas A
Emir, other queries: a) Solr cloud: NO b) c) d) e) we are using a multi-threaded system.

Re: commit is taking 1300 ms

2016-08-11 Thread Midas A
Emir, we post JSON documents through curl and it takes the time (at the same time, I would like to say that we are not hard committing). That curl request takes time, i.e. 1.3 sec.
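One way to see where that 1.3 sec goes is curl's own timing variables; a sketch, with the endpoint and payload as placeholders.

    # break the request down into connection, first-byte and total time
    curl -o /dev/null -s \
         -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
         -H 'Content-Type: application/json' \
         --data-binary '[{"id":"1"}]' \
         'http://localhost:8983/solr/mycore/update?commit=true'

If time_total is dominated by time_starttransfer, the time is being spent inside Solr rather than on the network.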

Re: commit is taking 1300 ms

2016-08-10 Thread Emir Arnautovic
Hi Midas, according to your autocommit configuration and your worry about commit time, I assume that you are doing explicit commits from client code and that 1.3s is the client-observed commit time. If that is the case, then it might be the opening of a new searcher that is taking the time. How do you index data
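To check whether opening the searcher is the expensive part, the explicit commit can be timed on its own, with and without a new searcher. A sketch; it assumes the core name and that your Solr version honours openSearcher as an update request parameter.

    # time an explicit hard commit by itself
    time curl -s 'http://localhost:8983/solr/mycore/update?commit=true'
    # the same commit without opening a new searcher, for comparison
    time curl -s 'http://localhost:8983/solr/mycore/update?commit=true&openSearcher=false'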

Re: commit is taking 1300 ms

2016-08-09 Thread Midas A
Thanks for replying. Index size: 9GB, 2000 docs/sec. Actually, earlier it was taking less but suddenly it has increased. Currently we do not have any monitoring tool.

Re: commit is taking 1300 ms

2016-08-09 Thread Emir Arnautovic
Hi Midas, can you give us more details on your index: size, number of new docs between commits? Why do you think 1.3s for a commit is too much, and why do you need it to take less? Did you do any system/Solr monitoring? Emir
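In the absence of a dedicated monitoring tool, Solr's stats endpoint can give a first picture of commit and searcher activity; a minimal sketch, with host, port and core name (mycore) as placeholders.

    # dump core-level statistics: update handler commit counts, searcher details, cache stats
    curl 'http://localhost:8983/solr/mycore/admin/mbeans?stats=true&wt=json'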

Re: commit is taking 1300 ms

2016-08-09 Thread Midas A
Please reply, it is urgent.

commit is taking 1300 ms

2016-08-08 Thread Midas A
Hi, the commit is taking more than 1300 ms. What should I check on the server? Below is my configuration.

    <autoCommit>
      <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <autoSoftCommit>
      <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
    </autoSoftCommit>