I have the same experience. I have a 6.5 GB index and update it daily.
Have you ever checked the case where the update contains no documents and
then tried a "commit"? I don't know why, but even then it takes a long
time, more than 10 minutes.

Jae Joo

-----Original Message-----
From: Ken Krugler [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 12, 2008 10:34 AM
To: solr-user@lucene.apache.org
Subject: Re: Commit performance problem

>I have a large Solr index that is currently about 6 GB and is suffering
>from severe performance problems during updates. A commit can take over
>10 minutes to complete. I have tried increasing the JVM's max memory to
>over 6 GB, but without any improvement. I have also tried turning off
>waitSearcher and waitFlush, which does significantly improve the commit
>speed. However, the max number of searchers is then quickly reached.

If you have a large index, then I'd recommend having a separate Solr 
installation that you use to update/commit changes, after which you 
use snappuller or equivalent to swap it in to the live (search) 
system.
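As a sketch of that setup: Solr's rsync-based collection distribution scripts (snapshooter on the master, snappuller/snapinstaller on the search nodes) can be wired in via solrconfig.xml on the indexing master, so each commit produces a snapshot for the live nodes to pull. The directory value below is illustrative and depends on your install layout:

```xml
<!-- On the dedicated indexing master: take a snapshot after every commit.
     snapshooter is one of Solr's collection distribution scripts; the
     "dir" value here is an example path, adjust it for your installation. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```

The search nodes then run snappuller and snapinstaller (typically from cron) to fetch the latest snapshot and open a fresh searcher, so the live system never pays the commit cost directly.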

>Would a switch to another container (currently using Jetty) make any
>difference?

Very unlikely.

>Does anyone have any other tip for improving the performance?

Switch to Lucene 2.3, and tune the new parameters that control memory 
usage during updating.
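The main new knob in Lucene 2.3 is flushing buffered documents by RAM usage (IndexWriter's setRAMBufferSizeMB) instead of by a fixed document count. In a Solr build based on Lucene 2.3 this should be exposed in solrconfig.xml, roughly like this (the 64 MB figure is only an illustration, tune it to your heap):

```xml
<indexDefaults>
  <!-- Flush buffered documents once they occupy ~64 MB of RAM,
       rather than flushing every maxBufferedDocs documents.
       The value is an example; larger buffers generally mean
       fewer, bigger flushes during heavy updating. -->
  <ramBufferSizeMB>64</ramBufferSizeMB>
</indexDefaults>
```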

-- Ken
-- 
Ken Krugler
Krugle, Inc.
+1 530-210-6378
"If you can't find it, you can't fix it"
