On Fri, Jun 12, 2009 at 11:12 AM, Newman, Billy <billy.new...@itt.com> wrote:
> I know this has been covered a number of times before, but I am still confused.
>
> I am using all the default values for IndexWriter when writing my index.
>
> I loop over all my documents 1000 at a time.  For each 1000 I open an index
> writer, write each document, optimize the index, then close the index
> writer.  The problem comes about with the second 1000.  Somewhere in
> writing the second 1000 I start to get a "Too many open files" exception.
> What I don't understand is why I can write the first 1000 with no problem,
> optimize and close the writer, and then have a problem the next time around.
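
In code, the loop described above is roughly the following (a sketch
against the Lucene 2.4-era API; the index path, analyzer, and the batch
argument are illustrative, not from the original mail):

    import java.util.List;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    void indexBatch(List<Document> batch) throws Exception {
        // Open a writer on the index directory (appends if the index
        // already exists), add the 1000 documents, optimize, close.
        IndexWriter writer = new IndexWriter(
            FSDirectory.getDirectory("/path/to/index"),
            new StandardAnalyzer(),
            IndexWriter.MaxFieldLength.UNLIMITED);
        try {
            for (Document doc : batch) {
                writer.addDocument(doc);
            }
            writer.optimize();
        } finally {
            writer.close();
        }
    }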


At some point, adding the second batch probably forces a merge with
the first big segment you created.  Merges are now done in the
background so you can keep adding documents.  Bigger merges take
longer, and the added concurrency means more file descriptors are
open at once.
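
One way to confirm this (a sketch, assuming the 2.4-era API) is to
swap the default ConcurrentMergeScheduler for a SerialMergeScheduler
on the writer above, so merges run one at a time in the calling
thread and only one merge's worth of files is open at once:

    import org.apache.lucene.index.SerialMergeScheduler;

    // Run merges in the foreground, serially; indexing is slower,
    // but far fewer files are open simultaneously.
    writer.setMergeScheduler(new SerialMergeScheduler());

ConcurrentMergeScheduler.setMaxThreadCount() is a middle ground: it
keeps background merging but caps how many merges run concurrently.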

> As an aside, I am running Red Hat, and I run 'limit descriptor' and see
> that it is set to 1024.  Does anyone have any idea if this is too low, or
> if there is a recommended value?

That's low for the default merge factor: lower the merge factor,
increase your descriptor limit (a lot), or use the compound file format.
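
The first and third fixes are one-liners on the writer (again the
2.4-era methods; later releases move these onto LogMergePolicy):

    // Default mergeFactor is 10; a lower value merges fewer segments
    // at once, so fewer files are open during a merge.
    writer.setMergeFactor(5);

    // Pack each segment into a single .cfs file instead of one file
    // per segment component.  This is on by default, so it only
    // matters if you disabled it for indexing speed.
    writer.setUseCompoundFile(true);

Raising the descriptor limit is an OS-level change: e.g. 'ulimit -n 8192'
in the shell that launches the JVM, or an entry in
/etc/security/limits.conf to make it permanent.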

-Yonik
http://www.lucidimagination.com
