I am getting a "Too Many Open Files" exception.  I've read the FAQ
about lowering the merge factor (mine is currently set to 25), raising
the limit with ulimit -n <number>, etc., but I am still getting the
exception (and yes, I'm making sure I close all writers/searchers/
readers, and I only have one of each open at a time).
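
In case it clarifies what I mean by "closing everything", here is a
minimal sketch of the pattern my search code follows (the index path
and query are invented for illustration, and it assumes the old
String-path IndexSearcher constructor):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;

    public class SearchOnce {
        public static void main(String[] args) throws Exception {
            IndexSearcher searcher = null;
            try {
                searcher = new IndexSearcher("/indexes/customers"); // invented path
                Hits hits = searcher.search(new TermQuery(new Term("name", "smith")));
                System.out.println("hits: " + hits.length());
            } finally {
                // Always close, even on exceptions, so the underlying
                // reader releases its file handles.
                if (searcher != null) searcher.close();
            }
        }
    }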

Here is my situation:

I have one application dedicated to building the index in one JVM
(server 1), and several other applications doing the searching in
another JVM (server 2).  On server 1, maxMergeDocs is set to 999999,
maxBufferedDocs to 3000, and the merge factor to 25.  Server 2 is all
defaults.  I am getting the "Too Many Open Files" exception on server
2.  The index in question has 576,239 documents, 48 fields, and
6,521,618 terms (actually, this just turned into a two-part question).
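
For reference, the writer on server 1 is configured roughly like this
(a simplified sketch assuming the setter-style IndexWriter API; the
index path is invented):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class BuildIndex {
        public static void main(String[] args) throws Exception {
            // false == open the existing index rather than create a new one
            IndexWriter writer = new IndexWriter("/indexes/products",
                                                 new StandardAnalyzer(), false);
            writer.setMaxMergeDocs(999999);  // cap on docs per merged segment
            writer.setMaxBufferedDocs(3000); // docs buffered in RAM before a segment is flushed
            writer.setMergeFactor(25);       // segments merged at once; higher = more open files
            // ... add/update documents here ...
            writer.close();
        }
    }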

The first part: why am I getting the "Too Many Open Files" exception?

The second part: I have a total of 24 different indices.  Each index
is dedicated to one specific part of the application (e.g., an index
for customers, an index for products, an index for media locations).
Each index has its own searcher/reader (and only one searcher/reader
per index).  After server 1 finishes its scheduled job to update an
index, it notifies server 2 to renew the appropriate searcher/reader.
The question is: should (or could) I merge all 24 indices into one big
index?  What is the optimal thing to do?  (The indices range from 5MB
to 900MB; most of them are 30MB-50MB.)
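
The alternative to merging that I have been considering is leaving the
indices separate and searching them together through MultiSearcher,
roughly like this sketch (paths invented, and in reality there would
be 24 entries):

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.MultiSearcher;
    import org.apache.lucene.search.Searchable;

    public class CombinedSearch {
        public static void main(String[] args) throws Exception {
            Searchable[] parts = new Searchable[] {
                new IndexSearcher("/indexes/customers"),
                new IndexSearcher("/indexes/products"),
                new IndexSearcher("/indexes/media")
            };
            MultiSearcher all = new MultiSearcher(parts);
            // ... search 'all' like a normal Searcher ...
            // Note: every underlying index stays open, so this combines
            // results but does not reduce the number of open files.
            all.close(); // also closes the wrapped searchers
        }
    }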

I have two indices that are "large", one being 700MB and the other
900MB.  Optimizing these two takes a long time, so I only optimize
them once a day (the last scheduled job of the night).  The 700MB
index is scheduled to update every 4 hours, while the 900MB index is
scheduled to update every 30 minutes.  Should I optimize more often
throughout the day (the volume of changes varies)?
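
For what it's worth, the nightly optimize job amounts to this (a
sketch; the path is invented):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class NightlyOptimize {
        public static void main(String[] args) throws Exception {
            IndexWriter writer = new IndexWriter("/indexes/big",
                                                 new StandardAnalyzer(), false);
            writer.optimize(); // merge all segments into one
            writer.close();
        }
    }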

I know this email is all over the place... so any feedback is
appreciated.

Thanks,

Van