If you are on Linux, the limit on open file handles for a session is much
lower than the limit for the whole machine.  "ulimit -n" will tell you the
current value.  There are instructions on the web for changing this setting;
it involves editing /etc/security/limits.conf and setting the values for
"nofile".  Note that the new limits generally take effect only for sessions
started after the change.

(bulkadm is my user)

bulkadm         soft    nofile          8192
bulkadm         hard    nofile          65536

Also, if you use the compound file format you will have many fewer files.
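
A minimal sketch of turning it on, assuming the Lucene 1.4-era IndexWriter
API (the index path is a placeholder):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class CompoundFormatExample {
        public static void main(String[] args) throws Exception {
            // true = create a new index at this placeholder path
            IndexWriter writer = new IndexWriter("/path/to/index",
                    new StandardAnalyzer(), true);
            // Each segment is then stored as a single .cfs file instead
            // of roughly one file per field and posting structure.
            writer.setUseCompoundFile(true);
            writer.close();
        }
    }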

-----Original Message-----
From: Neelam Bhatnagar [mailto:[EMAIL PROTECTED]
Sent: Monday, November 22, 2004 10:02 AM
To: Otis Gospodnetic
Cc: [EMAIL PROTECTED]
Subject: Too many open files issue


Hi,
 
I had requested help with an issue we have been facing: a "Too many
open files" exception that garbles the search indexes and crashes
search on our web site.
You had suggested that we look at the articles on the O'Reilly Network
that address this exact problem.
One of the suggestions was to increase the limit on the number of file
descriptors. We tried it by first lowering the limit from 256 to 200 in
order to reproduce the exception. The exception was indeed reproduced,
but even after we increased the limit to 500 it kept occurring; only
after several rounds of rebuilding the index did we finally get the
search working at the default file descriptor limit of 256. This makes
us wonder whether your first suggestion, optimizing the indexes, is a
prerequisite to trying this option.
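
For reference, optimizing merges all index segments into one, which
sharply cuts the number of files Lucene holds open. A minimal sketch,
again assuming the Lucene 1.4-era API and a placeholder index path:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class OptimizeIndex {
        public static void main(String[] args) throws Exception {
            // false = open the existing index rather than re-create it
            IndexWriter writer = new IndexWriter("/path/to/index",
                    new StandardAnalyzer(), false);
            writer.optimize(); // merge all segments into a single segment
            writer.close();
        }
    }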
 
Another piece of relevant information is that we have the default merge
factor of 10.
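
A lower merge factor keeps fewer segments, and therefore fewer open
files, on disk at once, at the cost of slower indexing. A minimal
sketch, assuming the Lucene 1.4-era setter; the value 5 is only
illustrative:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class MergeFactorExample {
        public static void main(String[] args) throws Exception {
            IndexWriter writer = new IndexWriter("/path/to/index",
                    new StandardAnalyzer(), false);
            writer.setMergeFactor(5); // default is 10; lower means
                                      // fewer segments accumulate
            writer.close();
        }
    }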
 
Kindly give us pointers on what it is that we are doing wrong, or on
whether we should be trying something completely different.
 
Thanks and regards
Neelam Bhatnagar
 
