Even Oracle requires 65536; for MySQL with MyISAM, the required limit depends on the number of tables, indexes, and client threads.

In my experience with Lucene, 8192 is not enough; leave headroom for the OS as well.

A multithreaded application (in most cases) multiplies the number of open files by the number of threads, since each thread needs its own file handle; with Solr on Tomcat that means 256 threads. The number of files also depends on mergeFactor=10 (the Solr default). So, with a merge factor of 10 and 6 file types per segment (*.cfs, *.fdt, etc.):
256*10*6 = 15360 (theoretically)
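
For reference, a rough sanity check can be done from inside the JVM: the sketch below recomputes the estimate with the assumed numbers above and, on a Sun/Oracle JDK on Unix, compares it against actual descriptor usage via com.sun.management.UnixOperatingSystemMXBean.

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdEstimate {
    public static void main(String[] args) {
        // Assumed inputs from the estimate above: 256 Tomcat threads,
        // mergeFactor=10, 6 file types per segment (*.cfs, *.fdt, etc.)
        long threads = 256;
        long mergeFactor = 10;
        long filesPerSegment = 6;
        System.out.println("Theoretical worst case: "
                + (threads * mergeFactor * filesPerSegment) + " descriptors");

        // Actual usage vs. limit; available on Unix with a Sun/Oracle JDK
        java.lang.management.OperatingSystemMXBean os =
                ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("Open file descriptors: " + unix.getOpenFileDescriptorCount());
            System.out.println("Max file descriptors:  " + unix.getMaxFileDescriptorCount());
        }
    }
}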

==============
http://www.linkedin.com/in/liferay


Quoting Brian Carmalt <[EMAIL PROTECTED]>:

Hello,

I have a similar problem, not with Solr, but in Java. From what I have
found, it is a usage and OS problem: it comes from using too many files
and the time it takes the OS to reclaim the fds. I found the
recommendation that System.gc() should be called periodically. It works
for me. It may not be the most elegant solution, but it works.

Brian.
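
A minimal sketch of the periodic System.gc() workaround described above; the 60-second interval and the ScheduledExecutorService setup are assumptions, not from the original mail.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Ask the JVM to garbage-collect every 60 seconds so that unreachable
        // streams get finalized and their file descriptors are released sooner.
        // System.gc() is only a hint to the JVM, not a guarantee.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc();
            }
        }, 60, 60, TimeUnit.SECONDS);
    }
}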

On Monday, 14.07.2008 at 11:14 +0200, Alexey Shakov wrote:
We have now set the limit to ~10000 files,
but this is not the solution - the number of open files keeps
increasing. Sooner or later, this limit will be exhausted.


Fuad Efendi schrieb:
> Have you tried [ulimit -n 65536]? I don't think it relates to files
> marked for deletion...
> ==============
> http://www.linkedin.com/in/liferay
>
>
>> Sooner or later, the system crashes with the message "Too many open files"
>
>
