Hmmm, we keep a number of tlog files open based on the number of
records in each file (so we always retain a certain amount of history),
but IIRC the number of tlog files is also capped.  Perhaps there is a
bug when the cap on the number of tlog files is reached (as opposed to
the cap on the number of records in them).

I'll see if I can create a test case to reproduce this.
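Something along these lines should do it (a minimal SolrJ sketch,
assuming Solr 4.3's HttpSolrServer and a core at
http://localhost:8983/solr; the field name and loop count are only
illustrative):

    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class TlogLeakRepro {
      public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        // One document per request, each followed by a hard commit.
        // This mirrors the reported client behavior; each commit
        // closes the current tlog and starts a new one.
        for (int i = 0; i < 100000; i++) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", Integer.toString(i));
          server.add(doc);
          server.commit();
        }
        server.shutdown();
      }
    }

Watching the open tlog count on the Solr process while this runs
(e.g. lsof -p <pid> | grep tlog | wc -l) should show whether old
tlogs are being closed as expected.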

Separately, you'll get much better performance if you don't commit
after every update, of course (or at least use something like
commitWithin).
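With SolrJ that just means passing a commitWithin on each add instead
of issuing explicit commits (a sketch under the same assumptions as
above; the 10 second window is only illustrative):

    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    // Ask Solr to commit within 10 seconds instead of per document,
    // so many adds share a single commit (and a single tlog rollover).
    server.add(doc, 10000);

Over REST the equivalent is the commitWithin parameter on the update
request, e.g. /update?commitWithin=10000.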

-Yonik
http://lucidworks.com

On Wed, May 15, 2013 at 5:06 PM, Steven Bower <sbo...@alcyon.net> wrote:
> We have a system in which a client is sending 1 record at a time (via REST)
> followed by a commit. This has produced ~65k tlog files and the JVM has run
> out of file descriptors... I grabbed a heap dump from the JVM and I can see
> ~52k "unreachable" FileDescriptors... This leads me to believe that the
> TransactionLog is not properly closing all of its files before getting rid
> of the object...
>
> I've verified with lsof that there are indeed ~60k tlog files open
> currently...
>
> This is Solr 4.3.0
>
> Thanks,
>
> steve
