This type of problem is most often caused by failing to put close() calls
in finally blocks. All resource management in Java needs to be handled in
finally blocks (or try-with-resources), or else resource leaks can easily
bring a server down with memory or file-handle issues. There may be places
in the Oak source that aren't doing this, I don't know. Of course the other
possibility, Dirk, is that your own code may not be calling close() on
everything it needs to.
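For example, here is a minimal sketch of the pattern using the standard
JCR API (the ReadExample class and readBinary method are illustrative,
not taken from Oak):

    import java.io.IOException;
    import java.io.InputStream;
    import javax.jcr.Binary;
    import javax.jcr.Property;
    import javax.jcr.RepositoryException;

    public class ReadExample {
        // Read a binary property; try-with-resources closes the stream
        // and the finally block disposes the Binary, so the underlying
        // file handle is released even if processing throws.
        static void readBinary(Property prop)
                throws RepositoryException, IOException {
            Binary binary = prop.getBinary();
            try (InputStream in = binary.getStream()) {
                // ... process the stream ...
            } finally {
                binary.dispose();
            }
        }
    }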

The same is true for session.logout(). Make sure you aren't holding open a
bunch of sessions at once (like on multiple threads, or from failing to
call logout() in some processing loop). And finally, when you process large
batches of operations, make sure you don't build up too many changes before
calling session.save(). I have heard of apps having problems in the past
simply from trying to do too much in the same "commit".


Best regards,
Clay Ferguson
[email protected]


On Tue, Mar 28, 2017 at 9:00 AM, Dirk Rudolph <[email protected]>
wrote:

> Hi,
>
> we recently faced the issue that our Oak-based enterprise content
> management system ran into failures due to too many open files. Monitoring
> the lsof output, we found that most of the open files of the process
> are the files within the configured localIndexDir of the
> LuceneIndexProviderService. We have copyonread and copyonwrite enabled.
>
> Are there any known limitations with handling open files related to those 2
> options? If so, I would naively expect the implementation to manage file
> handles following some kind of LRU pattern and to allow configuring a
> maximum number of file handles to use.
>
> Talking in numbers: after a fresh restart of the process we have about 20k
> open files, of which 13k are index files, 2.5k segmentstore, and most of
> the others jar files. The ulimit is already set to more than 65k, but the
> instance crashed with more than 75k open file handles.
>
> Many thanks in advance,
>
> /Dirk
>
>
>
