Re: [Dhis2-users] Too Many Open Files?

2016-09-07 Thread Bob Jolliffe
Hi Jason, Halvdan is right - these are more likely file descriptors associated with sockets. The limit you have dug up is the kernel limit (i.e. the total number of "files" the kernel can have open across the whole system), which is probably not the limit you are actually reaching. If you have a very busy server you can have …
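
A quick way to see which of these limits is actually in play (a rough sketch, assuming a Linux host; <pid> stands for the Tomcat JVM's process id, which you can find with ps or pgrep):

    # System-wide descriptor limit - usually the very large number (e.g. 6.5 million)
    cat /proc/sys/fs/file-max

    # Per-process limit for the current shell/user - often the one that actually gets hit
    ulimit -n

    # The limit the running Tomcat process really has (may differ from your shell's)
    grep "Max open files" /proc/<pid>/limits

    # How many descriptors the Tomcat process currently holds
    ls /proc/<pid>/fd | wc -l

If the per-process count is creeping up towards the "Max open files" value while file-max is in the millions, it is the per-process limit Tomcat is running into, not the kernel one.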

Re: [Dhis2-users] Too Many Open Files?

2016-09-06 Thread Halvdan Hoem Grelland
Hi Jason, ‘Files’ in this context really means file descriptors, and a descriptor can refer to any openable resource in the system, including both regular files and sockets. I do agree that it’s unlikely that Tomcat is hogging 6.5 million FDs, but I’m not convinced that is the real limit either. You could try: ulimit -n to get …
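
If ulimit -n does turn out to be the limit being hit, it can be raised for the Tomcat user. A minimal sketch, assuming Tomcat runs as a user called "tomcat" and is started through a normal PAM login session (the 65536 value is only illustrative):

    # /etc/security/limits.conf
    tomcat  soft  nofile  65536
    tomcat  hard  nofile  65536

If Tomcat is started by systemd instead, limits.conf is not consulted; the equivalent is LimitNOFILE=65536 in the tomcat service unit, followed by a daemon-reload and a restart of the service.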

[Dhis2-users] Too Many Open Files?

2016-09-05 Thread Jason Phillips
Hi all, We have a system that tanked last night, reporting:

  14-Aug-2016 19:13:57.440 SEVERE [http-nio-8080-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
  java.io.IOException: Too many open files
      at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Metho …
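
When this happens, it helps to see what the process is actually holding. A sketch, assuming lsof and ss are available and <pid> is replaced with the Tomcat PID:

    # Break down Tomcat's open descriptors by type (REG, IPv6, sock, ...)
    lsof -p <pid> | awk '{print $5}' | sort | uniq -c | sort -rn

    # Count TCP connections stuck in CLOSE_WAIT, a common source of leaked socket descriptors
    ss -t state close-wait | wc -l

A large count of socket-type entries, or a pile of CLOSE_WAIT connections, points at leaked sockets rather than ordinary files, which matches the replies above.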