Re: [HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
>> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,
>> with a default value of about 100.  A new backend would set its
>> max-files setting to the smaller of this parameter or
>> sysconf(_SC_OPEN_MAX).

> Seems like a nice idea.  We have heard lots of problem reports caused
> by running out of the file table.

> However it would be even nicer if it could be configurable at runtime
> (at postmaster startup time), like the -N option.

Yes, what I meant was a GUC parameter named MAX_FILES_PER_PROCESS.
You could set it via postmaster.opts or a postmaster command-line
switch.

			regards, tom lane
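[Assuming the eventual GUC ends up spelled max_files_per_process,
setting it might look like the sketch below; both the spelling and the
-c switch are assumptions about an interface that is still being
designed, not a settled syntax.]

    # postgresql.conf (hypothetical spelling of the proposed parameter)
    max_files_per_process = 100

    # or as a hypothetical postmaster command-line switch
    postmaster -c max_files_per_process=100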
Re: [HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
* Tom Lane <[EMAIL PROTECTED]> [001223 14:16] wrote:
> Department of Things that Fell Through the Cracks:
>
> Back in August we had concluded that it is a bad idea to trust
> "sysconf(_SC_OPEN_MAX)" as an indicator of how many files each backend
> can safely open.  FreeBSD was reported to return 4136, and I have
> since noticed that LinuxPPC returns 1024.  Both of those are
> unreasonably large fractions of the actual kernel file table size.
> A few dozen backends opening hundreds of files apiece will fill the
> kernel file table on most Unix platforms.

getdtablesize(2) on BSD should tell you the per-process limit; sysconf
on FreeBSD shouldn't lie to you.  getdtablesize should take into
account the resource limits in place.

Later versions of FreeBSD have a sysctl, 'kern.openfiles', which can be
checked to see if the system is approaching the systemwide limit.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."
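[For reference, a minimal sketch of querying both numbers on FreeBSD.
kern.openfiles is the sysctl Alfred names; the surrounding program is
purely illustrative.]

    #include <stdio.h>
    #include <unistd.h>         /* getdtablesize() */
    #include <sys/types.h>
    #include <sys/sysctl.h>     /* FreeBSD: sysctlbyname() */

    int
    main(void)
    {
        /* Per-process descriptor limit, honoring resource limits. */
        int     per_process = getdtablesize();

        /* System-wide count of open files (FreeBSD-specific sysctl). */
        int     open_files = 0;
        size_t  len = sizeof(open_files);

        printf("per-process fd limit: %d\n", per_process);

        if (sysctlbyname("kern.openfiles", &open_files, &len, NULL, 0) == 0)
            printf("files currently open system-wide: %d\n", open_files);
        else
            perror("sysctlbyname(kern.openfiles)");

        return 0;
    }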
Re: [HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Maybe a setting that controls the total number of files that
> postmaster plus backends can allocate among them would be useful.

That'd be nice if we could do it, but I don't see any inexpensive way
to get one backend to release an open FD when another one needs one.
So, divvying up the limit on an N-per-backend basis seems like the most
workable approach.

			regards, tom lane
Re: [HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane writes:
>> I'm not sure why this didn't get dealt with, but I think it's a "must
>> fix" kind of problem for 7.1.  The dbadmin has *got* to be able to
>> limit Postgres' appetite for open file descriptors.

> Use ulimit.

Even if ulimit exists and is able to control that parameter on a given
platform (highly unportable assumptions), it's not really a workable
answer.  fd.c has to stop short of using up all of the actual nfile
limit, or else stuff like the dynamic loader is likely to fail.

> I think this is an unreasonable interference with the customary
> operating system interfaces (e.g., ulimit).  The last thing I want to
> hear is "Postgres is slow and it only opens 100 files per process
> even though I raised the limit to allow 32 million."

(1) A dbadmin who hasn't read the run-time configuration doc page (that
you did such a nice job with) is going to have lots of performance
issues besides this one.

(2) The last thing *I* want to hear is stories of a default Postgres
installation causing system-wide instability.  But if we don't insert
an open-files limit that's tighter than the "customary operating system
limit", that's exactly the situation we have, at least on several
popular platforms.

			regards, tom lane
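[A sketch of the kind of headroom fd.c needs to keep below the nfile
limit so that the dynamic loader and libc still have descriptors to
work with.  NUM_RESERVED_FDS, max_files_per_process, and the function
name are assumed for illustration; this is not the actual fd.c code.]

    #include <stdio.h>
    #include <unistd.h>     /* sysconf() */

    /* Assumed headroom reserved for libc, dlopen(), etc. */
    #define NUM_RESERVED_FDS    10

    static int
    compute_max_safe_fds(int max_files_per_process)
    {
        long    sys_limit = sysconf(_SC_OPEN_MAX);
        int     limit = max_files_per_process;

        /* Never exceed what the OS claims to allow. */
        if (sys_limit > 0 && sys_limit < (long) limit)
            limit = (int) sys_limit;

        /* Stop short of the real limit: leave fds for the loader. */
        limit -= NUM_RESERVED_FDS;
        return (limit > 0) ? limit : 1;
    }

    int
    main(void)
    {
        printf("safe fd budget: %d\n", compute_max_safe_fds(100));
        return 0;
    }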
[HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
Tom Lane wrote:
>
> Department of Things that Fell Through the Cracks:
>
> Back in August we had concluded that it is a bad idea to trust
> "sysconf(_SC_OPEN_MAX)" as an indicator of how many files each backend
> can safely open.  FreeBSD was reported to return 4136, and I have
> since noticed that LinuxPPC returns 1024.  Both of those are
> unreasonably large fractions of the actual kernel file table size.
> A few dozen backends opening hundreds of files apiece will fill the
> kernel file table on most Unix platforms.
>
> I'm not sure why this didn't get dealt with, but I think it's a "must
> fix" kind of problem for 7.1.  The dbadmin has *got* to be able to
> limit Postgres' appetite for open file descriptors.
>
> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,
> with a default value of about 100.  A new backend would set its
> max-files setting to the smaller of this parameter or
> sysconf(_SC_OPEN_MAX).
>
> An alternative approach would be to make the parameter be total open
> files across the whole installation, and divide it by MaxBackends to
> arrive at the per-backend limit.  However, it'd be much harder to pick
> a reasonable default value if we did it that way.
>
> Comments?

On Linux, at least, the 1024 file limit is a per-process limit; the
system-wide limit defaults to 4096 and can easily be changed by

    echo 16384 > /proc/sys/fs/file-max

(16384 is arbitrary and can be much larger.)

I am all for having the ability to override the system-reported values,
but I think it should be an option which defaults to the previous
behavior.

-- 
http://www.mohawksoft.com
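[A minimal sketch of reading that Linux system-wide limit
programmatically; the /proc path is as described above, the rest is
illustrative.]

    #include <stdio.h>

    int
    main(void)
    {
        /* Read Linux's system-wide open-file limit. */
        FILE   *fp = fopen("/proc/sys/fs/file-max", "r");
        long    file_max;

        if (fp != NULL && fscanf(fp, "%ld", &file_max) == 1)
            printf("system-wide open file limit: %ld\n", file_max);
        else
            fprintf(stderr, "could not read /proc/sys/fs/file-max\n");
        if (fp != NULL)
            fclose(fp);
        return 0;
    }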
Re: [HACKERS] Re: Too many open files (was Re: spinlock problems reported earlier)
Department of Things that Fell Through the Cracks:

Back in August we had concluded that it is a bad idea to trust
"sysconf(_SC_OPEN_MAX)" as an indicator of how many files each backend
can safely open.  FreeBSD was reported to return 4136, and I have since
noticed that LinuxPPC returns 1024.  Both of those are unreasonably
large fractions of the actual kernel file table size.  A few dozen
backends opening hundreds of files apiece will fill the kernel file
table on most Unix platforms.

I'm not sure why this didn't get dealt with, but I think it's a "must
fix" kind of problem for 7.1.  The dbadmin has *got* to be able to
limit Postgres' appetite for open file descriptors.

I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,
with a default value of about 100.  A new backend would set its
max-files setting to the smaller of this parameter or
sysconf(_SC_OPEN_MAX).

An alternative approach would be to make the parameter be total open
files across the whole installation, and divide it by MaxBackends to
arrive at the per-backend limit.  However, it'd be much harder to pick
a reasonable default value if we did it that way.

Comments?

			regards, tom lane
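[A sketch of the two options in the proposal.  The names are taken from
the text or invented for illustration; neither function is actual
Postgres code.]

    #include <stdio.h>
    #include <unistd.h>     /* sysconf() */

    /* Proposed: each backend takes the smaller of the parameter and
     * what the OS claims to allow. */
    static int
    per_backend_limit(int max_files_per_process)
    {
        long    os_max = sysconf(_SC_OPEN_MAX);

        if (os_max > 0 && os_max < (long) max_files_per_process)
            return (int) os_max;
        return max_files_per_process;
    }

    /* Alternative: divide an installation-wide total by MaxBackends.
     * Simple, but a reasonable default for total_files is hard to pick. */
    static int
    per_backend_limit_from_total(int total_files, int MaxBackends)
    {
        return total_files / MaxBackends;
    }

    int
    main(void)
    {
        printf("proposed limit:    %d\n", per_backend_limit(100));
        printf("alternative limit: %d\n",
               per_backend_limit_from_total(3200, 32));
        return 0;
    }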