Greetings,
We're seeing numerous "LOG: out of file descriptors: Too many open files in
system; release and retry" entries as well as quite a few "LOG: could not
open temporary statistics file "global/pgstat.tmp": Too many open files in
system"
Much more alarming however, we're seeing errors such
2009/8/12 Jonatan Evald Buus :
> Greetings,
> We're seeing numerous "LOG: out of file descriptors: Too many open files in
> system; release and retry" entries as well as quite a few "LOG: could not
> open temporary statistics file "global/pgstat.tmp": Too many open files in
> system"
> Much more
Since the message is "out of file descriptors: Too many open files in
system; release and retry", I think you should also be checking the system
ulimit.
Try "ulimit -a" to list the shell's resource limits, and "ulimit -Hn" to
get the hard limit on the number of open file descriptors per process.
"ulimit -Hn" shows 11095 which is the same as kern.maxfilesperproc.
"ulimit -a" shows the following
socket buffer size (bytes, -b) unlimited
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) 524288
file size (blocks, -f) unlimited
max locked me
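Worth noting that "Too many open files in system" is the kernel's ENFILE error, i.e. the system-wide file table is full, which on FreeBSD is governed by kern.maxfiles rather than the per-process limits above. A sketch of the checks (FreeBSD commands, to be run on the affected host; output will vary):

```shell
# System-wide descriptor accounting on FreeBSD:
sysctl kern.maxfiles          # kernel-wide cap on open files (ENFILE trips here)
sysctl kern.openfiles         # files currently open across the whole system
sysctl kern.maxfilesperproc   # per-process cap (should match "ulimit -Hn")

# Rough count of descriptors currently held by postgres processes
# (CMD is the second column of fstat output)
fstat | awk '$2 == "postgres"' | wc -l
```

Comparing kern.openfiles against kern.maxfiles under load should show how close the system as a whole is to the ceiling, independent of any single process.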
Jonatan Evald Buus wrote:
> [too many files open in system]
> Any insight or suggestions as to where to start digging would be
> greatly appreciated
You didn't start truncating temporary tables inside a loop somewhere,
did you?
http://archives.postgresql.org/pgsql-hackers/2009-08/msg00444
Nope, though that sounds fun!! ;-)
The application hasn't really changed much over the past 6 months either
(and definitely not its use of the database, which is mainly INSERT queries
used for logging and a few SELECTs for sharing data between the clustered
application nodes; all in all pretty simple)
Jonatan Evald Buus writes:
> Is it normal for PostgreSQL to have close to 5000 file handles open while
> running?
It can be, if you have enough active backends and enough tables that
they are touching. You have not provided nearly enough information to
gauge what the expected number of actual open files would be.
Cheers for the insight Tom,
We generally have anywhere between 60 and 100 active connections to Postgres
under normal load, but at peak times this may increase hence the
max_connections = 256.
There are several databases in the Postgres cluster, so an estimate would be
approximately 200 tables in total.
Jonatan Evald Buus writes:
> Also, what would a reasonable setting for "max_files_per_process" based on a
> machine with 2GB RAM running FreeBSD 7.1 be?
> The comments mention that "max_files_per_process" may be set as low as 25,
> but what would the implications of this restriction be?
At some po
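One direction the tuning can go (illustrative values only, not recommendations for this particular box): either cap how many descriptors each backend will cache, or raise the kernel-wide table, keeping max_connections x max_files_per_process comfortably below kern.maxfiles.

```
# postgresql.conf -- limit per-backend descriptor caching
# (256 backends x 40 = 10240, below the 11095 per-process hard limit
#  reported earlier; 25 is the minimum PostgreSQL accepts)
max_files_per_process = 40

# /etc/sysctl.conf -- or instead raise the kernel-wide limits
kern.maxfiles=65536
kern.maxfilesperproc=32768
```

The trade-off with a low max_files_per_process is that backends must close and reopen files more often, which costs some performance on workloads that touch many relations.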