Michael Brusser <[EMAIL PROTECTED]> writes:
> Apparently we managed to run out of the open file descriptors on the host
> machine.

This is pretty common if you set a large max_connections value while
not doing anything to raise the kernel nfile limit.  Postgres will
follow what the kernel tells it is a safe number of open files per
process, but far too many kernels lie through their teeth about what
they can support :-(
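
For instance, on Linux you can eyeball the kernel-wide limit and the
per-process limit with something like the following (treat this as a
sketch; the exact knobs differ on other platforms):

    # system-wide cap on open files (Linux)
    cat /proc/sys/fs/file-max
    # per-process soft limit in the shell that starts the postmaster
    ulimit -n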

You can reduce max_files_per_process in postgresql.conf to keep Postgres
from believing what the kernel says.  I'd recommend making sure that
max_connections * max_files_per_process is comfortably less than the
kernel nfiles setting (don't forget the rest of the system wants to have
some files open too ;-))
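
For example, with purely illustrative numbers (not a recommendation for
your box), settings like these in postgresql.conf cap the worst case at
100 * 200 = 20000 descriptors, which sits comfortably under a kernel
limit of, say, 65536:

    max_connections = 100
    # cap each backend well below what the kernel claims it can handle
    max_files_per_process = 200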

> I wonder how Postgres handles this situation.
> (Or power outage, or any hard system fault, at this point)

Theoretically we should be able to recover from this without loss of
committed data (assuming you were running with fsync on).  Is your QA
person certain that the record in question had been written by a
successfully-committed transaction?
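
If you want to double-check that fsync is actually on, a quick way
(assuming you can connect with psql) is:

    psql -c "SHOW fsync;"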

                        regards, tom lane
