On 15/12/2009 12:35 PM, Mark Williamson wrote:
> So what happened is, the above update never completed and the PostgreSQL
> service consumed all available memory. We had to forcefully reboot the
> machine.
That means your server is misconfigured. PostgreSQL should never consume
all available memory. If it does, you have work_mem and/or
maintenance_work_mem set way too high, and you have VM overcommit
enabled in the kernel. You also have too much swap.
http://www.postgresql.org/docs/current/interactive/kernel-resources.html
http://www.network-theory.co.uk/docs/postgresql/vol3/LinuxMemoryOvercommit.html
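On Linux, disabling overcommit is a one-liner (the ratio below is just an
example; see the links above for details):

    # /etc/sysctl.conf - tell the kernel not to overcommit memory
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80    # optional; tune to your RAM/swap mix

    # apply without a reboot:
    sysctl -p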
I wouldn't be surprised if you have shared_buffers set too high as well,
and no ulimit set on PostgreSQL's memory usage. All those things add up
to "fatal".
A properly configured machine should survive memory exhaustion caused by
a user process just fine. Disable VM overcommit, set a ulimit on
PostgreSQL so it can't consume all memory, use a sane amount of swap,
and set sane values for work_mem and maintenance_work_mem.
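For example (numbers are illustrative starting points, not
recommendations; the right values depend on your RAM and
max_connections):

    # postgresql.conf
    shared_buffers = 512MB          # a modest fraction of RAM
    work_mem = 16MB                 # per sort/hash *per backend*, so it multiplies
    maintenance_work_mem = 128MB    # used by VACUUM, CREATE INDEX, etc.

Remember that work_mem applies per sort/hash operation, not per server,
which is why "way too high" values blow up under concurrent load.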
> Why does PostgreSQL NOT have a maximum memory allowed setting? We want
> to allocate resources efficiently and cannot allow one customer to
> impact others.
It does. "man ulimit".
The operating system can enforce it much better than PostgreSQL can. If
a Pg bug were to cause it to run away or try to allocate insane amounts
of RAM, the ulimit would catch it.
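Something like this in the init script, before the postmaster starts,
does the job (the limit and paths are only examples):

    # cap each process's virtual memory at ~4GB (ulimit -v takes kB on Linux)
    ulimit -v 4194304
    su - postgres -c '/usr/local/pgsql/bin/pg_ctl start -D /var/lib/pgsql/data'

Backends fork from the postmaster, so they inherit the limit
automatically.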
I *do* think it'd be nice to have ulimit values settable via
postgresql.conf so that you didn't have to faff about editing init
scripts, though.
( TODO item? )
--
Craig Ringer