* Tom Lane ([EMAIL PROTECTED]) wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > It doesn't seem like a bad idea to have a max_memory parameter that if a
> > backend ever exceeded it would immediately abort the current
> > transaction.
>
> See ulimit (or local equivalent).
As much fun as setting ulimit in shell scripts is, I have to admit that
I really don't see it happening very much in practice. Having Postgres set
a ulimit for itself may not be a bad idea, and would perhaps provide
"least surprise" for new users. Perhaps shared_buffers + 10*work_mem +
maintenance_work_mem + max_stack_depth? Then errors from running out of
memory could provide a 'HINT: Memory consumption went well over the
allowed work_mem; perhaps you need to run ANALYZE or raise work_mem?'.
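For what it's worth, a rough sketch of what that formula would look like as
a shell wrapper (all the concrete values below are illustrative assumptions,
expressed in kB, not anything Postgres actually exports today):

```shell
#!/bin/sh
# Hypothetical sketch: derive a per-backend memory cap from GUC-style
# values, then apply it with ulimit before starting the postmaster.
# All values are in kB and purely illustrative.
shared_buffers=32768         # assumed 32 MB
work_mem=1024                # assumed 1 MB
maintenance_work_mem=16384   # assumed 16 MB
max_stack_depth=2048         # assumed 2 MB

# The formula proposed above: shared_buffers + 10*work_mem
#   + maintenance_work_mem + max_stack_depth
limit=$(( shared_buffers + 10 * work_mem + maintenance_work_mem + max_stack_depth ))
echo "$limit"

# In a real wrapper you would then cap virtual memory (kB) before exec'ing
# the server, e.g.:
#   ulimit -v "$limit" && exec postmaster ...
```

With those sample numbers the computed cap comes out to 61440 kB; the
ulimit line is left commented so the arithmetic can be seen without
actually shrinking the current shell's limits.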
Just some thoughts,
Stephen
