On Mon, Feb 9, 2009 at 1:57 PM, Stephen Frost <sfr...@snowman.net> wrote:
> * Scott Marlowe (scott.marl...@gmail.com) wrote:
>> I think that you're fixing a symptom, but ignoring the cause.
>> Twiddling VM parameters may help out, but this problem of too much
>> memory allocated is the real issue, so yeah, you're just putting off
>> the inevitable.
>
> I don't think changing work_mem down is actually going to reduce the
> memory allocated without changing the plan to something less optimal.
> In the end, all of this is putting off the inevitable, if you get enough
> PGs going and enough requests and whatnot, you're going to start running
> out of memory again.  Same if you get larger data sets that take up more
> hash table space or similar.  Eventually you might need a bigger box,
> but let's try to get everything in the current box to at least be used
> first..

That's not always true.  It's not uncommon for the same plan to
simply spill to disk later.  If the dataset is big enough that it's
going to spill to disk anyway, the difference in performance is often
pretty small.  That's why I was saying he needs to benchmark ALL his
queries, weighted by how often they run, and see what work_mem gives
optimal results overall instead of catering to just one or two big
queries.  If need be, set a large work_mem for just that connection
and avoid the foot gun that is a large global work_mem.

It's far better to waste a few gigs of memory than to start having
midday failures under peak load due to OOM conditions.
