Justin Pitts wrote:
I don't know if I would call it "terribly" ugly. It's not especially pretty, but it affords the needed degree of twiddling to get the job done. Relying on the clients is fine - if you can. I suspect the vast majority of DBAs would find that notion unthinkable. The usual result of a memory overrun is a server crash.
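(Rough worst-case arithmetic with this thread's numbers: if all 200 clients were allowed 256MB and each happened to run one big sort at the same moment, that is 200 x 256MB, about 50GB of requested sort memory.)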


It's probably OK in this context: the multiple clients are all instances of the same Perl script, running particular, pre-defined queries, so we can trust them not to issue a really memory-intensive query.

Besides which, if you can't trust the clients to issue sensible queries, why can you trust them to set their own work_mem values?
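A minimal sketch of one middle ground, assuming the heavy session connects as its own database user (the role names here are invented for illustration): the DBA can pin the values server-side instead of trusting client code:

    ALTER ROLE reporting SET work_mem = '256MB';  -- picked up at login by the heavy session
    ALTER ROLE webclient SET work_mem = '4MB';    -- keeps the 200 small clients capped

Neither side then has to remember to issue SET itself.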

Richard




On Nov 20, 2009, at 4:39 PM, Richard Neill wrote:

Justin Pitts wrote:
Set work_mem in postgresql.conf down to what the 200 clients need, which sounds to me like the default setting.
In the session which needs more work_mem, execute:
SET SESSION work_mem TO '256MB'
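A transaction-scoped variant of the same idea, in case the raised value should not outlive a single query; a minimal sketch (the query itself is a placeholder):

    BEGIN;
    SET LOCAL work_mem TO '256MB';  -- reverts automatically at COMMIT or ROLLBACK
    SELECT ...;                     -- placeholder for the one memory-hungry query
    COMMIT;

A session that used SET SESSION can likewise fall back to the postgresql.conf value with RESET work_mem;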

Isn't that terribly ugly? It seems to me less hackish to rely on the many clients not to abuse work_mem (since we know exactly which queries they will run, we can be sure it won't happen).

It's a shame that the work_mem parameter is a per-client one, rather than a single big pool.
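For what it's worth, work_mem caps each individual sort or hash step rather than the client as a whole, so a single query can claim several multiples of it; a sketch, with invented table names:

    SET work_mem TO '64MB';
    EXPLAIN ANALYZE
    SELECT * FROM big_a a JOIN big_b b USING (id) ORDER BY a.ts;
    -- the hash join and the final sort may each take up to 64MB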

Richard
