On 19/02/11 08:48, Josh Berkus wrote:
> On 2/18/11 11:44 AM, Robert Haas wrote:
>> On Fri, Feb 18, 2011 at 2:41 PM, Josh Berkus <j...@agliodbs.com> wrote:
>>> Second, the main issue with these sorts of macro-counters has generally
>>> been their locking effect on concurrent activity.  Have you been able to
>>> run any tests which try to run lots of small externally-sorted queries
>>> at once on a multi-core machine, and checked the effect on throughput?
>>
>> Since it's apparently a per-backend limit, that doesn't seem relevant.
>
> Oh!  I missed that.
>
> What good would a per-backend limit do, though?
>
> And what happens with queries which exceed the limit?  Error message?  Wait?



By "temp files" I mean those in pgsql_tmp. LOL - A backend limit will have the same sort of usefulness as work_mem does - i.e stop a query eating all your filesystem space or bringing a server to its knees with io load. We have had this happen twice - I know of other folks who have too.

Obviously you need to do the same sort of arithmetic as you do with work_mem to decide on a reasonable limit that copes with multiple users creating temp files at once. Conservative DBAs might want to set it to (free disk)/max_connections or similar. For ad-hoc systems it is a bit more challenging - but having a per-backend limit is way better than what we have now, which is ... errr ... nothing.
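
To make that arithmetic concrete (all numbers invented for illustration): with 200GB free on the filesystem holding pgsql_tmp and max_connections = 100, a conservative postgresql.conf might look like:

    # 200GB free / 100 backends = 2GB per backend, assuming the worst
    # case of every backend spilling to temp files at the same time.
    max_connections = 100
    temp_file_limit = 2GB    # illustrative name for the proposed setting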

As an example, I'd find it useful for stopping badly written queries from causing too much I/O load on the database backend of (say) a web system (i.e. such a system should not *have* queries that want to use that much resource).

To answer the other question: what happens when the limit is exceeded is modeled on statement_timeout, i.e. the query is cancelled and an error message says why (temp file size limit exceeded).
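
For illustration (the exact wording is up for grabs), a query that blows through a 2GB limit would be cancelled with something like:

    ERROR:  temporary file size exceeds temp_file_limit (2097152kB)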

Cheers

Mark
