Recently two systems here have suffered badly from excessive temporary file creation during query execution. In one case it could have been avoided by more stringent QA before the application code release, whereas the other is an ad-hoc system, and err... yes.

In both cases it would have been great to be able to constrain the amount of temporary file space a query could use. In theory you can sort of do this with the various ulimits, but that seems pretty impractical: at that level all files look the same, and you'd be just as likely to unexpectedly cripple the entire db a few weeks later when a table grows...

I got to wondering how hard this would be to do in Postgres, and attached is my (WIP) attempt. It provides a guc (max_temp_files_size) to limit the total size of all temp files for a backend, and amends fd.c to cancel execution if the total size of temporary files exceeds this.

This is WIP; it does seem to work ok, but some areas/choices I'm not entirely clear about are mentioned in the patch itself. Mainly:

- name of the guc... better suggestions welcome
- datatype for the guc - real would be good, but at the moment the nice KB/MB/GB unit parsing only works for int
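For reference, with an int datatype and the standard unit parsing the setting would look like any other size guc in postgresql.conf (name per the WIP patch, value just an example):

```
# per-backend cap on total temp file space; -1 disables the check
max_temp_files_size = 1GB
```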

regards

Mark

Attachment: temp-files-v1.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
