On 11/26/2014 05:00 PM, Andrew Dunstan wrote:

Attached is some anonymized DDL for a fairly complex schema from a PostgreSQL Experts client. Also attached is an explain query that runs against the schema. The client's problem is that in trying to run the explain, Postgres simply runs out of memory. On my untuned 9.3 test rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat clearly shows the explain chewing up about 7GB of memory; when it's done, free memory jumps back to where it was. In a similar case on the client's test rig we saw memory use jump far more.

The client's question is whether this is a bug. It certainly seems like it should be possible to plan a query without chewing up this much memory, or at least to be able to limit the amount of memory that can be grabbed during planning. Going from humming along happily to OOM conditions just by running "explain <somequery>" is not very friendly.


Further data point - thanks to Andrew Gierth (a.k.a. RhodiumToad) for pointing this out. The query itself grabs about 600MB to 700MB to run, whereas the EXPLAIN takes vastly more - on my system about 10 times more. Surely that's not supposed to happen?
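
For anyone who wants to reproduce the comparison, here is a rough sketch (the PID below is a placeholder, and the one-second sampling interval is an arbitrary choice, not what was actually used here): note the backend PID of the session, then watch its resident set size from /proc while first the query and then the EXPLAIN runs.

    # in the psql session that will run the query / EXPLAIN, note the backend PID:
    #   SELECT pg_backend_pid();
    # then, from another terminal, sample that backend's memory once a second
    # (12345 stands in for the PID reported above)
    while sleep 1; do
        grep -E 'VmPeak|VmRSS' /proc/12345/status
    done

Comparing the peak RSS seen during the query with the peak seen during the EXPLAIN should show roughly the 10x gap described above.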

cheers

andrew


