Yesterday a client and I were sad to discover that the overhead of constraint exclusion is apparently O(n) in the number of partitions. With ~180 partitions, each with a simple constraint (check (field = nnn)), the overhead amounted to about 0.25s on some quite performant hardware, which is far too high for our application. Actual execution of the query in question was taking about one tenth of that time.
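
For context, the layout is roughly the classic inheritance-based partitioning pattern; the table and column names below are made up for illustration:

    -- parent table plus one of ~180 children (hypothetical names)
    CREATE TABLE events (id bigint, customer_id int, payload text);

    CREATE TABLE events_101 (
        CHECK (customer_id = 101)
    ) INHERITS (events);

    -- with constraint_exclusion enabled, a query against the parent makes
    -- the planner test every child's CHECK constraint against the WHERE
    -- clause, which is where the ~0.25s of planning overhead goes:
    SELECT * FROM events WHERE customer_id = 101;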

For now we're going to work around this by pointing the queries at the child tables directly, although that does involve fairly large application changes.
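
Concretely, the workaround is to have the application resolve the target partition itself and name it in the query, so the planner never has to consider the other children (again with hypothetical names):

    -- application maps customer 101 to its partition and queries it
    -- directly, skipping constraint exclusion over all ~180 children
    SELECT * FROM events_101 WHERE customer_id = 101;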

However, I wondered if we couldn't mitigate this by caching the results of constraint exclusion analysis for a particular table + condition. I have no idea how hard this would be, but in principle it seems silly to keep paying the same penalty over and over again.

Thoughts?

cheers

andrew



