On 09/07/10 05:10, Josh Berkus wrote:
> Simon, Mark,

>> Actually only 1 lock check per query, but certainly extra processing
>> and data structures to maintain the pool information... so yes,
>> certainly much more suitable for DW (AFAIK we never attempted to
>> measure the additional overhead for a non-DW workload).

> I recall testing it when the patch was submitted for 8.2, and the overhead was substantial in the worst case ... like 30% for an in-memory one-liner workload.


Interesting - quite high! However, I recall you tested the initial committed version; later additions dramatically reduced the overhead (what is in the Bizgres repo *now* is the latest).
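
To make the mechanism concrete for anyone following along, the per-query check amounted to something like the following - a simplified standalone sketch only, with invented names and a plain pthread mutex standing in for the backend's locking; it is not the actual Bizgres code:

/*
 * Simplified sketch - invented names, not the actual Bizgres code.
 * The point is that admission costs one lock acquisition at query
 * start: a shared pool structure guarded by a single mutex,
 * consulted once per query.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct ResPool
{
    pthread_mutex_t lock;       /* the single per-query lock */
    int             active;     /* queries currently admitted */
    int             max_active; /* pool's concurrency limit */
} ResPool;

/* Called once at the start of query execution. */
static bool
pool_admit(ResPool *pool)
{
    bool        ok;

    pthread_mutex_lock(&pool->lock);
    ok = (pool->active < pool->max_active);
    if (ok)
        pool->active++;
    pthread_mutex_unlock(&pool->lock);

    return ok;                  /* caller queues/waits if false */
}

/* Called once when the query finishes. */
static void
pool_release(ResPool *pool)
{
    pthread_mutex_lock(&pool->lock);
    pool->active--;
    pthread_mutex_unlock(&pool->lock);
}

int
main(void)
{
    ResPool     pool = { PTHREAD_MUTEX_INITIALIZER, 0, 2 };

    printf("admitted: %d\n", pool_admit(&pool));    /* 1 */
    printf("admitted: %d\n", pool_admit(&pool));    /* 1 */
    printf("admitted: %d\n", pool_admit(&pool));    /* 0 - pool full */
    pool_release(&pool);
    return 0;
}

Everything beyond that head count - the pool information mentioned above - is where the extra processing and data structures come in.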

> I've been going over the Greenplum docs and it looks like the attempt to ration work_mem was dropped. At this point, Greenplum 3.3 only rations by # of concurrent queries and total cost. I know that work_mem rationing was in the original plans; what made that unworkable?


That certainly was my understanding too. I left Greenplum about the time this was being discussed, and I think the other staff member involved with the design left soon afterwards as well, which might have been a factor!
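
For comparison, rationing by statement count and total cost is easy to express - again a rough sketch with invented names, not Greenplum's actual implementation. Both checks sit behind the same single lock as before:

/*
 * Rough sketch - invented names, not Greenplum's actual
 * implementation. Admission is gated on both a concurrency limit
 * and a summed-planner-cost limit.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct ResQueue
{
    pthread_mutex_t lock;
    int             active;      /* admitted queries */
    int             max_active;  /* concurrency limit */
    double          cost_in_use; /* summed cost of admitted queries */
    double          max_cost;    /* total cost limit */
} ResQueue;

static bool
queue_admit(ResQueue *q, double query_cost)
{
    bool        ok;

    pthread_mutex_lock(&q->lock);
    ok = (q->active < q->max_active &&
          q->cost_in_use + query_cost <= q->max_cost);
    if (ok)
    {
        q->active++;
        q->cost_in_use += query_cost;
    }
    pthread_mutex_unlock(&q->lock);

    return ok;
}

int
main(void)
{
    ResQueue    q = { PTHREAD_MUTEX_INITIALIZER, 0, 10, 0.0, 1000.0 };

    printf("cheap query: %d\n", queue_admit(&q, 50.0));   /* 1 */
    printf("big query:   %d\n", queue_admit(&q, 2000.0)); /* 0 - over cost */
    return 0;
}

Part of the difficulty with work_mem, I suspect, is that it is a per-sort/per-hash setting the planner has already baked into its plan choices, so adjusting it at admission time is far more invasive than counting statements or summing costs.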

> My general argument is that in the common case ... where you can't count on a majority of long-running queries ... any kind of admission control or resource management is a hard problem (if it weren't, Oracle would have had it before 11). I think that we'll need to tackle it, but I don't expect the first patches we make to be even remotely usable. It's definitely not an SoC project.

> I should write more about this.


+1

Cheers

Mark

