On Fri, Jun 1, 2012 at 8:45 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Merlin Moncure <mmonc...@gmail.com> writes:
>> A potential issue with this line of thinking is that your pin delay
>> queue could get heavily pressured by outer portions of the query (as
>> in the OP's case) that will get little or no benefit from the delayed
>> pin.  But wouldn't choosing a sufficiently sized drain queue work for
>> most reasonable cases, assuming 32 isn't enough?  Why not something
>> much larger, for example the lesser of 1024 and (NBuffers * 0.25) /
>> max_connections?  In other words, for you to get much benefit, a
>> buffer has to be pinned sufficiently more often than the 1/N you'd
>> expect if pins were spread evenly across all buffers.
>
> Allowing each backend to pin a large fraction of shared buffers sounds
> like a seriously bad idea to me.  That's just going to increase
> thrashing of what remains.

By 'large fraction', you mean 25%?  You could always set it lower, say
5%.  But if you can be smarter about which buffers to put in, I agree:
a smaller queue is better.
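For concreteness, here's a toy version of that sizing rule
(illustration only -- the function name is made up, and the 25% and
1024 are just the numbers from upthread):

#include <stdint.h>

/*
 * Toy sizing rule: cap each backend's delayed-pin queue at the lesser
 * of 1024 and 25% of shared buffers split evenly across backends.
 * Example: shared_buffers = 16384 pages (128MB at 8kB pages) and
 * max_connections = 100 gives min(1024, 4096 / 100) = 40 entries,
 * i.e. each backend can hold delayed pins on ~0.25% of the pool.
 */
static inline uint32_t
delay_queue_cap(uint32_t nbuffers, uint32_t max_connections)
{
    uint32_t share = nbuffers / 4 / max_connections;

    return share < 1024 ? share : 1024;
}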

> More generally, I don't believe that we have any way to know which
> buffers would be good candidates to keep pinned for a long time.
> Typically, we don't drop the pin in the first place if we know we're
> likely to touch that buffer again soon.  btree root pages might be an
> exception, but I'm not even convinced of that one.

Why not (unless Florian's warming concept is a better bet) hook it to
spinlock contention?  That's what we're trying to avoid, after all.
s_lock could be modified to return whether it had to delay; PinBuffer
could watch for that and stick the contended buffer in the queue.
Both Florian's idea (AIUI) and an s_lock-based implementation require
you to search your local queue on every pin/unpin, which I think is
the real cost.  Robert's doesn't, although it is a more complicated
approach.
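To make the s_lock idea concrete, here's a standalone toy (not backend
code -- every name is made up for illustration): a spinlock whose
acquire reports whether it had to spin, plus a small per-backend ring
that parks one extra pin on each recently contended buffer and drops
the oldest delayed pin when the ring wraps.

#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_bool locked; } toy_slock;

/* Acquire; return true if we found the lock taken (i.e. contended). */
static bool
toy_s_lock(toy_slock *s)
{
    bool delayed = false;

    while (atomic_exchange_explicit(&s->locked, true,
                                    memory_order_acquire))
        delayed = true;     /* real s_lock would spin and back off */
    return delayed;
}

static void
toy_s_unlock(toy_slock *s)
{
    atomic_store_explicit(&s->locked, false, memory_order_release);
}

typedef struct toy_buffer
{
    toy_slock   hdr_lock;   /* stand-in for the buffer header lock */
    int         refcount;   /* pin count, protected by hdr_lock */
} toy_buffer;

#define DELAY_RING_LEN 32
static _Thread_local toy_buffer *delay_ring[DELAY_RING_LEN];
static _Thread_local int delay_ring_next;

static void
toy_unpin(toy_buffer *buf)
{
    (void) toy_s_lock(&buf->hdr_lock);
    buf->refcount--;
    toy_s_unlock(&buf->hdr_lock);
}

/*
 * Pin a buffer.  If its header lock was contended, take one *extra*
 * pin and park it in the ring; that pin is released only when the
 * ring slot is recycled, so a hot buffer stays pinned across calls.
 */
static void
toy_pin(toy_buffer *buf)
{
    bool contended = toy_s_lock(&buf->hdr_lock);

    buf->refcount++;
    if (contended)
        buf->refcount++;    /* the delayed pin, dropped lazily below */
    toy_s_unlock(&buf->hdr_lock);

    if (contended)
    {
        toy_buffer **slot = &delay_ring[delay_ring_next];

        if (*slot)
            toy_unpin(*slot);   /* evict the oldest delayed pin */
        *slot = buf;
        delay_ring_next = (delay_ring_next + 1) % DELAY_RING_LEN;
    }
}

Note this toy dodges the search-on-every-pin/unpin cost by giving each
ring slot its own extra reference, at the price of a hot buffer
possibly occupying several slots at once; whether that trick survives
contact with the real buffer pin bookkeeping is another question.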

merlin
