On 20.05.2013 16:59, Robert Haas wrote:
> On Fri, May 17, 2013 at 3:38 AM, Heikki Linnakangas
> <hlinnakan...@vmware.com> wrote:
>> If we could use the catchup interrupts to speed that up, though, that
>> would be much better. I think vacuum could simply send a catchup
>> interrupt, and wait until everyone has caught up. That would
>> significantly increase the traffic on the sinval queue and catchup
>> interrupts compared to what it is today, but I think it would still
>> be OK. It would still only be a few sinval messages and catchup
>> interrupts per truncation (i.e. per vacuum).

> Hmm.  So your proposal is to only send these sinval messages while a
> truncation is in progress, not any time the relation is extended?
> That would certainly be far more appealing from the point of view of
> not blowing out sinval.

Right.
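To sketch what I have in mind (WaitForSinvalCatchup is invented here;
CacheInvalidateRelcache, SICleanupQueue and RelationTruncate exist
today, though this is not how vacuum currently uses them, and locking
is glossed over):

	static void
	truncate_rel_with_catchup(Relation rel, BlockNumber new_rel_pages)
	{
		/* Queue an sinval message so everyone rereads the relation size. */
		CacheInvalidateRelcache(rel);

		/*
		 * Nudge the sinval machinery into signaling the furthest-behind
		 * backend now, instead of waiting until the queue is nearly
		 * full.  In reality we'd need a variant that signals
		 * unconditionally.
		 */
		SICleanupQueue(false, 0);

		/* Invented: block until every backend has read past our message. */
		WaitForSinvalCatchup();

		/* Now no scan can be relying on the old relation length. */
		RelationTruncate(rel, new_rel_pages);
	}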

> It shouldn't be difficult to restrict the set of backends that have
> to be signaled to those that have the relation open.  You could have
> a special kind of catchup signal that means "catch yourself up, but
> don't chain"

What does "chain" mean above?

> - and send that only to those backends returned by
> GetConflictingVirtualXIDs.

Yeah, that might be a worthwhile optimization.
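Something like this, perhaps, where PROCSIG_CATCHUP_NOCHAIN is a
made-up proc signal reason for the "don't chain" variant:

	VirtualTransactionId *vxids;

	/*
	 * Find backends that might still be relying on the old relation
	 * size.  InvalidTransactionId means "don't filter by xmin"; we'd
	 * really want to narrow this to backends with the relation open.
	 */
	vxids = GetConflictingVirtualXIDs(InvalidTransactionId, MyDatabaseId);

	while (VirtualTransactionIdIsValid(*vxids))
	{
		/* Send the non-chaining catchup signal to that backend. */
		(void) CancelVirtualTransaction(*vxids, PROCSIG_CATCHUP_NOCHAIN);
		vxids++;
	}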

> It might be better to disconnect this mechanism from sinval entirely.
> In other words, just stick a version number on your shared memory
> data structure and have everyone advertise the last version number
> they've seen via PGPROC.  The sinval message doesn't really seem
> necessary; it's basically just a no-op message to say "reread shared
> memory", and a plain old signal can carry that same message more
> efficiently.

Hmm. The sinval message makes sure that when a backend locks a
relation, it will see the latest value, because of the
AcceptInvalidationMessages call in LockRelation. If there is no sinval
message, you'd always need to check the shared memory area when you
lock a relation.
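To illustrate what that always-check would look like (all the names
here are made up, and I'm hand-waving over locking of the shared
area):

	typedef struct RelTruncationShmem
	{
		uint64		version;	/* bumped by every truncation */
		/* ... per-relation hard watermark entries ... */
	} RelTruncationShmem;

	/*
	 * Would have to be called from LockRelation(), where
	 * AcceptInvalidationMessages() is called today.
	 */
	static void
	CheckRelTruncationVersion(void)
	{
		uint64		v = relTruncShmem->version; /* needs a lock or atomics */

		if (v != MyProc->lastTruncVersion)	/* made-up PGPROC field */
		{
			/* Re-read the shared watermark entries (made up). */
			RereadRelTruncationEntries();

			/*
			 * Advertising the last seen version in PGPROC is what lets
			 * vacuum wait until everyone has caught up.
			 */
			MyProc->lastTruncVersion = v;
		}
	}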

> One other thought regarding point 6 from your original proposal.  If
> it's true that a scan could hold a pin on a buffer above the current
> hard watermark, which I think it is, then that means there's a scan
> in progress which is going to try to fetch pages beyond that also, up
> to whatever the end of file position was when the scan started.  I
> suppose that heap scans will need to be taught to check some
> backend-local flag before fetching each new block, to see if a hard
> watermark change might have intervened.  Is that what you have in
> mind?

Yes, the heap scan will need to check the locally cached high
watermark value every time it steps to a new page.
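In heapgettup() terms, where the scan advances "page" to the next
block, roughly like this (GetCachedHardWatermark is a made-up accessor
for the backend-local cache, which would be refreshed whenever we
process a catchup interrupt):

	/*
	 * Before reading the next block, check that it's still below the
	 * hard watermark; a concurrent truncation may have moved it down.
	 */
	if (page >= GetCachedHardWatermark(scan->rs_rd))
		finished = true;	/* treat it as end-of-relation */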

- Heikki

