On 28.06.2011 20:47, Kevin Grittner wrote:
> (3)  Heap tuples are locked so that updates or deletes by an
> overlapping transaction of the tuple which has been read can be
> detected as a rw-conflict.  Keep in mind that access for such a
> delete or update may not go through the same index on which the
> conflicting read occurred.  It might use a different index or a
> seqscan.  These may be promoted to page or heap relation locks to
> control the shared space used by predicate locks, but the concept is
> the same -- we're locking actual tuples read, not any gaps.

Ok, that's what I was missing. So the predicate locks on heap tuples are necessary. Thanks for explaining this again.
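To make the mechanism concrete, here is a toy model of what Kevin describes: readers record tuple-level predicate locks, a writer checks them to detect rw-conflicts, and tuple locks are promoted to a coarser page lock to bound shared lock space. This is a sketch only; the class name, threshold, and data layout are made up for illustration and bear no resemblance to PostgreSQL's actual lock table.

```python
# Toy sketch of SSI predicate locking (NOT PostgreSQL's real structures).
from collections import defaultdict

PROMOTE_THRESHOLD = 3  # per-page tuple locks before promotion (made-up value)

class PredicateLockTable:
    def __init__(self):
        # xid -> {"tuples": set of (page, line), "pages": set of pages}
        self.locks = defaultdict(lambda: {"tuples": set(), "pages": set()})

    def lock_tuple(self, xid, page, line):
        entry = self.locks[xid]
        if page in entry["pages"]:
            return  # already covered by a coarser page lock
        entry["tuples"].add((page, line))
        # Promote to a page lock once too many tuple locks pile up on a page.
        on_page = {t for t in entry["tuples"] if t[0] == page}
        if len(on_page) >= PROMOTE_THRESHOLD:
            entry["tuples"] -= on_page
            entry["pages"].add(page)

    def check_write(self, writer_xid, page, line):
        """Return xids whose recorded reads conflict with this write."""
        conflicts = []
        for xid, entry in self.locks.items():
            if xid == writer_xid:
                continue
            if (page, line) in entry["tuples"] or page in entry["pages"]:
                conflicts.append(xid)
        return conflicts
```

Note that `check_write` flags the conflict no matter how the writer reached the tuple (different index, seqscan), because the lock hangs off the heap tuple itself, which is the point of (3) above.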

> So, the heap relation lock is clearly needed for the seqscan.  There
> is room for performance improvement there in skipping the tuple lock
> attempt when we're in a seqscan, which will always be a no-op when it
> finds the heap relation lock after a hash table lookup.  But you are
> also questioning whether the predicate locking of the tuples where
> rs_relpredicatelocked is tested can be removed entirely, rather than
> conditioned on the boolean.  The question is: can the code be reached
> on something other than a seqscan of the heap, and can this happen
> for a non-temporary, non-system table using a MVCC snapshot?

> I've been trying to work backward to all the spots which call these
> functions, directly or indirectly, to determine that.  That's
> obviously not trivial work, and unless someone more familiar with
> the code than I am can weigh in on that question for any particular
> PredicateLockTuple() call, I would rather leave the calls alone for
> 9.1 and sort this out in 9.2.  I'm confident that they don't do any
> damage where they are; it's a matter of very marginal performance
> benefit (skipping a call to a fast return) and code tidiness (not
> making unnecessary calls).

Hmm, the calls in question are the ones in heapgettup() and heapgettup_pagemode(), which are subroutines of heap_getnext(). heap_getnext() is only used in sequential scans, so it seems safe to remove those calls.
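The fast-path skip Kevin mentions for the interim could be sketched like this (hypothetical, simplified control flow; not the actual heapam.c code, and the names besides rs_relpredicatelocked are invented):

```python
# Sketch: when a seqscan already holds a relation-level predicate lock
# (rs_relpredicatelocked), the per-tuple lock attempt is a guaranteed
# no-op and can be skipped before even hashing into the lock table.

def predicate_lock_tuple(scan, tuple_id, lock_table):
    if scan.get("rs_relpredicatelocked"):
        return "skipped"      # relation lock already covers every tuple
    lock_table.add(tuple_id)  # normal tuple-granularity predicate lock
    return "locked"
```

Removing the calls outright, as suggested above, goes one step further than this conditional skip.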

> (2) In reviewing the above, Heikki noticed that there was a second
> place in the executor where SSI calls were needed but missing.  I
> submitted a patch here:
>
> http://archives.postgresql.org/message-id/4e07550f020000250003e...@gw.wicourts.gov
>
> I wonder, though, whether the section of code which I needed to
> modify should be moved to a new function in heapam.c on modularity
> grounds.
>
> If these two places were moved, there would be no SSI calls from
> any source file in the executor subdirectory.

>> Same here, we might not need those PredicateLockTuple calls in
>> bitmap heap scan at all. Can you check my logic, and verify if
>> those PredicateLockTuple() calls are needed?
>
> These sure look like they are needed per point (3) above.

Yep.

> I would like to add a test involving a lossy bitmap scan.  How many
> rows are normally needed to force a bitmap scan to be lossy?

The size of bitmaps is controlled by work_mem, so you can set work_mem very small to cause them to become lossy earlier. Off the top of my head I don't have any guesstimate on how many rows you need.
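The trade-off is easiest to see in a toy model: a TID bitmap keeps exact per-page tuple offsets until a work_mem-style cap on tracked entries is exceeded, then degrades whole pages to "lossy" (page number only, so every tuple on the page must be rechecked). This is only a sketch of the shape of the mechanism; PostgreSQL's tbm_lossify differs in detail, and the cap and eviction policy here are invented.

```python
# Toy TID bitmap with lossification under a memory-style cap.
class TidBitmap:
    def __init__(self, max_entries):
        self.max_entries = max_entries  # stand-in for work_mem
        self.exact = {}       # page -> set of tuple offsets
        self.lossy = set()    # pages tracked without offsets

    def add(self, page, offset):
        if page in self.lossy:
            return  # page already lossy; the offset is implied
        self.exact.setdefault(page, set()).add(offset)
        if self._entries() > self.max_entries:
            self._lossify()

    def _entries(self):
        return sum(len(s) for s in self.exact.values())

    def _lossify(self):
        # Degrade the fullest page first to free the most entries.
        page = max(self.exact, key=lambda p: len(self.exact[p]))
        del self.exact[page]
        self.lossy.add(page)

    def is_lossy(self, page):
        return page in self.lossy
```

With a tiny cap, only a handful of rows spread across pages is enough to force lossy pages, which is why shrinking work_mem is the practical way to provoke them in a test.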

> What's the easiest way to check whether a plan is going to use (or
> is using) a lossy bitmap scan?

Good question. There doesn't seem to be anything in the EXPLAIN ANALYZE output to show that, so I think you'll have to resort to adding some elog()s in the right places.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
