On Mon, Jul 6, 2009 at 11:56 AM, Simon Riggs<si...@2ndquadrant.com> wrote:
> How will you cope with a large COPY? Surely there can be more than one
> concurrent insert from any backend?

He only needs to handle inserts during the window in which they're
actively being inserted into the index. Once they're in the index he'll
find them with an ordinary index scan. In other words this is all a
proxy for the way btree locks index pages while it checks for a
unique-key violation.
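(Rough standalone sketch of how I picture that working -- a shared slot
per backend advertising the TID it's currently inserting, checked by
everyone else before their own tuple becomes visible in the index. All
the names and the fixed-size layout are mine, not the patch's.)

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_BACKENDS 128

    /* One slot per backend: the heap TID of its in-flight insertion. */
    typedef struct
    {
        uint32_t block;     /* heap block number of the in-flight tuple */
        uint16_t offset;    /* line pointer within that block */
        bool     in_use;    /* set only while the insert is in progress */
    } InFlightSlot;

    static InFlightSlot in_flight[MAX_BACKENDS];

    /*
     * Scan every other backend's slot and test the remembered heap tuple
     * for a conflict.  Anything already committed and indexed is found by
     * a normal index scan instead, so the array only has to cover the
     * short window before the new tuple is visible in the index.
     */
    static bool
    conflicts_with_in_flight(int my_slot,
                             bool (*heap_conflict)(uint32_t blk, uint16_t off))
    {
        for (int i = 0; i < MAX_BACKENDS; i++)
        {
            if (i == my_slot || !in_flight[i].in_use)
                continue;
            if (heap_conflict(in_flight[i].block, in_flight[i].offset))
                return true;
        }
        return false;
    }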

I'm a bit concerned about the use of TIDs: you might have to visit a
lot of heap pages to check for conflicts, though I suppose they're
almost certainly all in shared buffers anyway. Also, it sounds like
you're anticipating the possibility of dead entries in the array; if
so, you would need to store the xmin as well, to protect against a
tuple that has been vacuumed and had its line pointer reused since.
But I don't see the need for that anyway, since you can just clean up
the entry on abort.
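(To spell out the vacuum hazard: a remembered TID only identifies a
tuple for as long as its line pointer isn't reused. A minimal
illustrative sketch, with hypothetical names -- not the patch's code:)

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical array entry carrying xmin alongside the TID. */
    typedef struct
    {
        uint32_t block;     /* heap block of the remembered tuple */
        uint16_t offset;    /* line pointer within that block */
        uint32_t xmin;      /* xid that inserted it, as stamped on the tuple */
    } TidWithXmin;

    /*
     * The entry is stale if the tuple now sitting at that line pointer
     * carries a different xmin, i.e. the original was vacuumed away and
     * the slot reused.  Removing the entry on abort makes this check
     * unnecessary, which is the point above.
     */
    static bool
    tid_entry_is_stale(const TidWithXmin *entry, uint32_t xmin_now_on_tuple)
    {
        return entry->xmin != xmin_now_on_tuple;
    }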


-- 
greg
http://mit.edu/~gsstark/resume.pdf
