On Sun, 2009-06-21 at 12:38 -0400, Tom Lane wrote:
> Greg Stark <gsst...@mit.edu> writes:
> > There was some discussion of doing this in general for all inserts
> > inside the indexam. The btree indexam could buffer up any inserts done
> > within the transaction and keep them in an in-memory btree. Any
> > lookups done within the transaction first look in the in-memory
> > tree, then on disk. If the in-memory buffer fills up, we flush
> > them to the index.
> 
> > The reason this is tempting is that we could then insert them all in a
> > single index-merge operation which would often be more efficient than
> > retail inserts.
> 
> That's not gonna work for a unique index, which unfortunately is a
> pretty common case ...

I think it can. If we fail on a unique index, we fail. We aren't
expecting that, else we wouldn't be using COPY. So I reckon it's
acceptable to load a whole block of rows and then load a whole block's
worth of index entries. The worst that can happen is we insert a few
extra heap rows that get aborted, which is a small cost compared to the
potential gains from buffering.
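
To make that concrete, here is a minimal standalone sketch of the batching
idea, not actual PostgreSQL code: entries for roughly one block's worth of
rows are buffered in memory and merged into a simulated unique index in a
single sorted pass, and the whole load fails if a duplicate key turns up at
flush time. All names (IndexEntry, buf_insert, buf_flush, ...) are invented
for illustration.

/*
 * Standalone sketch only -- not PostgreSQL code.  Index entries for one
 * block's worth of rows are buffered in memory and merged into a unique
 * index (simulated here as a sorted array) in a single pass; a duplicate
 * key is detected at flush time and fails the whole load.  All names
 * (IndexEntry, buf_insert, buf_flush, ...) are invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_CAPACITY    64              /* ~ one heap block's worth of rows */
#define INDEX_CAPACITY  1024

typedef struct { int key; int heap_tid; } IndexEntry;

static IndexEntry buffer[BUF_CAPACITY];        /* in-memory batch */
static int        buf_used = 0;
static IndexEntry index_data[INDEX_CAPACITY];  /* the "on-disk" unique index */
static int        index_used = 0;

static int cmp_entry(const void *a, const void *b)
{
    return ((const IndexEntry *) a)->key - ((const IndexEntry *) b)->key;
}

/*
 * Merge the buffered entries into the index in one sorted pass.  Returns -1
 * on a unique-key violation; the caller then aborts the load, and any heap
 * rows already written are simply aborted with it.
 */
static int buf_flush(void)
{
    IndexEntry merged[INDEX_CAPACITY + BUF_CAPACITY];
    int i = 0, j, k = 0;

    qsort(buffer, buf_used, sizeof(IndexEntry), cmp_entry);
    for (j = 1; j < buf_used; j++)             /* duplicates inside the batch */
        if (buffer[j].key == buffer[j - 1].key)
            return -1;

    for (j = 0; i < index_used && j < buf_used; )
    {
        if (index_data[i].key == buffer[j].key)
            return -1;                         /* duplicate against the index */
        merged[k++] = (index_data[i].key < buffer[j].key)
            ? index_data[i++] : buffer[j++];
    }
    while (i < index_used) merged[k++] = index_data[i++];
    while (j < buf_used)   merged[k++] = buffer[j++];

    memcpy(index_data, merged, k * sizeof(IndexEntry));
    index_used = k;
    buf_used = 0;
    return 0;
}

/* Buffer one index entry; flush once a block's worth has accumulated. */
static int buf_insert(int key, int heap_tid)
{
    buffer[buf_used].key = key;
    buffer[buf_used].heap_tid = heap_tid;
    return (++buf_used == BUF_CAPACITY) ? buf_flush() : 0;
}

int main(void)
{
    int tid;

    /* 200 distinct keys (7*tid mod 211, 211 prime), so the load succeeds */
    for (tid = 0; tid < 200; tid++)
        if (buf_insert(tid * 7 % 211, tid) != 0)
        {
            fprintf(stderr, "unique violation, aborting load\n");
            return 1;
        }
    if (buf_flush() != 0)
    {
        fprintf(stderr, "unique violation, aborting load\n");
        return 1;
    }
    printf("index now holds %d entries\n", index_used);
    return 0;
}

The in-transaction lookup path Greg describes (check the in-memory buffer
first, then the real index) is left out; the point is only that the
uniqueness check can be deferred to the batched flush without changing the
failure semantics.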

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support

