Robert Haas <robertmh...@gmail.com> writes:
> On Thu, Oct 8, 2009 at 11:50 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I wonder whether we could break down COPY into sub-sub
>> transactions to work around that...

> How would that work?  Don't you still need to increment the command counter?

Actually, the command counter doesn't help, because incrementing the CC
doesn't give you a rollback boundary between the rows inserted before it
and those inserted afterwards.  What I was vaguely imagining was

        -- outer transaction for whole COPY

        -- sub-transactions that are children of outer transaction

        -- sub-sub-transactions that are children of sub-transactions

You'd eat a sub-sub-transaction per row, and start a new sub-transaction
every 2^32 rows.
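
For concreteness, a self-contained toy of that control flow in C (the
begin/commit/abort functions here are stand-ins, not the real xact.c
entry points, and ROWS_PER_SUBXACT is shrunk from 2^32 so the rollover
is visible in a short run):

#include <stdint.h>
#include <stdio.h>

#define ROWS_PER_SUBXACT 4      /* really UINT64_C(1) << 32 */
#define NROWS            10

static void begin_subxact(void)     { printf("begin subxact\n"); }
static void commit_subxact(void)    { printf("commit subxact\n"); }
static void begin_subsubxact(void)  { }     /* one XID per call */
static void commit_subsubxact(void) { }
static void abort_subsubxact(void)  { }

/* stand-in for inserting one row; fails on row 6 for the demo */
static int insert_row(uint64_t row) { return row != 6; }

int main(void)
{
    /* the outer transaction for the whole COPY is assumed open */
    for (uint64_t row = 0; row < NROWS; row++)
    {
        if (row % ROWS_PER_SUBXACT == 0)
        {
            if (row > 0)
                commit_subxact();
            begin_subxact();        /* new child of the outer xact */
        }
        begin_subsubxact();         /* child of the current subxact */
        if (insert_row(row))
            commit_subsubxact();
        else
            abort_subsubxact();     /* rolls back just this one row */
    }
    commit_subxact();
    return 0;
}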

However, on second thought this really doesn't get us anywhere; it just
moves the 2^32 restriction somewhere else.  Once the outer transaction
gets to be more than 2^31 XIDs old, the database is going to shut down
to prevent XID wraparound.

So really we have to find some way to only expend one XID per failure,
not one per row.

Another approach that was discussed earlier was to divvy the rows into
batches.  Say every thousand rows you sub-commit and start a new
subtransaction.  Up to that point you save aside the good rows somewhere
(maybe a tuplestore).  If you get a failure partway through a batch,
you start a new subtransaction and re-insert the batch's rows up to the
bad row.  This could be pretty awful in the worst case, but most of the
time it'd probably perform well.  You could imagine dynamically adapting
the batch size depending on how often errors occur ...
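
A self-contained sketch of that loop, again with stand-in functions
rather than the real subtransaction and tuplestore calls (the "saved"
array plays the tuplestore's role, and insert_row() is made to fail on
a couple of rows so the replay path actually runs):

#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 1000
#define NROWS      10000

static int xids_used = 0;

static void subxact_begin(void)  { xids_used++; }  /* each one burns an XID */
static void subxact_commit(void) { }
static void subxact_abort(void)  { }

/* stand-in for inserting one row; fails on rows 3456 and 6912 */
static bool insert_row(int row)  { return row == 0 || row % 3456 != 0; }

int main(void)
{
    int saved[BATCH_SIZE];      /* good rows so far in this batch */
    int nsaved = 0;
    int errors = 0;

    subxact_begin();
    for (int row = 0; row < NROWS; row++)
    {
        if (insert_row(row))
        {
            saved[nsaved++] = row;      /* remember for possible replay */
        }
        else
        {
            /* bad row: roll back the batch, replay the good rows */
            subxact_abort();
            subxact_begin();
            for (int i = 0; i < nsaved; i++)
                insert_row(saved[i]);   /* known good, can't fail */
            errors++;
        }

        if (nsaved == BATCH_SIZE)       /* batch full: sub-commit */
        {
            subxact_commit();
            subxact_begin();
            nsaved = 0;
        }
    }
    subxact_commit();

    printf("%d rows, %d bad, %d XIDs consumed\n", NROWS, errors, xids_used);
    return 0;
}

That consumes one XID per batch plus one per bad row, rather than one
per row, which satisfies the constraint above.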

                        regards, tom lane
