"Joshua D. Drake" <j...@commandprompt.com> writes:
> Couldn't you just commit each range of subtransactions based on some
> threshold?
> COPY foo from '/tmp/bar/' COMMIT_THRESHOLD 1000000;
> It counts to 1mil, commits starts a new transaction. Yes there would be
> 1million sub transactions but once it hits those clean, it commits.

Hmm, if we were willing to break COPY into multiple *top level*
transactions, that would avoid my concern about XID wraparound.

The issue here is that if the COPY does eventually fail (and there will
always be failure conditions, eg out of disk space), then some of the
previously entered rows would still be there; but possibly not all of
them, depending on whether we batch rows.  The latter property actually
bothers me more than the former, because it would expose an
implementation detail to the user.  Thoughts?

Also, this does not work if you want the copy to be part of a bigger
transaction, viz

	BEGIN;
	do something;
	COPY ...;
	do something else;
	COMMIT;

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
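[Editorial sketch] The partial-failure semantics under discussion can be modeled in miniature. This is a toy Python simulation, not PostgreSQL code: `copy_with_threshold`, its parameters, and the failure injection are all invented for illustration. It shows the point about exposing an implementation detail: exactly which rows survive a mid-COPY failure depends on the batch size chosen internally.

```python
# Toy model (not PostgreSQL internals): a COPY that commits every
# `threshold` rows.  On failure, rows in already-committed batches
# survive; rows in the still-open batch are rolled back.
def copy_with_threshold(rows, threshold, fail_at=None):
    table = []    # rows durably committed
    pending = []  # rows in the current open transaction
    for i, row in enumerate(rows):
        if fail_at is not None and i == fail_at:
            # Failure mid-COPY: the open batch is discarded (rollback),
            # but earlier committed batches remain visible.
            return table
        pending.append(row)
        if len(pending) == threshold:
            table.extend(pending)  # "COMMIT" this batch
            pending = []
    table.extend(pending)  # final partial batch commits on success
    return table

# Failure at row 7 with threshold 3: two full batches were committed,
# so rows 0-5 survive while row 6 (in the open batch) is lost.
print(copy_with_threshold(list(range(10)), 3, fail_at=7))
```

Note how changing `threshold` changes which rows survive the same failure, which is the implementation detail a user would observe.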