On Mon, Apr 10, 2017 at 2:46 PM, Alexey Kondratov
<kondratov.alek...@gmail.com> wrote:
> Yes, sure, I don't doubt it. The question was around step 4 in the following 
> possible algorithm:
>
> 1. Suppose we have to insert N records
> 2. Start a subtransaction to insert these N records
> 3. An error is raised on the k-th line
> 4. Then we know that we can safely insert all lines from the 1st to the (k - 1)th
> 5. Report the k-th line, save it to an errors table, or silently drop it
> 6. Next, try to insert lines (k + 1) through N with another subtransaction
> 7. Repeat until the end of the file
>
> One can start a subtransaction with those (k - 1) safe lines and repeat this
> after each error line

I don't understand what you mean by that.

> OR
> iterate to the end of the file and start only one subtransaction with all
> lines except the error lines.

That could involve buffering a huge file.  Imagine a 300GB load.
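
For concreteness, a rough client-side sketch of the batched-subtransaction
scheme being discussed might look like the following.  This is purely
illustrative, not the proposed patch: the table, columns, batch size, and
the use of psycopg2 savepoints are all assumptions.

import psycopg2

BATCH_SIZE = 1000  # the "k" below: lines per subtransaction

def load_with_error_skipping(conn, rows):
    """Insert rows in batches of BATCH_SIZE, each under its own savepoint.

    A batch that errors is rolled back and redone one row at a time, so a
    single bad line costs one aborted subtransaction plus re-inserting up
    to BATCH_SIZE - 1 lines that were already known to be good.
    """
    cur = conn.cursor()
    bad_rows = []

    def flush(batch):
        if not batch:
            return
        cur.execute("SAVEPOINT batch")
        try:
            cur.executemany("INSERT INTO target VALUES (%s, %s)", batch)
            cur.execute("RELEASE SAVEPOINT batch")
        except psycopg2.Error:
            cur.execute("ROLLBACK TO SAVEPOINT batch")
            for row in batch:                    # redo the batch row by row
                cur.execute("SAVEPOINT one_row")
                try:
                    cur.execute("INSERT INTO target VALUES (%s, %s)", row)
                    cur.execute("RELEASE SAVEPOINT one_row")
                except psycopg2.Error:
                    cur.execute("ROLLBACK TO SAVEPOINT one_row")
                    bad_rows.append(row)         # report / drop the bad line

    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            flush(batch)
            batch = []
    flush(batch)
    conn.commit()
    return bad_rows

The row-by-row retry path is what makes the cost of an error proportional
to the batch size, which is exactly the trade-off below.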

Also consider how many XIDs any proposed design will blow through when
loading 300GB of data.  There's a nasty trade-off here between XID
consumption (and the aggressive vacuums it eventually causes) and
preserving performance in the face of errors - e.g. if you commit a
subtransaction every k = 100,000 lines, you consume 100x fewer XIDs
than with k = 1,000, but you also have 100x as much work to redo (on
average) every time you hit an error.  If the data quality is poor
(say, 50% of lines have errors), it's almost impossible to avoid
runaway XID consumption.
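
To put rough numbers on that (assumed figures only: ~100 bytes per line,
so a 300GB load is ~3 billion lines; one XID per committed batch of k
lines, plus roughly one aborted-and-retried subtransaction per bad line):

def estimate(total_bytes, bytes_per_line, k, error_rate):
    lines = total_bytes // bytes_per_line
    errors = int(lines * error_rate)
    xids = lines // k + errors            # committed batches + retries
    redone_lines = errors * (k // 2)      # ~k/2 good lines redone per error
    return xids, redone_lines

# 300GB at ~100 bytes/line, for two batch sizes and two error rates
for k in (1_000, 100_000):
    for error_rate in (1e-6, 0.5):
        xids, redone = estimate(300 * 10**9, 100, k, error_rate)
        print(f"k={k:>7,}  error rate={error_rate:g}  "
              f"-> ~{xids:,} XIDs, ~{redone:,} lines redone")

Under those assumptions, at a low error rate the k = 100,000 case uses
about 100x fewer XIDs but redoes about 100x as many lines per error,
while at a 50% error rate the ~1.5 billion aborted subtransactions
dominate regardless of k - a large fraction of the 2^31 XID wraparound
horizon in a single load.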

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

