Dear Alban,

first of all, thank you very much for your reply.

>> On 19 Sep 2017, at 15:47, Job <j...@colliniconsulting.it> wrote:
>>
>> Hi guys,
>>
>> we need to insert a large amount of data (about 10-20 million rows) from one
>> table into another (PostgreSQL 9.6.1) without locking the destination table.
>> Pg_bulkload is the fastest way but it locks the table.
>>
>> Are there other ways?
>> Classic "COPY" from?

>We do something like that using a staging table to load to initially (although 
>not bulk; data arrives in our staging table with batches of 5k to 100k rows) 
>and then we transfer the data using insert/select and "on conflict do".
>That data-transfer within PG takes a couple of minutes on our rather limited 
>VM for a wide 37M rows table (~37GB on disk). That only locks the staging 
>table (during initial bulkload) and the rows in the master table that are 
>currently being altered (during the insert/select).
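
If I understand your transfer step correctly, it is roughly something like the
following (table and column names are only placeholders, and it assumes a unique
constraint on "id" in the master table):

    -- Move rows from the staging table into the master table,
    -- letting ON CONFLICT resolve rows that already exist.
    INSERT INTO master_table (id, payload, updated_at)
    SELECT id, payload, updated_at
    FROM   staging_table
    ON CONFLICT (id) DO UPDATE
        SET payload    = EXCLUDED.payload,
            updated_at = EXCLUDED.updated_at;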

We use a "temporary" table, populated by pg_bulkload - it takes few minutes in 
this first step.
Then, from the temporary table, datas are transferred by a trigger that copy 
the record into the production table.
But *this step* takes really lots of time (sometimes also few hours).
There are about 10 millions of record.
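
In case it helps to picture it, the trigger is conceptually something like this
(names and the function body are simplified placeholders, not our real code, and
it assumes both tables share the same column layout):

    -- Simplified placeholder for the per-row copy trigger.
    CREATE OR REPLACE FUNCTION copy_to_production() RETURNS trigger AS $$
    BEGIN
        INSERT INTO production_table VALUES (NEW.*);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_copy_to_production
    AFTER INSERT ON temporary_table
    FOR EACH ROW EXECUTE PROCEDURE copy_to_production();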

We cannot use pg_bulkload to load the data directly into the production table,
since pg_bulkload would lock the whole table, and the "COPY" command is slow and
is not aware of the table partitioning (COPY simply fires the partitioned
table's triggers).
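
Just to be explicit about what I mean by plain COPY (the file path and table
name below are placeholders): every row loaded this way still passes through
the partitioned table's BEFORE INSERT routing trigger, one row at a time.

    COPY production_table FROM '/tmp/batch.csv' WITH (FORMAT csv);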

Thank you for the help!

F

