On Thu, May 24, 2007 at 05:34:00PM +0200, Célestin HELLEU wrote:
> Well, with any database, if I had to insert 20 000 000 records in a table, I 
> wouldn't do it in one transaction; it makes a very big intermediate file, and 
> the commit at the end is really heavy.
> I would cut the transaction into mini-transactions of, let's say, 1000 records.

Postgres does not create an intermediate file, and the cost of commit is
independent of the number of statements within the transaction.
Postgres uses a form of MVCC, which means rollback carries some cost,
but commit is very cheap.

I believe your costs come down to the fact that there are 20 000 000
statements. There is a cost per statement, so if you can write your
function to issue fewer statements, you're better off...

> FOR all IN (select * from TABLE1)
> LOOP
>       FOR some IN (select * from)
>       LOOP
>               INSERT INTO TABLE2 VALUES (all.id, some.id);
>       END LOOP;
> END LOOP;

I'd replace the whole loop with a single INSERT statement:

INSERT INTO TABLE2 SELECT all.id, some.id FROM all, some WHERE...
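As a rough sketch with the blanks filled in -- the column names, the second
table, and the join condition here are guesses, since your inner SELECT was
cut off, so adjust them to your actual schema (note also that "all" and
"some" are SQL keywords, so they'd need quoting or renaming as aliases):

```sql
-- Hypothetical set-based rewrite of the nested loop: one statement
-- instead of ~20 000 000, so the per-statement overhead is paid once.
INSERT INTO TABLE2 (id1, id2)
SELECT t1.id, t2.id
FROM TABLE1 t1
JOIN SOME_TABLE t2 ON t2.parent_id = t1.id;  -- hypothetical join condition
```

The planner can then pick a join strategy and stream the rows straight into
TABLE2, which is almost always faster than issuing one INSERT per row from
PL/pgSQL.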

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.
