Rafal Pietrak wrote:

A plain INSERT of the batch takes 5-10 minutes on desktop PostgreSQL (800MHz
machine, ATA disks). When I attach a trigger (a *very* simple function) to
update the accounts, the INSERT takes hours (2-4). But when I make just
one single update of all accounts at the end of the batch insert, it
takes 20-30 min.

Why not have the INSERT go to an "inbox" table, a table whose only job is to receive the data for future processing?

Your client code should mark all rows with a batch number as they go in. Then, when the batch is loaded, simply invoke a stored procedure to process them, passing it the batch number.

IOW, have your "background trigger" be a stored procedure that is invoked by the client, instead of trying to get the server to do it.
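
A rough sketch of what I mean (table and column names here are just placeholders, assuming an accounts table with a balance column):

-- Inbox table: the batch INSERT goes here, with no trigger attached.
CREATE TABLE inbox (
    batch_no  integer NOT NULL,
    acct_id   integer NOT NULL,
    amount    numeric NOT NULL
);

-- Stored procedure the client calls once after the batch is loaded;
-- it folds the whole batch into accounts with one set-based UPDATE.
CREATE FUNCTION process_batch(p_batch integer) RETURNS void AS $$
BEGIN
    UPDATE accounts a
       SET balance = a.balance + s.total
      FROM (SELECT acct_id, sum(amount) AS total
              FROM inbox
             WHERE batch_no = p_batch
             GROUP BY acct_id) s
     WHERE a.acct_id = s.acct_id;

    -- Clear the processed rows so the inbox stays small.
    DELETE FROM inbox WHERE batch_no = p_batch;
END;
$$ LANGUAGE plpgsql;

-- Client side: load the rows tagged with the batch number, then
-- SELECT process_batch(42);

You get the fast plain INSERT, and the accounts update runs as a single statement per batch instead of once per row.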
