Slavisa Garic wrote:
> Using pg module in python I am trying to run the COPY command to populate
> the large table. I am using this to replace the INSERT which takes about
> few hours to add 70000 entries where copy takes minute and a half. 

That difference in speed seems quite large.  Too large.  Are you batching
your INSERTs into transactions?  You should be, in order to get good
performance.  Do you have a lot of indexes on the table?  Does it have
triggers or anything similar on it?  If so, COPY may well end up doing
the wrong thing, since the triggers won't fire for the rows it inserts.
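As a sketch of the batching idea: the helpers below chunk rows into
transaction-sized batches and format rows as tab-separated COPY input.
They are plain Python and illustrative only; the commented-out database
calls assume the psycopg2 API rather than the `pg` module from the
original post, and `mytable` is a hypothetical table name.

```python
def chunked(rows, batch_size):
    """Yield successive batches of rows, so each batch can be one transaction."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def copy_text(rows):
    """Format rows as tab-separated text suitable for COPY ... FROM STDIN."""
    def esc(v):
        if v is None:
            return r"\N"  # COPY's default NULL marker
        # Escape backslash first, then tab and newline, per COPY text format.
        return (str(v).replace("\\", "\\\\")
                      .replace("\t", "\\t")
                      .replace("\n", "\\n"))
    return "".join("\t".join(esc(v) for v in row) + "\n" for row in rows)

# With a live connection (assumed psycopg2 API, not from the original post):
# import io, psycopg2
# conn = psycopg2.connect("dbname=test")
# for batch in chunked(rows, 10000):
#     with conn, conn.cursor() as cur:      # one transaction per batch
#         cur.copy_from(io.StringIO(copy_text(batch)), "mytable")
```

The key point either way is to avoid one transaction per row: commit
overhead, not the INSERT itself, is usually what makes row-at-a-time
loading so slow.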

I don't know what kind of schema you're using, but on my system it takes
perhaps a couple of hours to insert 2.5 million rows.  The rows in my
schema may be much smaller than yours, though.


-- 
Kevin Brown                                           [EMAIL PROTECTED]

